How we test AI trading bots and brokers
Every platform on this site is reviewed using the same 6-month live-funded testing protocol. This page documents that protocol in full: the trading volume, the data we collect, the regulators we cross-check against, and the 8-criteria scoring rubric that decides whether a review gets a positive recommendation or a "skip it."
The 6-month live testing protocol
Every bot or broker we recommend has gone through three sequential phases of live testing. We do not pass platforms through faster than this — if a partner asks us to publish a "Day 30 first impressions" review, we say no. Six months of live exposure is the minimum cycle that surfaces the things that matter: stop-loss execution under stress, withdrawal friction, and whether spreads widen during the U.S. open or only during the Asian session.
Phase 1 — Account setup (Week 1)
- Open a real funded account with the minimum deposit required by the platform (no demo accounts).
- Document the KYC/AML compliance process, including ID verification, address verification, and source-of-funds checks.
- Record account opening timeline from signup to first available trade.
- Capture any onboarding friction: rejected documents, unexplained delays, or redirection to a different legal entity.
Phase 2 — Active trading (Months 1-5)
- Execute a minimum of 200 trades across multiple asset classes (FX, equities/CFDs, indices, crypto where applicable).
- Record effective spreads (not advertised spreads) during peak and off-peak hours, sampled across the London, New York, and Asian sessions.
- Measure execution speed and slippage on market orders at multiple position sizes.
- Test limit orders, stop losses, and trailing stops under normal and high-volatility conditions.
- Monitor platform stability during macro events (CPI prints, FOMC, NFP) and at session opens.
- For AI bots specifically: log every algorithm decision against the bot's stated strategy and flag any deviations.
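The spread and slippage logging in Phase 2 can be sketched in a few lines. This is an illustrative model only: the class and function names are ours, not an internal tool, and real logs would also carry timestamps and instrument identifiers.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class TradeSample:
    session: str          # "london" | "new_york" | "asian"
    bid: float            # best bid at order time
    ask: float            # best ask at order time
    requested_price: float
    fill_price: float

def effective_spread(s: TradeSample) -> float:
    """Spread actually paid at order time, not the advertised figure."""
    return s.ask - s.bid

def slippage(s: TradeSample) -> float:
    """Signed gap between the price we asked for and the price we got."""
    return s.fill_price - s.requested_price

def session_medians(samples: list[TradeSample]) -> dict[str, float]:
    """Median effective spread per trading session, as used in comparison tables."""
    by_session: dict[str, list[float]] = {}
    for s in samples:
        by_session.setdefault(s.session, []).append(effective_spread(s))
    return {k: median(v) for k, v in by_session.items()}
```

Sampling the same instrument across all three sessions is what lets us say whether spreads widen at the U.S. open specifically, rather than quoting a single blended number.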
Phase 3 — Withdrawals and support (Month 6)
- Submit a minimum of 3 withdrawal requests via different methods (bank wire, card, e-wallet where available).
- Document withdrawal processing times, intermediary fees, and currency conversion spreads.
- Contact customer support via every available channel (live chat, email, phone) with both routine and edge-case questions.
- Record response time, resolution quality, and whether the platform escalates correctly.
Scoring criteria
Each platform is rated on 8 standardized criteria. Weights add to 100% and have been chosen so that "safety + cost" together account for 40% of the score — the two areas where retail traders are most often misled.
| Category | Weight | What we measure |
|---|---|---|
| Regulation & Safety | 20% | Licensing tier, segregated client funds, compensation schemes (FSCS, ICF, SIPC), parent entity transparency. |
| Trading Costs | 20% | Effective spreads, commissions, overnight financing, inactivity fees, currency conversion spreads, withdrawal fees. |
| Platform & Tools | 15% | Charting capability, available order types, mobile app stability, API access, integrations. |
| Execution Quality | 15% | Order fill speed, slippage distribution, requote frequency, behavior during high-volatility events. |
| Account Types | 10% | Minimum deposits, leverage options, available account variations (Islamic, professional, joint). |
| Research & Education | 10% | Quality of market analysis, depth of tutorials, webinar programming, tool quality. |
| Customer Support | 5% | Response time per channel, resolution quality, language coverage, weekend availability. |
| Withdrawal Experience | 5% | Processing time, available methods, hidden fees, friction on first withdrawal vs. subsequent ones. |
A platform must score at least 75/100 for its review to be auto-published. Scores between 55 and 74 are sent to a human review queue. Scores below 55 are rejected and never go live. Our target rejection rate is ~35% — this is intentional. A high rejection rate is a feature, not a bug: it means the bar to appear on this site is meaningfully higher than "we ran an AI rewrite of a press release."
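The weighting and publication thresholds above reduce to a short calculation. A minimal sketch — the criterion keys and function names are our own shorthand, not an internal system:

```python
# Published weights from the rubric above; they sum to 100.
WEIGHTS = {
    "regulation_safety": 20,
    "trading_costs": 20,
    "platform_tools": 15,
    "execution_quality": 15,
    "account_types": 10,
    "research_education": 10,
    "customer_support": 5,
    "withdrawal_experience": 5,
}

def composite_score(subscores: dict[str, float]) -> float:
    """Weighted average of per-criterion subscores, each on a 0-100 scale."""
    assert set(subscores) == set(WEIGHTS), "every criterion must be scored"
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS) / 100

def route(score: float) -> str:
    """Publication decision per the thresholds above."""
    if score >= 75:
        return "auto_publish"
    if score >= 55:
        return "human_review"
    return "reject"
```

Note that because Regulation & Safety and Trading Costs together carry 40 of the 100 weight points, a platform that fails badly on those two cannot reach the 75-point auto-publish bar even with perfect marks everywhere else.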
Data sources
Our reviews draw on four data layers, listed here in descending order of weight:
- Primary (test data). Our own 6-month live-funded test logs. These are the numbers that appear in every comparison table on the site.
- Regulatory (primary registers). FCA (UK), ASIC (Australia), CySEC (Cyprus), SEC and FINRA (US), MAS (Singapore), BaFin (Germany) and other jurisdiction-specific registers. We verify the license number directly — not the broker's claim about it.
- Community sentiment. Trustpilot reviews (filtered for verified purchases), Reddit (r/Forex, r/Trading, r/algotrading) over a rolling 90-day window, and broker forum discussions. We use this to triangulate against our own test experience.
- Official documentation. The platform's own legal terms, fee schedules, and KYC documentation. Used only to cite obligations we then verify against live experience.
What we deliberately do not do
- Demo-account reviews. Demo spreads and execution are not the same as live spreads and execution — in some cases they differ by an order of magnitude. We do not publish reviews based on demo testing.
- Press-release rewriting. A platform's marketing copy is never used as the basis for a review. It can be cited as a source of claims that we then verify or refute.
- Sponsored reviews. No platform pays for a review slot or a specific score. Affiliate partnerships do not buy ranking position. See Editorial Policy for our independence rules.
- Anonymous reviews. Every review is attributed to a named human analyst with credentials disclosed on our team page.
How we handle conflicts of interest
Broker Tested Reviews earns revenue from affiliate partnerships, including Zephyr AI. Where a review covers a platform that is also an affiliate partner, we disclose the relationship at the top of the review and reiterate it inline with every link. The scoring rubric is applied identically to affiliate and non-affiliate platforms — the critic pass that scores a draft does not know which platforms are partners.
We have published negative reviews of platforms whose affiliate programs we have access to. We will keep doing this. If you find a review that reads as suspiciously positive, email us directly at contact@brokertestedreviews.com and we will publish a re-test.
How often we re-test
Markets and platforms change. We re-test every covered platform at least once per year, and immediately on any of the following triggers:
- Regulatory action against the platform (suspension, fine, license revocation).
- Material change in fee structure, leverage, or account types.
- Documented withdrawal issues in our reader inbox or in our community sentiment data.
- Ownership change or rebrand.
The "Last tested" date at the top of every review reflects the most recent full re-test. The "Last updated" date reflects any factual correction or addition since.
Reporting an error or a missed test
We treat factual errors as a serious problem and publish a correction with a dated changelog at the bottom of the affected article. To report one, email contact@brokertestedreviews.com with the article URL, the specific claim, and your supporting evidence. We aim to publish the correction within 5 business days.