How we research, write, and quality-check every review.
Broker Tested Reviews is committed to accurate, unbiased, and transparent trading platform reviews. This page documents how every article on the site is researched, written, reviewed, and (when necessary) corrected.
Our review pipeline
Every article on the site passes through a sequential pipeline of independent steps. No article skips a step, and no human can override the critic pass without leaving a written justification in the editorial log.
- Source-grounded research. We pull primary source data: regulator registers, broker fee schedules, our own live test logs, and community sentiment from Trustpilot and Reddit. AI is used as a research assistant, not as a generator of unverified claims.
- First draft (writer). A draft is generated with the source-grounded data attached, structured to include data tables, citations, and first-person experience language drawn from real test logs.
- Critic pass (independent). A separate model scores the draft on five weighted dimensions (see scoring rubric below). It checks for hallucinated claims, unsupported numbers, and missing citations.
- Lead magnet selection. The most relevant reader resource is attached to each article (checklist, fee spreadsheet, risk template, or quiz).
- Quality gate. The article is either auto-published (score ≥75), sent to a human review queue (55-74), or rejected (<55). See thresholds below.
- Human review (when triggered). Drafts in the 55-74 range are reviewed by Alex Rivera, who can either approve the article with edits, send it back to the writer with notes, or reject it outright.
- Publication + monitoring. Once live, the article is monitored for reader error reports, regulatory changes affecting the platform, and any drift between the article's claims and our ongoing test data.
Quality scoring
Every article is scored on a 100-point scale by an independent quality review pass. The weights are chosen so that "trustworthiness" signals (E-E-A-T + accuracy) account for 65% of the total — with the explicit goal of penalizing articles that read as polished but fail to ground their claims.
| Criterion | Weight | What it measures |
|---|---|---|
| E-E-A-T strength | 35% | First-hand experience language, author credentials cited, methodology link present, named team accountability. |
| Factual accuracy | 30% | Citation quality, regulator-verified data, fee numbers traceable to a primary source, no hallucinated claims. |
| Originality / Information Gain | 20% | Original data tables, unique insights not in the source material, comparison context that a reader cannot easily find elsewhere. |
| Helpfulness | 10% | Actionable for a real trader, clear "who this is for / who this is not for", honest pros and cons. |
| YMYL compliance | 5% | Disclaimers present, no direct financial advice, risk warning visible, affiliate relationships disclosed inline. |
Articles scoring 75 or above are auto-published. Articles scoring 55-74 go to a human review queue and are either approved with edits, sent back, or rejected. Articles scoring below 55 are rejected outright and do not appear on the site. Our target rejection rate is ~35%.
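The rubric and gate thresholds above can be expressed compactly. This is a hypothetical Python sketch, not our production code: the function names and data shapes are illustrative, and it assumes each dimension is scored on the same 0-100 scale before weighting.

```python
# Illustrative sketch of the weighted rubric and quality gate described on
# this page. Weights and thresholds come from the table above; everything
# else (names, structure) is hypothetical.

WEIGHTS = {
    "eeat": 0.35,         # E-E-A-T strength
    "accuracy": 0.30,     # Factual accuracy
    "originality": 0.20,  # Originality / information gain
    "helpfulness": 0.10,  # Helpfulness
    "ymyl": 0.05,         # YMYL compliance
}

def composite_score(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0-100) into one weighted total."""
    return sum(WEIGHTS[dim] * dimension_scores[dim] for dim in WEIGHTS)

def gate(score: float) -> str:
    """Route an article based on its composite score."""
    if score >= 75:
        return "auto-publish"
    if score >= 55:
        return "human-review"
    return "reject"
```

For example, an article scoring well on trust signals but weakly on originality might come out as `gate(composite_score({"eeat": 85, "accuracy": 80, "originality": 40, "helpfulness": 70, "ymyl": 90}))`, landing in the human-review queue.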
Author standards
All published content is reviewed by Alex Rivera, CFA, our lead analyst. Alex's qualifications are publicly documented and linked from the byline on every article:
- 12+ years of hands-on trading platform testing (2014-2026).
- Chartered Financial Analyst (CFA) charterholder.
- Former proprietary trader (equity and FX markets).
- 50+ platforms tested with funded accounts (2020-2026).
Where reviews include contributions from other analysts on the team, those contributors are credited inline. Anonymous reviews are not published.
Corrections policy
We treat factual errors as a serious problem, not as a PR risk to be managed. When a reader reports an inaccuracy, we:
- Acknowledge the report within 2 business days.
- Investigate against primary sources (regulator register, our own test logs, official documentation).
- Either publish a correction with a dated changelog at the bottom of the article, or explain in writing why the original text is supported by the evidence.
- Where a correction materially changes a recommendation, we re-score the article and update the headline rating accordingly.
Reports can be sent to contact@brokertestedreviews.com with the article URL, the specific claim, and your supporting evidence. We do not require readers to provide their real name to submit a correction.
Independence and conflicts of interest
Broker Tested Reviews earns revenue through affiliate partnerships. To prevent revenue from influencing editorial, we maintain a written set of independence rules:
- Affiliate relationships never influence scores or rankings. The critic pass that scores a draft does not have access to the list of affiliate partners. The scoring rubric is applied identically to every platform.
- Every affiliate link is disclosed inline. Each CTA button using an affiliate link shows an "Affiliate partnership" label. The article-level disclosure is visible above the fold.
- Quality scoring runs before monetization. An article is scored, gated, and approved for publication before any affiliate or lead-capture logic runs. If a draft is rejected, the monetization layer never sees it.
- We publish negative reviews of platforms whose affiliate programs we have access to. If we did not do this, the policy above would be meaningless.
- No "review trade" arrangements. We do not accept "free review in exchange for traffic" deals. We do not accept gifts, expense-paid trips, or comped accounts.
Current affiliate disclosures
As of this policy's last update, our largest current affiliate partnership is with Zephyr AI, a broker-matching service. Zephyr CTAs appear in multiple placements across the site (hero, banner, sidebar, post-table, and sticky mobile bar). These placements do not affect the score of any specific platform that Zephyr matches users to.
How we use AI in the editorial pipeline
AI is part of how we produce reviews. Being honest about this is important: we use AI as research and drafting assistance, not as a substitute for human verification. Specifically:
- Research data extraction (regulator pages, broker fee schedules) is automated.
- First drafts are AI-generated against source-grounded research bundles — not generated from prompts alone.
- The critic pass that scores drafts is a separate AI model with different instructions — a deliberate separation that introduces friction between drafting and evaluation.
- Every article in the 55-74 score range is reviewed by a human before publication.
- No article is published without at least one human (Alex) accountable for it on the byline.
We do not consider this approach controversial — we consider it the right way to build a trustworthy publication at the scale needed to cover this category. Our methodology page documents the testing protocol that grounds the AI-assisted research.
Take-down and right-of-reply
If you are a platform or vendor mentioned in a review and you believe a specific factual claim is inaccurate, contact contact@brokertestedreviews.com. We will investigate against primary evidence. If your evidence supports a correction, we will publish one. If the original text is supported by the evidence, the review stands — but we are happy to publish a brief right-of-reply paragraph inside the article, attributed to your team.
We do not remove negative reviews on request. We do correct factual errors.
Contact for editorial issues
Email contact@brokertestedreviews.com. For partnership inquiries (which do not influence editorial), use partnerships@brokertestedreviews.com.