Editorial transparency

Pitchbase methodology

How we test tools, calibrate our scoring, and cite our sources, so you can challenge our comparisons and reproduce our evaluations.

How we evaluate tools in our comparisons

All our comparisons (Pitchbase vs Hyperbound, vs MuchBetter, vs VendMieux, vs ProspectIA, vs EAGR) use the same 8-criterion evaluation grid. Every tool tested gets a trial account (free or paid), and we run at least 5 complete simulations per tool on standardized scenarios (SaaS cold call, product demo, price objection handling).

The 8 evaluation criteria

1. Voice realism: latency, naturalness, emotions, barge-in capability.

2. French language support: native FR voices, adapted prompts, no calques from US English.

3. Scenario variety: cold call, warm call, discovery, demo, closing, gatekeeper.

4. Feedback depth: multi-axis scoring, cited examples, actionable suggestions.

5. Manager features: team view, leaderboard, coaching plans, exports.

6. Integrations: CRM (HubSpot, Salesforce), SSO, CSV exports, API.

7. Pricing: price per user, free plan, annual contract, estimated ROI.

8. Onboarding: onboarding flow, documentation quality, learning curve.
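To make the grid concrete, here is a minimal sketch of how an eight-criterion grid like ours could be scored and averaged. The criterion identifiers and the 1-to-5 scale are illustrative assumptions, not our actual implementation.

```python
from statistics import mean

# Hypothetical evaluation grid: each criterion is rated 1-5 per tool.
CRITERIA = [
    "voice_realism", "french_support", "scenario_variety", "feedback_depth",
    "manager_features", "integrations", "pricing", "onboarding",
]

def grid_score(ratings: dict[str, int]) -> float:
    """Average the 8 criterion ratings into one comparison score."""
    missing = set(CRITERIA) - ratings.keys()
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    return round(mean(ratings[c] for c in CRITERIA), 2)

example = {c: 4 for c in CRITERIA} | {"pricing": 3, "onboarding": 5}
print(grid_score(example))  # → 4.0
```

Averaging keeps the grid comparable across tools; a weighted variant would simply multiply each rating by a per-criterion weight before averaging.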

How Pitchbase computes its scores (Sales DNA)

Every Pitchbase simulation produces multi-pass feedback powered by GPT-4.1 mini, with 4 sequential JSON passes: granular scores, objections, coaching, deep feedback. Scoring is organized around the Sales DNA, a 6-axis radar: Opening, Discovery, Pitch, Objections, Negotiation, Closing.
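The four sequential passes can be pictured as a pipeline in which each pass returns JSON and later passes can read the earlier results. This is an illustrative sketch only; the pass function and payload shapes are assumptions standing in for the real LLM calls.

```python
# Hypothetical 4-pass feedback pipeline: each pass consumes the transcript
# plus all earlier results, mirroring the sequence described above.
PASSES = ["granular_scores", "objections", "coaching", "deep_feedback"]

def run_pass(name: str, transcript: str, context: dict) -> dict:
    # Placeholder for an LLM call that returns strict JSON for this pass.
    return {"pass": name, "context_passes": list(context)}

def feedback_pipeline(transcript: str) -> dict:
    results: dict = {}
    for name in PASSES:  # sequential: order matters, later passes see earlier output
        results[name] = run_pass(name, transcript, results)
    return results

report = feedback_pipeline("Prospect: Hello? Rep: Hi, this is...")
print(list(report))  # pass names, in execution order
```

Running the passes sequentially rather than in one giant prompt keeps each JSON schema small and lets the coaching pass reference the scores already computed.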

How each axis is scored

O — Opening: ability to grab attention in under 30 seconds: relevance of the context, tone, engagement question.

D — Discovery: quality of questioning (open vs. closed), depth (SPIN, BANT, MEDDIC), active listening.

P — Pitch: structure (problem, solution, proof), personalization to the prospect, customer cases cited, differentiation.

B — Objections: recognition, reformulation (LAER, CRAC), relevance of the response, persistence without aggression.

N — Negotiation: price anchoring, concession management, defending value, creativity (tiers, options).

C — Closing: explicit ask, next steps scheduled, prospect commitment (date, decision maker, document).
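The six-axis radar can be represented as a simple record with one score per axis. The 0-100 scale and the helper methods below are illustrative assumptions, not the actual Pitchbase data model.

```python
from dataclasses import dataclass, asdict
from statistics import mean

@dataclass
class SalesDNA:
    """Hypothetical 6-axis radar; one score per axis (0-100 assumed)."""
    opening: int
    discovery: int
    pitch: int
    objections: int
    negotiation: int
    closing: int

    def overall(self) -> float:
        """Unweighted average across the six axes."""
        return round(mean(asdict(self).values()), 1)

    def weakest_axis(self) -> str:
        """Axis with the lowest score, a natural coaching target."""
        scores = asdict(self)
        return min(scores, key=scores.get)

dna = SalesDNA(opening=72, discovery=85, pitch=64, objections=78,
               negotiation=58, closing=69)
print(dna.overall(), dna.weakest_axis())  # → 71.0 negotiation
```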

How we build AI personas

Each Pitchbase persona is defined by 8 parameters: name, role, sector, industry, personality traits, context, resistance level (1 to 5), and an enrichment system (latent concerns, verbal tics, hidden needs). Resistance levels are calibrated on a progressive scale:

Level 1 (very easy): warm prospect, few objections, quickly agrees to continue the conversation.

Level 2 (easy): interested but distracted, asks to rephrase, 1 mild objection.

Level 3 (intermediate): demanding, 2 to 3 structured objections, expects clear value demonstration.

Level 4 (hard): skeptical, presses on ROI, compares with competitors, 3 to 4 strong objections.

Level 5 (expert): seasoned decision maker, hangs up if the opening is weak, requires data and similar customer cases.
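The resistance scale above lends itself to a simple lookup that turns a level into behavioral directives for the persona prompt. The table and directive wording below are illustrative assumptions, not Pitchbase's actual prompts.

```python
# Hypothetical mapping of the 1-5 resistance scale to persona behavior;
# objection counts follow the scale described above.
RESISTANCE = {
    1: {"label": "very easy",    "objections": (0, 1), "hangs_up": False},
    2: {"label": "easy",         "objections": (1, 1), "hangs_up": False},
    3: {"label": "intermediate", "objections": (2, 3), "hangs_up": False},
    4: {"label": "hard",         "objections": (3, 4), "hangs_up": False},
    5: {"label": "expert",       "objections": (3, 5), "hangs_up": True},
}

def persona_directive(level: int) -> str:
    """Build a behavioral directive string for the given resistance level."""
    if level not in RESISTANCE:
        raise ValueError("resistance level must be 1-5")
    p = RESISTANCE[level]
    lo, hi = p["objections"]
    directive = f"Resistance: {p['label']}. Raise {lo} to {hi} objections."
    if p["hangs_up"]:
        directive += " Hang up if the opening is weak."
    return directive

print(persona_directive(3))
```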

Calibration is reviewed every quarter by comparing the scores users receive with their actual call results.

Our voice tech stack

For transparency, here is the stack that powers Pitchbase simulations. We update this section on every major architecture change.

STT (speech-to-text): Deepgram Nova 3 (live). Target latency under 300 ms, final and partial transcript detection, French and English support.

LLM (reasoning): OpenAI GPT-4.1 mini (streaming). Token-by-token streaming, system prompts built client side (full transparency).

TTS (text-to-speech): Cartesia Sonic 3. Bilingual voice pool: 11 FR voices (6 male, 5 female) + 10 EN voices (5 male, 5 female), emotional modulation.

AI feedback: GPT-4.1 mini, multi-pass. 4 sequential JSON passes: core scores, objections, coaching, deep feedback.

Sources and bibliography

We rely on 4 main source categories, explicitly cited in our articles:

Academic research on adult learning: Hermann Ebbinghaus (forgetting curve, 1885), Donald Kirkpatrick (4 levels of training evaluation, 1959), David Kolb (experiential learning cycle, 1984).

Classic B2B sales literature: Neil Rackham (SPIN Selling, 1988, based on 35,000 calls analyzed), Jack Napoli (MEDDIC at PTC, 1996), BANT method (IBM, 1959), Mike Bosworth (Solution Selling, 1995).

Public industry studies: Gartner Magic Quadrant Sales Engagement, Forrester Wave Sales Enablement, HubSpot Sales Trends, Salesforce State of Sales, LinkedIn State of Sales, RAIN Group, CSO Insights.

Anonymized internal data: aggregated Pitchbase usage statistics across our entire user base, without individual identification. Always presented as observations, not generalizable studies.

Acknowledged limits and conflicts of interest

We publish comparisons in which Pitchbase is one of the evaluated tools. This is an obvious conflict of interest, and we handle it as follows:

  • The evaluation grid is defined before the tests, not adjusted afterwards to favor one tool.
  • When Pitchbase is recommended, it is always qualified by context (for example, French speaking B2B SMBs), never in absolute terms.
  • We cite competitor strengths without minimizing them (Hyperbound on US enterprise, SecondNature on video, etc.).
  • If one of our claims is factually incorrect (price change, new competitor feature, product misunderstanding), we correct it within 72 hours of it being reported.

Known technical limits of Pitchbase scoring:

  • Scoring relies on GPT-4.1 mini, which can sometimes be lenient on very short dialogues (under 90 seconds). We recommend sessions of at least 5 minutes.
  • The Pitch and Closing axes are less discriminating than Discovery and Objections, because they depend more on product context. Weighting is being adjusted.
  • The system does not measure nonverbal communication (video is not currently supported).

Report an error or suggest a correction

Found a factual error, an outdated number, a competitor feature we missed, or a claim that needs nuance? Email us at hello@pitchbase.app with the URL and the suggested correction. We respond within 72 business hours and publish corrections (with explicit mention if it changes a conclusion).

Last update of this page: April 2026.

Try Pitchbase yourself

The best way to judge a simulation tool is to use it. Free plan, 3 simulations per month, no credit card required.

Start for free

Or book a demo for teams.