The Sentence ChatGPT Says About Your Pricing
Open ChatGPT. Type "how much does [your product] cost?" Read the answer. If you build a B2B SaaS product, the answer will fall into one of four categories:
- The exact answer. "The Pro tier is $19/month, the Agency tier is $49/month, with a free tier for individuals." This is the best case. The AI engine extracted your pricing accurately.
- The wrong answer. A specific price, but not the current price. Maybe it's the price you charged in 2023. Maybe it's a number from a competitor with a similar name. The buyer reading this answer is now misinformed about you.
- The vague answer. "Pricing is available on request" or "they offer enterprise plans." This means the AI gave up trying to extract a number from your page. The buyer reads this and either contacts you or — more often — moves on to a competitor whose pricing the AI could quote.
- No answer at all. The AI changes the subject or recommends a different product. This is the worst case: it tells you the model couldn't even hold onto your pricing question and substituted a competitor's.
I've run this check on about sixty B2B SaaS sites. Roughly 40% get the exact answer. About 30% get the wrong answer. About 20% get the vague answer. The remaining 10% get no answer at all. This means six out of ten companies have a pricing AI-extraction problem they don't know about.
This is not a niche concern. The pricing question is one of the highest-frequency follow-ups in AI buyer-research sessions. When a buyer asks ChatGPT "best tool for X," the second prompt is almost always "how much does [winner] cost?" The pricing page is the input that produces the answer. If the input is unreadable to the AI, the answer is unreliable to the buyer. If the answer is unreliable, the buyer recalculates, and the recalculation rarely lands in your favor.
The Two Pricing Pages You Need to Stop Confusing
There's a foundational confusion at the root of most pricing-page failures: the pricing page has to do two jobs at once, and the design decisions that optimize for one often hurt the other.
Two distinct readers, two distinct jobs
The human visitor needs to scan tiers visually, find their fit, and feel that the pricing is fair. They want a clear visual hierarchy, helpful comparison features, an obvious "popular" recommendation.
The AI engine needs to extract three things: the price, the buyer it's for, and the feature delta. It reads the page sequentially as text and extracts spans. It does not see your visual hierarchy.
Pages that optimize aggressively for visual scanning often confuse AI extraction. Pages that optimize for AI extraction can read robotic to humans. The good news: the gap between the two requirements is much smaller than it seems. A pricing page can serve both, but only if it's built deliberately for both.
The Six-Element Pricing Page That Wins Both Audiences
Here is the structure I see consistently extract correctly into AI answers and convert well for human visitors. I'll walk through each element.
Element 1: The one-sentence price summary, above the tiers
Open with one sentence that summarizes pricing. Not a tier list, not a feature comparison — one declarative sentence that an AI can extract verbatim into an answer.
Fails extraction: "Pricing that scales with you. Choose the plan that fits."

Extracts cleanly: "Free for solo users. $19/mo for individual professionals on Pro. $49/mo per seat on Agency for teams. No usage caps."
The second version contains the four data points an AI engine needs (free tier, Pro price, Agency price, no caps) in a single sentence. When ChatGPT or Perplexity answers a pricing question, this sentence is exactly what they extract. Writing it deliberately means the AI's answer reads the way you wrote it, not the way it guessed.
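You can sanity-check your own summary sentence the same way an extractor would: pull out the concrete price points and confirm nothing is missing. A minimal sketch, assuming a simple dollar-amount regex is a reasonable stand-in for what an AI engine extracts:

```python
import re

def extract_price_points(sentence: str) -> list[str]:
    """Pull dollar amounts and a free-tier mention out of a price summary."""
    prices = re.findall(r"\$\d+(?:\.\d{2})?", sentence)
    if re.search(r"\bfree\b", sentence, re.IGNORECASE):
        prices.insert(0, "free")
    return prices

summary = ("Free for solo users. $19/mo for individual professionals on Pro. "
           "$49/mo per seat on Agency for teams. No usage caps.")
print(extract_price_points(summary))  # → ['free', '$19', '$49']
```

If the vague version of your headline ("Pricing that scales with you") goes through this and returns an empty list, an AI engine has nothing concrete to quote either.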
Element 2: Tier blocks with three structural elements each
Each pricing tier should have three elements in a consistent structure: the price (with billing period), a one-line "best for [persona]," and a top-3 feature list.
Free
$0 · 5 analyses per month
Best for: solo founders auditing one or two pages
- 5 page analyses per month
- Full reports, all eight dimensions
- No credit card required
This format is doing several jobs simultaneously. The price line is extractable. The "best for" line is what AI engines pull when answering "which tier should I pick?" The feature list is what they pull when answering "what's included." The three rows are parallel across tiers, which makes structured extraction reliable.
Critical rule: every tier needs a "best for". Most pricing pages skip this on the highest tier ("Enterprise — let's talk") and the lowest tier ("Free — try it out"). Both of those are missed signal. The "best for" line on each tier is the single most extractable element of the entire page for AI recommendation answers.
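The value of the tier block is its parallel, predictable shape. A sketch of that shape as a tiny renderer (the function and tier data are illustrative, not part of any real codebase):

```python
def render_tier(name: str, price_line: str, best_for: str, features: list[str]) -> str:
    """Render one tier in the three-row parallel structure:
    price line, 'Best for' line, top-3 feature list."""
    lines = [name, price_line, f"Best for: {best_for}"]
    lines += [f"- {f}" for f in features[:3]]  # cap at the top 3 features
    return "\n".join(lines)

free_tier = render_tier(
    "Free",
    "$0 · 5 analyses per month",
    "solo founders auditing one or two pages",
    ["5 page analyses per month",
     "Full reports, all eight dimensions",
     "No credit card required"],
)
print(free_tier)
```

Because every tier goes through the same template, the rows stay parallel across tiers, which is exactly what makes structured extraction reliable.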
Element 3: The "what's the difference" delta block
Below the tier blocks, add a short section that explicitly names the upgrade triggers. AI engines extract this into "when should I upgrade?" answers — and human visitors reach for it constantly.
Format:
When to upgrade from Free → Pro
- You analyze more than 5 pages per month
- You need PDF exports for client reports
- You want re-analysis history to track changes over time
When to upgrade from Pro → Agency
- You're managing 5+ client accounts
- You need API access for embedded analysis
- You want white-labeled reports
This block is uniquely valuable for AI extraction because it provides the conditional logic AI engines need to give specific recommendations. Without this block, the AI defaults to "the popular plan" or "the middle tier" — which may not match the buyer's actual needs.
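The delta block is, in effect, a decision table. A hedged sketch of the conditional logic an AI engine can reconstruct from it, with thresholds mirroring the example triggers above (the function name and parameters are hypothetical):

```python
def recommend_tier(pages_per_month: int, needs_api: bool = False,
                   client_accounts: int = 0, needs_pdf_export: bool = False) -> str:
    """Map the upgrade triggers from the delta block onto a tier recommendation."""
    if needs_api or client_accounts >= 5:
        return "Agency"   # Pro → Agency triggers
    if pages_per_month > 5 or needs_pdf_export:
        return "Pro"      # Free → Pro triggers
    return "Free"

print(recommend_tier(pages_per_month=3))                          # → Free
print(recommend_tier(pages_per_month=20, needs_pdf_export=True))  # → Pro
print(recommend_tier(pages_per_month=20, needs_api=True))         # → Agency
```

Without the delta block on the page, the model has no basis for these branches and falls back to "the middle tier."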
Element 4: The volume / seat math, written explicitly
If your pricing scales by seats, usage, or volume, the math has to be on the page in plain text. Don't bury it in a hover tooltip. Don't put it behind a "calculate your cost" button.
Bad example: "Pro: $19/seat/month — billed annually." This makes a buyer (or AI) do math.
Good example: "Pro: $19 per seat per month. A 5-person team is $95/month or $1,140/year. Annual billing saves 15% (5-person team: $969/year)." This puts the calculation on the page in extractable form.
The difference matters because AI engines, even sophisticated ones, occasionally botch arithmetic when they have to compute an annual total. A team-size example removes the failure mode and gives the AI a concrete number to quote.
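The worked example above is plain arithmetic, which is exactly why it belongs on the page rather than in the reader's head. Spelled out:

```python
seat_price = 19   # $ per seat per month on Pro (from the example above)
seats = 5

monthly = seat_price * seats                     # $95/month for a 5-person team
annual = monthly * 12                            # $1,140/year at monthly billing
annual_discounted = round(annual * (1 - 0.15))   # 15% annual-billing discount

print(monthly, annual, annual_discounted)  # → 95 1140 969
```

Publishing the computed numbers means the AI quotes them; omitting them means the AI computes them, sometimes wrongly.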
Element 5: The schema markup (the part 90% of pages skip)
This is the single highest-leverage technical change you can make. Add JSON-LD schema with SoftwareApplication or Product type, with each tier as an Offer, embedded in a <script type="application/ld+json"> tag:

{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "roast.page",
  "applicationCategory": "Landing page analysis",
  "offers": [
    { "@type": "Offer", "name": "Free", "price": "0", "priceCurrency": "USD", "description": "5 analyses/month" },
    { "@type": "Offer", "name": "Pro", "price": "19", "priceCurrency": "USD", "description": "Unlimited, monthly" },
    { "@type": "Offer", "name": "Agency", "price": "49", "priceCurrency": "USD", "description": "Per seat, teams" }
  ]
}
AI engines treat schema as a structured fallback when text extraction is ambiguous. If your visible page is well-written, the schema rarely changes the answer. But when the visible page has formatting quirks — non-standard tier layouts, decorative text inserted between price and tier name, microcopy that confuses extraction — the schema is what saves you. We have a separate schema markup playbook for AI search with the full list of which schema types matter for which pages.
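One way to keep the schema from drifting out of sync with the displayed prices is to generate and sanity-check it in your build step. A minimal sketch in Python (the tier data mirrors the example above; the check is illustrative, not a full schema.org validator):

```python
import json

# Tier data as (name, price, description); in practice this would be the
# same source of truth that renders the visible tier blocks.
tiers = [
    ("Free", "0", "5 analyses/month"),
    ("Pro", "19", "Unlimited, monthly"),
    ("Agency", "49", "Per seat, teams"),
]

schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "roast.page",
    "applicationCategory": "Landing page analysis",
    "offers": [
        {"@type": "Offer", "name": n, "price": p,
         "priceCurrency": "USD", "description": d}
        for n, p, d in tiers
    ],
}

# Every Offer must carry a numeric price and a currency, or the structured
# fallback is useless and extraction reverts to the visible text.
for offer in schema["offers"]:
    assert offer["price"].isdigit() and offer["priceCurrency"]

json_ld = json.dumps(schema, indent=2)  # embed in <script type="application/ld+json">
```

Generating the JSON-LD from the same data that renders the tiers means a price change can never update one and miss the other.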
Element 6: A short FAQ targeting AI follow-up questions
Below the tiers, three to six FAQs that anticipate the questions a buyer asks the AI after the pricing question. Common ones:
- "Is there a free trial?" / "How long is the free trial?"
- "Can I change plans mid-month?"
- "What happens if I exceed my limits?"
- "Do you offer non-profit / startup / educational discounts?"
- "Do prices include tax?"
- "Can I get an annual invoice?"
Each FAQ should be one paragraph, plain answer, no marketing fluff. Wrap them in FAQPage schema markup. AI engines pull from FAQ schema preferentially when answering follow-up questions about specific pricing details.
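FAQPage markup follows the same pattern: each question/answer pair becomes a Question entity with an acceptedAnswer. A sketch, again generated from data to stay in sync with the visible FAQ (the answer text here is placeholder copy):

```python
import json

faqs = [
    ("Is there a free trial?",
     "Yes. The Free tier includes 5 analyses per month, no credit card required."),
    ("Can I change plans mid-month?",
     "Yes. Upgrades take effect immediately and are prorated."),  # placeholder answer
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {"@type": "Question", "name": q,
         "acceptedAnswer": {"@type": "Answer", "text": a}}
        for q, a in faqs
    ],
}

print(json.dumps(faq_schema, indent=2))
```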
The "Contact Us" Trap
The single most damaging pricing page move in 2026 is "contact us"-only pricing for a self-serve or PLG product. I'll explain the cost in concrete terms, because most teams underestimate it.
When a buyer asks ChatGPT "how much does X cost?" and your page returns "contact sales," the AI engine has three options:
- Quote a competitor whose pricing it does know.
- Hallucinate a price (which damages your trust if the buyer compares it to your real price later).
- Tell the buyer to "contact your sales team" — which the buyer interprets as "this product is expensive and complicated."
None of these are good for you. In our audits, hidden-pricing companies score 30–50% lower on AI search visibility for category-recommendation queries than transparent-pricing companies in the same category. The AI doesn't actively penalize you; it just routes around you.
When "contact us" is correct
Genuinely enterprise sales — six-figure contracts, custom implementation, multi-month procurement — really do need a contact-sales motion. The mistake is using "contact us" for self-serve products to avoid showing pricing. If your average contract size is below $50K/year and your sales cycle is under 60 days, you should publish pricing. The opacity costs you AI visibility, top-of-funnel intent, and buyer trust simultaneously.
The middle ground: publish a "starting at" price even on enterprise tiers. "Enterprise plans start at $5K/month with custom contracts" gives the AI engine a number to quote. The buyer who is too small for enterprise self-disqualifies. The buyer who fits books a call. The hidden pricing accomplishes neither.
Pricing Page Mistakes That Look Smart but Aren't
A few patterns I see that founders defend strongly but that quietly hurt both AI extraction and human conversion:
Mistake 1: The slider with monthly/annual toggles that requires JavaScript to render
If your pricing page renders the prices via JavaScript on user interaction, AI crawlers may see "Loading..." or default placeholder text. Render the default state as static HTML. JS-only pricing pages are an extraction nightmare.
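A quick way to catch the JS-only failure is to check the raw HTML, before any script runs, for the price strings you expect a crawler to see. A sketch (the fetch step and expected strings are up to you; the placeholder HTML simulates a JS-rendered page):

```python
def prices_visible_in_static_html(html: str, expected: list[str]) -> list[str]:
    """Return the price strings missing from the raw (pre-JavaScript) HTML."""
    return [p for p in expected if p not in html]

# Simulated raw HTML: a JS-rendered pricing page often ships only a placeholder.
static_html = '<div id="pricing">Loading...</div>'
print(prices_visible_in_static_html(static_html, ["$19", "$49"]))  # → ['$19', '$49']
```

Run the same check against your real page source (view-source, not the rendered DOM): an empty result means the prices survive without JavaScript; anything else is what an AI crawler never saw.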
Mistake 2: The "hot tier" that's labeled "Most Popular" without specifying who it's for
The "popular" badge is borderline noise to an AI engine. The "best for [specific persona]" label is signal. Use both. The badge guides the eye; the persona text gives the AI engine its recommendation hook.
Mistake 3: Custom currency / region pricing without a default
Some pages auto-detect the visitor's country and show only their local-currency pricing. AI crawlers often see a default that depends on their inferred location. The result: ChatGPT might quote a Brazilian-real price to a US buyer. Always show a default (USD or EUR) prominently, with localized prices in a clear secondary position.
Mistake 4: Burying enterprise pricing details
If your enterprise tier is "Talk to us," at least say what you talk about. "Enterprise plans include SSO, dedicated support, 99.9% SLA, custom contract terms, and pricing starting at $X based on team size." This is extractable, gives the buyer enough to qualify themselves, and signals seriousness. The "let's chat" link can still be the action — but the substance has to be on the page.
What This Looks Like Working in the Wild
Three pricing pages I think about often as good examples (linked because they're good, not because they pay anyone):
- Linear's pricing page — clean tier blocks, explicit per-seat math, "best for [team size]" microcopy on each tier, and a transparent enterprise pricing range. ChatGPT can quote Linear's pricing accurately and recommend tiers correctly.
- Vercel's pricing page — extensive but extractable. Each tier has clear price, target user, and the specific bandwidth/build limits an AI engine would need to give a recommendation. The complexity is real (usage-based pricing) but the structure handles it.
- Plausible's pricing page — opinionated and minimal. One slider, one annual default, no enterprise-tier obfuscation. AI engines extract it perfectly. Human conversion is high because the opinionated design is a positioning choice that filters in the right buyer.
The common thread isn't visual style. It's structural clarity. Each of these pages is built so a robot reading it sequentially could give an accurate two-sentence summary, while a human scanning it visually finds their fit in under a minute. The two requirements are not in tension. They look like the same page, written deliberately.
The Buyer's Journey Has a New First Stop
The traditional buyer journey for B2B SaaS used to be: search Google, click 3 results, read 2 of them, navigate to pricing, evaluate, take action. The new buyer journey is: ask ChatGPT, read the AI summary, ask "how much does it cost?", and only then click — sometimes — to verify.
The pricing page used to be a destination. It is now a citation. The first reader of your pricing is a language model, not a buyer. The buyer's first impression of your pricing is whatever the AI summarized for them — which is whatever your page made extractable.
This is a structural change in how pricing pages need to be built. Most teams treat their pricing page as a presentation. The pricing pages that win in 2026 treat it as a structured data source that happens to also be visually presentable to humans. The order of priority has reversed.
Test the AI's read of your pricing page
Open ChatGPT and ask: "Summarize the pricing on [your pricing page URL] in two sentences." If the answer omits a tier, gets a price wrong, or substitutes vague language ("scales with usage") for a specific number, your pricing page needs the structural changes above. Run your pricing page through roast.page for a full extraction audit including the schema, structure, and what an AI buyer would actually read.
The Two Hours That Pay for Themselves
If you've been putting off the pricing page rewrite because it feels lower priority than the homepage or product pages, the math has changed. The pricing page is now the most-cited page on most B2B SaaS sites — second to the homepage in human traffic, but often first in AI citations because the AI follow-up question is so reliable.
The rewrite is small. Two hours of focused work, structured per the elements above, on a page you've already written. Add the schema markup. Pin the one-sentence summary. Add "best for" microcopy to each tier. Surface the upgrade-trigger logic. Run the AI test, fix what it surfaces, run it again.
The lift shows up in two places at once: in the AI citations you start earning when buyers ask the pricing question, and in the conversion rate of the buyers who do click through and find a page they can actually trust. Pricing transparency, written for both readers, is the content investment with the fastest payback in your portfolio right now. Most companies will get there eventually. The first ones get the citation share.