Cloudflare moved pay-per-crawl from beta to general availability in April 2026. The feature is conceptually simple and politically loaded: any site behind Cloudflare can now identify AI crawlers (GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, Google-Extended, ByteSpider, etc.) and respond to them with HTTP 402 — "Payment Required" — instead of serving the page. Cloudflare brokers the payment between the crawler and the site, taking a cut. Customizable 402 messages, per-crawler pricing, allowlist exceptions, all in the dashboard.
The question every founder, marketer, and small-publisher owner has had since the announcement: should I turn this on?
The reflex answer floating around marketing Twitter is "yes, why would you let them have it for free?" The reflex answer floating around AEO Twitter is "no, it's a death sentence for your AI visibility." Both reflexes are wrong as a default. The right answer depends on what your site actually is, what AI engines are actually doing with it, and what the trade-off looks like in real revenue terms.
I spent the last two weeks running the math for the customer profiles I see most often at roast.page — solo SaaS founders, indie product makers, content marketing sites, mid-market B2B landing pages — and I'm going to walk you through the actual decision framework. By the end of this you'll know whether to flip the switch on, and what to do if you don't.
What Pay-Per-Crawl Actually Pays
The first thing to internalize: pay-per-crawl revenue is not "make money from AI." It's fractions of a cent per crawl, paid by the AI companies whose crawlers respect 402 and choose to pay rather than skip your content.
From the visibility I have into the early GA pricing data, typical per-crawl payments cluster in the $0.0008 – $0.005 range. The high end is reserved for sites with proprietary data the AI engines actively want (financial data, proprietary research, niche professional content). The low end is what most marketing pages and blog content earn.
Multiply that by your monthly crawl volume. A typical mid-traffic SaaS landing page gets crawled by major AI bots somewhere in the range of 500–5,000 times per month across all engines. At $0.002 per crawl, that's $1–$10 per month in pay-per-crawl revenue. Even for high-traffic content sites with 100K+ crawls per month, the topline pay-per-crawl revenue is typically in the $200–$2,000/month range — meaningful but not transformative.
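To sanity-check these figures against your own crawl logs, the arithmetic is trivial to script. The rates and volumes below are the illustrative numbers from this post, not Cloudflare-published figures:

```python
def monthly_crawl_revenue(crawls_per_month: int, rate_per_crawl: float) -> float:
    """Gross pay-per-crawl revenue, before Cloudflare's cut."""
    return crawls_per_month * rate_per_crawl

# Typical SaaS landing page: 500-5,000 AI crawls/month at $0.002/crawl
low = monthly_crawl_revenue(500, 0.002)       # ~$1/month
high = monthly_crawl_revenue(5_000, 0.002)    # ~$10/month

# High-traffic content site: 100K crawls/month at the same rate
content_site = monthly_crawl_revenue(100_000, 0.002)  # ~$200/month
```

Swap in your actual crawl counts from your server logs or Cloudflare analytics; the per-crawl rate is the number to watch over time.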
That's the upside. Now let's look at what you give up.
What You Give Up When You Charge
The trade-off is simple: AI crawlers that hit a 402 mostly don't pay — they skip. The crawler logs go quiet. The downstream effect is that your content stops appearing in AI Overviews, ChatGPT answers, Perplexity citations, and Gemini summaries on the queries where it would have been cited.
How much traffic does that cost you? Here's the data we have, and where the gaps are.
SEER Interactive's 2026 update (published February) measured CTR and conversion behavior across 53 brands and 5.47M queries with AI Overviews present. Cited sites get 35% more clicks than non-cited top-10 organic results in the same SERP, and convert at 5x the rate. That second number is the one that should hurt to give up. Your AI-cited traffic is small in volume but it's the highest-converting traffic you have, because it's pre-qualified by the AI engine ("here's the answer to your buying question").
For a typical SaaS site getting 500 monthly AI-driven visitors at a 5% conversion rate, that's 25 conversions/month from AI traffic. At a $50 LTV, that's $1,250/month. The pay-per-crawl revenue you'd earn by blocking ($1–$10/month for the same site) is two to three orders of magnitude smaller than the conversion revenue you'd lose.
The case for charging gets worse as your conversion value rises and better as it falls, but for almost any site where AI traffic actually converts to paid users or customers, the pay-per-crawl revenue does not come close to the lost conversion revenue.
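One useful way to frame the comparison is the break-even per-crawl rate: the rate at which pay-per-crawl revenue would equal the conversion revenue you give up. A sketch using the illustrative numbers above (not measured benchmarks):

```python
def breakeven_rate(ai_visitors: int, conv_rate: float,
                   ltv: float, crawls_per_month: int) -> float:
    """Per-crawl rate at which pay-per-crawl revenue would match
    the AI-referral conversion revenue you'd lose by charging."""
    lost_revenue = ai_visitors * conv_rate * ltv  # e.g. 500 * 5% * $50 = $1,250
    return lost_revenue / crawls_per_month

# Example site from this post: 500 AI visitors/month, 5% conversion,
# $50 LTV, 5,000 AI crawls/month
rate = breakeven_rate(500, 0.05, 50.0, 5_000)  # 0.25 -> $0.25 per crawl
```

At a $0.25 break-even, roughly two orders of magnitude above typical early-GA rates, charging would only make sense for this example site if per-crawl pricing rose dramatically.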
The Decision Matrix
The honest decision framework has three primary inputs and one tie-breaker. Run through these for your specific site:
Input 1: Is your content substantively reproduced or merely referenced? Open ChatGPT, Claude, Perplexity, and Gemini. Run 10 queries that should surface your content. For each citation of your site, look at how the AI engine uses your content. Two patterns to distinguish:
Reference pattern: The AI mentions your brand or product, summarizes a sentence or two, and links to your page for more depth. Users still need to click through to actually use your product or read your content. This is the pattern for most SaaS landing pages and product content.
Reproduction pattern: The AI extracts the bulk of the substantive content from your page — the dataset, the recipe, the in-depth tutorial, the proprietary insight — and presents it directly in the answer. Users get what they need without clicking. This is the pattern that's killing high-content publisher traffic.
If your content is reference-pattern, allowing free crawls is almost certainly correct — the citations send you converting traffic. If your content is reproduction-pattern, the trade-off shifts toward charging or blocking.
Input 2: Are you measuring cannibalization? The single most common mistake in this debate is asserting cannibalization without data. Set up a simple measurement: track the conversion-relevant pages on your site (your "money pages") and watch their organic traffic, organic CTR in Google Search Console, and AI-referral traffic over a 90-day window. If you see organic CTR collapsing on AI-Overview-eligible queries AND no offsetting AI referral traffic gain, you have evidence of cannibalization.
If you see organic CTR holding steady and AI referral traffic growing, you don't have cannibalization — you have visibility expansion, which is the outcome you want to preserve. Don't block what's helping you.
Input 3: What's your content's transactional value? Content with high direct transactional value (proprietary research nobody else has, paywalled-equivalent depth, exclusive datasets, original investigative journalism) earns higher per-crawl rates and has more leverage in the negotiation. Content that's commoditized (general explainers, broadly available how-tos, standard product information) earns the floor rate and has no leverage.
If you're in the high-transactional-value bucket and you can prove cannibalization, charging makes sense. If you're in the commoditized bucket, charging just removes you from the AI consideration set without earning meaningful revenue.
Tie-breaker: What's your distribution dependency on AI? If 40%+ of your top-of-funnel discovery is now AI-mediated (which is true for many newer products that grew up in the AI search era), blocking AI crawlers is a near-existential bet. The traffic you lose isn't replaced by anything in your current channel mix. If AI-mediated traffic is under 10% of your discovery, the bet is much smaller and the worst case is recoverable.
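The whole framework fits in a few lines. This is a toy encoding of the inputs above; the thresholds and return strings are mine, and a real decision deserves the full measurement, not a function:

```python
def decide(pattern: str, cannibalization_proven: bool,
           high_value: bool, ai_discovery_share: float) -> str:
    """Toy encoding of the three inputs plus the tie-breaker.
    pattern: 'reference' or 'reproduction'.
    ai_discovery_share: fraction of top-of-funnel discovery that is AI-mediated."""
    # Tie-breaker first: heavy AI dependency makes blocking a near-existential bet
    if ai_discovery_share >= 0.40 and not cannibalization_proven:
        return "allow free crawling"
    if cannibalization_proven or (pattern == "reproduction" and high_value):
        return "charge on affected URL patterns"
    if pattern == "reference":
        return "allow free crawling"
    return "measure for 90 days before deciding"
```

Note the ordering: without proven cannibalization, heavy AI-mediated distribution overrides everything else, which is exactly the tie-breaker logic above.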
The Three Scenarios Where Charging Makes Sense
For clarity, here are the three concrete patterns where I'd actively recommend charging or selectively blocking:
1. High-traffic publishers with reproduction-pattern content. If you're a content publisher (news, deep-research, recipes, in-depth tutorials, datasets) and AI engines are clearly extracting and serving the substance of your content rather than referring users to it, you're losing traffic without compensation. Pay-per-crawl is the right defensive move here. Consider tiered approaches: allow free crawling for headline/teaser content, charge for full article access. Some publishers are wiring this with selective Cloudflare rules — robots can crawl /teaser/* free but hit 402 on /full/*.
2. Proprietary data sites where the data IS the value. If you publish original financial data, proprietary research datasets, or specialized professional content (legal precedent databases, scientific paper indexes, etc.), AI engines are extracting your unique value. Charging fits because the value transfer is one-directional. Set per-crawler pricing based on what each AI company can pay, accept the visibility loss as deliberate.
3. Sites where you've measured material cannibalization. If you have 90 days of clean data showing organic CTR collapse on AI-Overview-eligible queries with no offsetting AI referral lift, you have a defensible cannibalization case. Charge selectively on the cannibalized URL patterns, not site-wide.
The Two Mistakes I See Most Often
Mistake 1: Blocking AI crawlers because of training data ideology, then complaining about lost visibility. Some teams block AI crawlers because they object to AI training on their content as a matter of principle, then a year later are surprised that they've disappeared from AI search. Both can be true (the principle is legitimate AND the visibility cost is real), but you have to acknowledge both sides of the trade. If you block, accept that you're trading visibility for principle. Don't expect to have both.
Mistake 2: Turning on pay-per-crawl by default because Cloudflare made it easy. The dashboard switch is suspiciously easy to flip, and the default answer to "should I monetize this?" feels like an obvious yes. But for 90% of SaaS landing pages and small marketing sites, the per-crawl revenue is a rounding error and the lost visibility is real. Don't flip the switch because flipping is easy. Flip it (or don't) based on the math for your specific site.
What to Do If You Allow Free Crawling
If you land on "allow free" — which is the right answer for most readers of this post — the corollary is that you should be aggressively optimizing for the citations you're allowing AI engines to extract. Citation traffic isn't going to convert at 5x organic if you don't make it easy for AI engines to extract the right content.
The high-leverage moves are familiar but worth recapping in this context:
Make sure your robots.txt explicitly allows the AI crawlers you want (GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, Google-Extended, Amazonbot); don't rely on the default, because third-party tools often add restrictive rules you didn't intend. Add a /llms.txt file at your root (Cloudflare can auto-generate one). Add FAQPage schema to any Q&A content, and Organization, Product, and SoftwareApplication schema as appropriate. Lead your hero copy with a citable claim: something specific and verifiable in the first 100 words. Our invisibility post walks through the full checklist.
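For reference, an explicit robots.txt allowlist for those crawlers might look like the fragment below. The user-agent tokens are the names these crawlers have publicly identified with; confirm each against the vendor's current documentation before shipping, since tokens change:

```
# Explicitly allow the AI crawlers you want citing you
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Amazonbot
Allow: /
```

An explicit Allow is redundant if you have no Disallow rules, but it documents intent and survives a tool later appending restrictive defaults.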
The teams allowing free crawling AND optimizing for citation are the ones taking the lion's share of AI-driven conversion traffic in 2026. The teams charging are sometimes earning a small revenue stream and giving up a much larger one.
The Selective Strategy Most Teams Should Run
If you want a middle-path approach, here's the configuration that maximizes upside for a typical SaaS or product site:
Free crawling for: Your homepage, product pages, pricing page, comparison pages, blog content, documentation, FAQ pages. These are the pages AI engines should index and cite — they drive your conversion-relevant referral traffic.
402 (charge) for: Any proprietary research reports, data assets, or paywalled-equivalent content if you have any. These are the pages where the value transfer is one-directional and worth defending.
Block for: Internal admin pages, customer-data-touching pages, account portals (these should already be behind auth, but the explicit AI block is a belt-and-suspenders defense).
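One way to plan this configuration before touching the dashboard is a policy map keyed by URL pattern. This is a hypothetical planning sketch, not Cloudflare syntax; you'd translate each entry into the corresponding dashboard rule:

```python
# Hypothetical policy map: longest matching prefix wins, default allow.
AI_CRAWLER_POLICY = {
    "/":           "allow",   # homepage and everything not matched below
    "/pricing":    "allow",
    "/blog/*":     "allow",
    "/docs/*":     "allow",
    "/research/*": "charge",  # 402 on proprietary reports
    "/admin/*":    "block",
    "/account/*":  "block",
}

def policy_for(path: str) -> str:
    """Return the action for a path via longest-prefix match."""
    best_prefix, action = "", "allow"
    for pattern, act in AI_CRAWLER_POLICY.items():
        prefix = pattern.rstrip("*")
        if path.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, action = prefix, act
    return action
```

Writing the map first forces you to enumerate your URL patterns, which is most of the 30 minutes the dashboard work takes.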
This selective configuration takes about 30 minutes in the Cloudflare dashboard once you've identified the URL patterns. It captures the visibility upside on your conversion-relevant content while defending the rare proprietary assets where charging actually matters.
One Honest Caveat
The pay-per-crawl revenue model is new enough that per-crawl rates aren't stable. They could rise meaningfully if AI companies face increased legal pressure to pay for content (the Penske Media lawsuit against Google over AI Overviews and the Britannica/Merriam-Webster suit against Perplexity point in that direction). They could also fall as AI companies optimize their crawling to avoid pay-per-crawl-enabled sites entirely, substituting other sources where possible.
If pay-per-crawl rates rise to meaningful levels — say, $0.10+ per crawl for typical content — the math flips for many more sites. Watch the per-crawl rate trend, revisit the decision quarterly, and don't lock in a permanent posture based on early GA pricing.
For now, in May 2026, the math overwhelmingly favors free crawling for any site whose content drives downstream conversion or product trial. The small per-crawl revenue is not worth the disproportionately larger lost referral traffic. Optimize for citation, not extraction-revenue. If you're not sure where your site sits on the cannibalization vs visibility spectrum, our AI search visibility checker baselines your current AI footprint and flags whether you're being referenced (good) or substituted (worth a closer look).
Cloudflare gave you the lever. Whether to pull it depends on data, not vibes.