Research Report · April 2026

The State of Landing Pages in 2026

We scored 1,000+ landing pages across 8 conversion dimensions using AI vision analysis, content extraction, and Google PageSpeed data. The median score is 44 out of 100. Messaging quality — not design, not budget, not page speed — explains most of the gap between winners and losers.

Based on roast.page analysis data · Updated quarterly

  • Median score: 44/100. Half of all pages score below this.
  • Top quartile: 65+/100. Score 65 or higher to reach the top 25% of pages.
  • Copy median: 4.8/10. The weakest dimension across all pages.
  • Budget vs. score: r-squared 0.04. Company size has near-zero correlation with quality.

Key finding #1

Why is messaging the biggest gap in landing page quality?

Copy & Messaging (weighted 20% of the overall score) has the widest gap between low-performing and high-performing pages. The median score on this dimension is 4.8 out of 10. The top quartile averages 7.2 — a 50% improvement from messaging alone.

Compare that to Visual Design & Layout (weighted 10%), where the median is 5.4 and the top quartile is 6.8. Most pages look decent. Far fewer pages say something compelling.

The pattern holds across industries: SaaS pages have polished design but feature-dump in their headlines. E-commerce pages have strong product imagery but generic copy. Local service pages have rougher design but occasionally nail their messaging because they know their customer intimately.

What is the most common copy mistake on landing pages?

Feature-dumping: leading with what the product does instead of what the customer achieves. This appears on 62% of SaaS pages and 45% of all pages analyzed. “AI-powered analytics platform” instead of “See what’s driving revenue.”

Pages that lead with outcomes in their H1 score an average of 58 overall. Pages that lead with features score 44. That’s a 14-point gap from one element.

Dimension data

How do pages score across each of the 8 dimensions?

Each dimension is weighted by its impact on conversion. The gap column shows the distance between the median and the top quartile, in percentage points of the 10-point scale.

Dimension                 Weight  Median   Top 25%  Gap
Copy & Messaging          20%     4.8/10   7.2/10   +24%
First Impression & Hero   20%     4.9/10   7.0/10   +21%
Call-to-Action            15%     5.1/10   7.4/10   +23%
Trust & Social Proof      15%     4.2/10   7.1/10   +29%
Visual Design & Layout    10%     5.4/10   6.8/10   +14%
Page Structure & Flow      8%     5.0/10   6.9/10   +19%
Technical & SEO            7%     5.7/10   7.5/10   +18%
Differentiation            5%     3.9/10   6.5/10   +26%

Which scoring dimensions have the biggest room for improvement?

Trust & Social Proof (median 4.2/10) and Differentiation (median 3.9/10) show the largest absolute gaps to the top quartile. These are also the dimensions most pages simply skip — 38% of analyzed pages have zero customer testimonials, and 71% make no attempt to explain why they’re different from competitors.

Technical & SEO has the highest median (5.7/10), confirming that most teams handle the technical basics. But technical quality alone doesn’t drive conversions — a page can have perfect Core Web Vitals and still fail because the headline says nothing compelling.

Industry data

Which industries have the best landing pages?

Average and top-quartile scores by industry, plus the primary weakness dragging each vertical down.

Industry             Avg      Top 25%  Primary weakness
AI Tools             52/100   74/100   Over-reliance on buzzwords, weak differentiation
SaaS                 48/100   71/100   Feature-dumping, generic CTAs
E-commerce           47/100   68/100   Generic copy, missing trust signals
Agencies             45/100   67/100   Portfolio over positioning, vague value props
Home Services        43/100   63/100   Outdated design, missing reviews
Real Estate          42/100   62/100   Stock imagery, no local proof
Mental Health        41/100   61/100   Clinical tone, trust barrier
Financial Advisors   40/100   60/100   Compliance-heavy copy, jargon
Course Creators      39/100   64/100   Hype-heavy copy, weak proof

Why do AI tool pages score highest?

AI tool pages benefit from a technically sophisticated audience that expects — and gets — cleaner design, better product demonstrations, and more specific copy. Their average First Impression score is 5.8/10, the highest of any industry. The weak spot: differentiation. When every tool says “AI-powered,” none of them stand out.

Why do course creators and financial advisors score lowest?

Course creator pages lean on hype over proof. The average Trust & Social Proof score for course creators is 3.4/10 — the lowest of any industry. When your social proof is a screenshot of a Stripe dashboard, your credibility gap is enormous.

Financial advisor pages face a different challenge: compliance restrictions force conservative copy that often crosses into incomprehensible jargon. Their Copy & Messaging score averages 4.0/10. The advisors who score well solve this by leading with client outcomes (“Retire 5 years sooner”) within compliance boundaries.

Distribution

What does the score distribution look like?

The distribution is not a bell curve. Most pages cluster low: a thick band between 35 and 55, a smaller group between 55 and 70, and a thin tail of high performers above 70.

  • Bottom 10%: Below 24/100. Fundamental issues — missing headlines, broken layouts, no CTA visible.
  • 25th percentile: 35/100. Basic elements present but poorly executed.
  • Median (50th): 44/100. Looks okay, converts poorly. The silent majority.
  • 75th percentile: 65/100. Intentional optimization is visible.
  • Top 10%: 72+/100. Strong across most dimensions.
  • Top 1%: 85+/100. Exceptional on every front.

The gap between 44 (median) and 65 (top quartile) is where most conversion gains live. That 21-point improvement is achievable for any team willing to fix their messaging and add real social proof.
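The percentile cut-offs above can be reproduced with a nearest-rank calculation. A minimal sketch, using an illustrative score array rather than the actual roast.page dataset:

```javascript
// Nearest-rank percentile: the smallest score such that at least p%
// of pages score at or below it. The scores below are illustrative.
function percentile(scores, p) {
  const sorted = [...scores].sort((a, b) => a - b); // numeric sort
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const scores = [24, 31, 35, 38, 44, 44, 52, 58, 65, 72, 78, 85];
console.log(percentile(scores, 50)); // → 44 (median)
console.log(percentile(scores, 75)); // → 65 (top-quartile threshold)
```

Note the explicit comparator in `sort`: JavaScript's default sort is lexicographic and would order 100 before 24.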

Key finding #2

What do the top 10% of landing pages have in common?

We isolated the 100+ pages scoring 72 or higher and compared them to the rest. Five patterns appeared consistently:

1. Outcome-driven headlines, not feature descriptions

91% of top-performing pages lead with what the customer achieves, not what the product does. The correlation between outcome-driven copy and overall score is r = 0.61 — the strongest single predictor in the dataset. Specificity matters: “Save 12 hours per week on reporting” outperforms “Save time on reporting” by 8 points on average.

2. Single, clear CTA above the fold

Pages with one primary call-to-action score 31% higher than pages with 3 or more competing CTAs. The top performers don’t just have fewer buttons — they have one unmistakable next step. “Start free trial” beats “Get started” for SaaS. “Book a demo” beats “Contact us” for enterprise. CTA copy that names the action outperforms generic labels by 22%.

3. Quantified social proof — not just logos

Having logos is table stakes. What separates top performers is specific numbers: “12,847 teams use us” instead of “Trusted by leading companies.” Pages with quantified social proof have a Trust dimension score of 7.1/10 vs 4.2/10 for those without. The correlation between quantified proof and overall score is r = 0.54.

4. Visible product in the hero section

83% of top-scoring pages show the actual product above the fold — screenshots, interactive demos, or product videos. Pages with product visuals in the hero score 19% higher on First Impression than pages with abstract illustrations or stock photos. The exception: service businesses where showing the outcome (a beautiful kitchen, a healthy patient) outperforms showing the service process.

5. Sub-2.5-second load time

Google’s own research found that as load time grows from one second to three, bounce probability increases by 32%. Top-performing pages in our dataset average an LCP of 1.8 seconds. Pages with LCP above 4 seconds score 12 points lower on average, not just on Technical & SEO but across all dimensions, because slow pages get abandoned before visitors even see the content.

Key finding #3

Does company size or funding affect landing page quality?

No. The r-squared between company size and landing page score is 0.04 — statistical noise. A bootstrapped founder selling a $29/month invoicing tool scored 78. A Series B startup with $14M in funding and a 6-person marketing team scored 31.

This isn’t a feel-good anecdote — it’s a pattern. Funded companies tend to have better design (their Visual Design median is 5.8 vs 5.1 for bootstrapped) but worse messaging (Copy median 4.5 vs 5.2). They hire designers before copywriters. They build beautiful pages that say nothing. Bootstrapped founders, by contrast, know their customer so well that their copy is specific even when the design is rough.

The implication: you don’t need a bigger budget. You need clearer thinking about who your visitor is, what they want, and why they should trust you. Those three questions, answered well, account for more score variance than every other factor combined.
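The r-squared figure quoted above is the squared Pearson correlation. A sketch of the calculation, with made-up headcount/score pairs standing in for the real data:

```javascript
// Squared Pearson correlation between two equal-length arrays.
// The sample data below is hypothetical, not the report's dataset.
function rSquared(xs, ys) {
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs), my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < xs.length; i++) {
    cov += (xs[i] - mx) * (ys[i] - my); // co-deviation
    vx  += (xs[i] - mx) ** 2;           // variance terms
    vy  += (ys[i] - my) ** 2;
  }
  const r = cov / Math.sqrt(vx * vy);
  return r * r;
}

// Company headcount vs. page score: no visible relationship.
const size  = [2, 5, 12, 40, 150, 600];
const score = [78, 44, 31, 65, 38, 52];
console.log(rSquared(size, score).toFixed(2));
```

An r-squared of 0.04 means company size explains about 4% of the variance in scores, which is why the report treats it as noise.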

Mobile data

How do mobile landing pages compare to desktop?

Mobile traffic now exceeds 60% of all web traffic (Statista, 2025), yet most pages are designed desktop-first and merely shrunk for mobile. The data reflects this: pages analyzed on mobile viewports score an average of 15 points lower than the same pages on desktop.

The biggest mobile penalties come from three areas:

  • CTA visibility: 41% of pages have their primary CTA below the mobile fold. Desktop layouts that stack hero image + headline + subtext + CTA in sequence push the button off-screen on mobile.
  • Text readability: Body text below 14px on mobile hurts readability scores. 34% of pages serve desktop-sized text that becomes cramped on smaller screens.
  • Load time: Mobile LCP averages 3.2 seconds vs 2.1 seconds on desktop. Google’s “good” threshold is 2.5 seconds — the majority of mobile pages fail it.
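Google's published Core Web Vitals thresholds make that last gap concrete. A small sketch bucketing LCP values (good at 2.5 s or under, poor above 4 s):

```javascript
// Bucket an LCP measurement against Google's published thresholds:
// good ≤ 2.5 s, needs improvement ≤ 4.0 s, poor above that.
function lcpBucket(seconds) {
  if (seconds <= 2.5) return "good";
  if (seconds <= 4.0) return "needs improvement";
  return "poor";
}

console.log(lcpBucket(2.1)); // desktop average in this dataset → "good"
console.log(lcpBucket(3.2)); // mobile average → "needs improvement"
```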

Action plan

What should you fix first to improve your landing page score?

Based on the data, these five changes deliver the largest score improvements with the least effort. Listed in order of expected impact.

1. Rewrite your headline around outcomes (expected: +8-14 points)

Replace “[Product] is an AI-powered [category]” with what the customer achieves. Be specific: include a number, a timeframe, or a named benefit. Pages that make this one change see the largest single-element improvement in our data.

2. Add quantified social proof above the fold (expected: +6-10 points)

Customer count, specific results, named companies, or star ratings with review counts. Not “Trusted by thousands” — but “12,847 teams use us” or “Rated 4.8/5 by 340 customers.” Place it within the first viewport.

3. Reduce to one primary CTA (expected: +5-8 points)

If your page has “Get started,” “Book a demo,” and “Learn more” all visible at once, you’re splitting attention. Pick the one action that matters most and make it unmissable. Secondary actions belong further down the page.

4. Show the product, not an illustration (expected: +4-7 points)

Replace abstract hero images with actual product screenshots or a short demo. Visitors need to see what they’re getting. Product visuals answer the question “What does this actually look like?” before the visitor even has to ask it.

5. Fix mobile CTA placement (expected: +3-5 points)

Ensure your primary CTA is visible without scrolling on mobile. Test on a real phone, not just dev tools. 41% of pages fail this basic check.

How was this data collected?

Every page in this dataset was analyzed by roast.page using three parallel data sources:

  1. AI vision analysis — High-resolution screenshots at 1280x800 viewport and full-page scroll, evaluated by Claude AI for visual hierarchy, layout, whitespace, CTA placement, and design quality.
  2. HTML content extraction — Headings, meta tags, buttons, testimonials, images, schema markup, and alt text parsed via Cheerio for structural and content analysis.
  3. Google PageSpeed Insights — Real Lighthouse data including Core Web Vitals (LCP, CLS, FCP, INP), performance score, accessibility score, and SEO score.

The AI applies a consistent 8-dimension weighted framework to every page. Dimensions are weighted by conversion impact: First Impression & Hero (20%), Copy & Messaging (20%), Call-to-Action (15%), Trust & Social Proof (15%), Visual Design & Layout (10%), Page Structure & Flow (8%), Technical & SEO (7%), Differentiation (5%).
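A sketch of how those weights roll up into a /100 score: each dimension is scored 0-10, multiplied by its weight, and the sum is rescaled. Only the weights come from the report; the property names and sample scores are illustrative.

```javascript
// Dimension weights from the report (sum to 1.0). Keys are illustrative.
const WEIGHTS = {
  firstImpression: 0.20,
  copyMessaging:   0.20,
  callToAction:    0.15,
  trustProof:      0.15,
  visualDesign:    0.10,
  structureFlow:   0.08,
  technicalSeo:    0.07,
  differentiation: 0.05,
};

// dimensionScores: each on a 0-10 scale; result is 0-100.
function overallScore(dimensionScores) {
  let total = 0;
  for (const [dim, weight] of Object.entries(WEIGHTS)) {
    total += dimensionScores[dim] * weight;
  }
  return Math.round(total * 10); // rescale 0-10 → 0-100
}

// A hypothetical page sitting at the dataset median on every dimension:
const medians = {
  firstImpression: 4.9, copyMessaging: 4.8, callToAction: 5.1,
  trustProof: 4.2, visualDesign: 5.4, structureFlow: 5.0,
  technicalSeo: 5.7, differentiation: 3.9,
};
console.log(overallScore(medians));
```

Feeding in the per-dimension medians yields about 49 rather than the 44 overall median, because per-dimension medians don't compose into the overall median: few pages sit exactly at the median on every dimension.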

The dataset includes 1,000+ pages analyzed between January and April 2026. Pages span SaaS, e-commerce, agencies, creator tools, fintech, healthcare, real estate, and other verticals. No pages were cherry-picked — they entered the dataset as they were submitted to roast.page.

Cite this report

roast.page. “State of Landing Pages 2026.” Based on AI analysis of 1,000+ landing pages across 8 conversion dimensions. Published April 2026. Available at roast.page/report/state-of-landing-pages-2026.

Data licensed under CC BY 4.0. You may cite, reference, and reproduce with attribution.

Common questions

How were these landing pages scored?

Each page was analyzed by roast.page using AI vision analysis, HTML content extraction, and Google PageSpeed Insights data. The AI evaluates 8 weighted conversion dimensions — First Impression & Hero (20%), Copy & Messaging (20%), Call-to-Action (15%), Trust & Social Proof (15%), Visual Design & Layout (10%), Page Structure & Flow (8%), Technical & SEO (7%), and Differentiation (5%).

What is a good landing page score?

The median score is 44/100. Scoring above 60 puts you ahead of the majority. The top quartile starts at 65, 72+ places you in the top 10%, and 85+ is the top 1%. Most pages have significant, fixable conversion issues.

Which industry has the best landing pages?

AI tools lead with a 52/100 average, followed by SaaS at 48/100. Course creators and financial advisors score lowest, at 39 and 40/100 respectively. The gap is driven almost entirely by messaging quality, not design or technical performance.

What is the most common landing page mistake?

Feature-dumping: leading with what the product does instead of what the customer achieves. This appears on 62% of SaaS pages and 45% of all pages analyzed. Pages with outcome-driven headlines score an average of 58 overall vs 44 for feature-driven headlines — a 14-point gap from one element.

Does company size affect landing page quality?

No. The r-squared between company size and landing page score is 0.04 — essentially zero. A bootstrapped founder with a $29/month product scored 78 while a Series B company with $14M in funding scored 31. Clear thinking matters more than budget.

How often is this report updated?

This report is updated quarterly as new pages are analyzed. The dataset grows continuously. Each update includes fresh industry benchmarks, dimensional trends, and pattern analysis from the latest data.

Can I cite this data?

Yes. Please cite as: roast.page, “State of Landing Pages 2026,” based on AI analysis of 1,000+ landing pages across 8 conversion dimensions. Link to https://roast.page/report/state-of-landing-pages-2026.

See where your page stands

Free analysis across all 8 dimensions. Specific fixes ranked by impact. Takes about 1 minute.
