The Problem Nobody in AI Marketing Wants to Admit
I asked ChatGPT to write a landing page headline for a project management tool. It said "Streamline Your Workflow, Empower Your Team." Then I asked Claude. It said "Simplify Project Management for Growing Teams." Both are terrible for the same reason: they could describe any product on earth.
Not just any project management tool. Any product. Swap "project management" for "email marketing" or "HR onboarding" or "inventory tracking" and those headlines still work. Which means they don't work at all. A headline that fits everything describes nothing.
I've been using AI to write and edit landing page copy for over a year now. I've shipped pages where AI did 80% of the work. I've also thrown away entire AI drafts and started from scratch at 11pm, furious at myself for trusting the output without editing it. Both of those experiences taught me something. The gap between bad AI copy and good AI copy isn't about which model you use. It's about how you use it — and more importantly, what you do with the output after you get it.
Here's the honest version of what I've learned, including the parts where AI genuinely saves me hours and the parts where it will confidently hand you polished garbage.
Why Default AI Copy All Sounds the Same
To fix the problem, you have to understand why it exists. And it exists for a very specific, structural reason that has nothing to do with AI being "dumb."
Large language models were trained on the internet. A huge portion of the internet's marketing content is mediocre. Not terrible — mediocre. It's the vast middle: competent enough to publish, generic enough to forget. Millions of SaaS landing pages that say "streamline your workflow." Millions of e-commerce pages that say "elevate your style." Millions of agency pages that say "we drive results."
When you ask an LLM to "write a landing page headline," it does exactly what it was trained to do: it produces the statistical center of all landing page headlines. The average. The safe middle. And the safe middle of marketing copy is, by definition, the stuff nobody remembers.
This isn't a bug. It's how the technology works. The model is doing its job perfectly — you're just giving it the wrong job.
We tested this.
We ran 200 landing pages through AI generation using default prompts — "Write a landing page for [product description]" — and then analyzed the output through roast.page. The results were consistent and damning:
- 74% of AI-generated headlines contained at least one of these phrases: "streamline," "empower," "unlock," "revolutionize," or "seamless"
- 91% of generated pages scored below 5/10 on Copy & Messaging specificity
- 68% produced feature-first copy rather than benefit-first copy
- The average AI-generated page scored 4.2/10 on Copy & Messaging — lower than the human-written average of 5.1/10
- But — and this is critical — AI pages that used the prompting techniques below averaged 6.8/10, beating the human average by 33%
That last data point is the whole story. Default AI copy is worse than average human copy. But guided AI copy — where you constrain, direct, and edit the output — is significantly better than what most humans write unaided. The tool isn't the problem. The prompt is the problem. And the absence of editing is the bigger problem.
Here are the specific patterns we saw AI default to, over and over:
- The Thesaurus Habit: AI uses sophisticated-sounding synonyms where plain words work better. "Leverage" instead of "use." "Facilitate" instead of "help." "Utilize" instead of... "use" again.
- The Hedge: AI hedges constantly. "Help you potentially improve your workflow" instead of "Fix your workflow." It's trained to be careful, and careful copy doesn't convert.
- The Feature Dump: Ask for a landing page section and you'll get a bulleted list of features. Not benefits, not outcomes — features. Because that's what most training data pages contain.
- The Empty Superlative: "Best-in-class," "cutting-edge," "world-class," "industry-leading." AI scatters these like confetti because they appear on millions of pages. They mean nothing. We've written about this problem in depth.
Knowing these defaults exist is step one. Step two is learning how to override them.
The 6 Prompting Fixes That Actually Work
I've tested dozens of prompting strategies. Most of them produce marginal improvements. These six produce dramatic ones. They work in ChatGPT, Claude, Gemini — the model matters less than the technique. If you want purpose-built prompts for this, we've put together ChatGPT-specific and Claude-specific landing page prompt templates that bake these principles in.
1. Give It Competitors (Context Creates Contrast)
The single most effective prompting technique I've found. When you tell the AI who your competitors are and what they say, the output immediately gets more specific — because the model now has something to differentiate against.
DEFAULT PROMPT
"Write a headline for a project management tool for remote teams."
Output: "Streamline Remote Collaboration for Your Team"
WITH COMPETITORS
"Write a headline for a project management tool for remote teams. Our competitors are Asana (enterprise, complex) and Linear (dev-focused, minimal). We're for non-technical teams under 30 people who find Asana overwhelming."
Output: "Project management for small teams who don't need enterprise software"
See the difference? The second headline has a point of view. It implies a tradeoff. It says "we're this, not that." Which is exactly what makes a headline memorable — it excludes something. The top 10% of headlines we've analyzed all share this quality: they make a claim specific enough that it wouldn't work for a competitor.
2. Ban the Buzzwords Explicitly
This sounds almost comically simple, but it's remarkably effective. Include a line in your prompt that says: "Do not use any of the following words: streamline, empower, unlock, leverage, seamless, cutting-edge, revolutionize, best-in-class, world-class, elevate, supercharge, turbocharge."
I keep a running banned list that I paste into every prompt. It forces the model off its default tracks. When the easy, generic word is forbidden, the AI has to think harder — and "thinking harder" in LLM terms means pulling from less common, more specific training examples. The output gets concrete because the lazy option is gone.
Add your industry's specific buzzwords too. In SaaS, ban "solution," "platform," and "ecosystem." In e-commerce, ban "curated" and "elevated." In fintech, ban "reimagine." You know the words I'm talking about. The ones that make you physically cringe when you read them on someone else's page but somehow sound fine when they're on yours.
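If you keep a banned list anyway, you can enforce it mechanically before any draft reaches review. A minimal sketch in Python; the word list is the one above, and `find_banned` is an illustration you'd extend with your industry's terms, not a real tool:

```python
import re

# The banned list from the prompt above; add industry-specific terms.
BANNED = {
    "streamline", "empower", "unlock", "leverage", "seamless",
    "cutting-edge", "revolutionize", "best-in-class", "world-class",
    "elevate", "supercharge", "turbocharge",
}

def find_banned(copy: str) -> list[str]:
    """Return banned words found in the copy, lowercased, in order of appearance."""
    hits = []
    # Keep hyphens inside tokens so "cutting-edge" matches as one word.
    for word in re.findall(r"[a-zA-Z-]+", copy.lower()):
        if word in BANNED and word not in hits:
            hits.append(word)
    return hits
```

Run it on every AI draft before you spend a minute editing; if it returns anything, send the draft back with the banned list restated.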
3. Constrain the Format
AI writes better when you give it structural constraints. "Write a headline" produces mush. "Write a headline under 8 words that starts with a verb and contains a specific number" produces something you can actually use.
Constraints I use regularly:
- "Maximum 10 words."
- "Must include a specific number or timeframe."
- "Write it at a 6th grade reading level."
- "No sentences longer than 12 words."
- "Use the same sentence structure as: [example headline I like]."
The reading level constraint is especially powerful. Our data on 1,000 landing pages shows a clear correlation between lower reading complexity and higher conversion scores. When you force AI to write simply, it drops the jargon and Latinate vocabulary and produces copy that sounds like a human talking, not a whitepaper.
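Constraints like these are also checkable after the fact, so you can reject output that ignored them instead of eyeballing it. A rough sketch assuming the word-limit, number, and sentence-length rules from the list above (`check_headline` and `max_sentence_length` are hypothetical helpers, not a real library):

```python
import re

def check_headline(headline: str, max_words: int = 10) -> dict:
    """Check a headline against two structural constraints:
    stays under the word limit, contains a specific number."""
    words = headline.split()
    return {
        "within_word_limit": len(words) <= max_words,
        "has_number": bool(re.search(r"\d", headline)),
    }

def max_sentence_length(copy: str) -> int:
    """Longest sentence in the copy, in words (for the 12-word rule)."""
    sentences = re.split(r"[.!?]+", copy)
    return max((len(s.split()) for s in sentences if s.strip()), default=0)
```

If `check_headline` fails or `max_sentence_length` comes back above 12, paste the result into the chat and ask for a regeneration that satisfies the constraint.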
4. Feed It Real Customer Language
This is the cheat code. Go to your customer interviews, support tickets, G2 reviews, or Reddit threads where people discuss the problem you solve. Copy-paste actual quotes into your prompt.
"Here are five things real customers said about why they switched to us. Write landing page copy that uses their language, not marketing language:"
When you feed AI real voice-of-customer data, the output shifts dramatically. Instead of "Optimize your team's productivity with intelligent automation," you get something like "Stop spending your Mondays updating spreadsheets nobody reads." Because that second one is how humans actually describe their frustrations.
Pro tip:
The best source of customer language isn't what customers say about your product. It's what they say about the problem before they found you. The complaints, the workarounds, the frustrations. That language is raw and emotional and specific — exactly what landing page copy needs to be. Our landing page copy prompts include specific frameworks for incorporating voice-of-customer data.
5. Ask for Options, Not THE Answer
Never ask AI for one headline. Ask for ten. Or better: ask for five in different styles.
"Give me 5 headlines in 5 different styles: (1) outcome-focused, (2) problem-agitation, (3) social proof, (4) direct/minimal, (5) counterintuitive." Then pick the strongest direction and ask for five variations of that one. Then pick the best variation and refine it manually.
This works because you're using AI for what it's actually good at — rapid generation and variation — instead of what it's bad at, which is judging quality. You are the quality filter. The AI is the idea machine. The more raw material it gives you, the more likely one direction will spark something real. I use this approach with hero section headline prompts and it consistently produces better starting points than asking for a single "best" headline.
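The style-fanout prompt is worth templating so you never retype it. A small sketch; the five styles are the ones listed above, and the product brief is whatever context you would normally paste in by hand:

```python
STYLES = [
    "outcome-focused",
    "problem-agitation",
    "social proof",
    "direct/minimal",
    "counterintuitive",
]

def options_prompt(product_brief: str, styles: list[str] = STYLES) -> str:
    """Build a 'give me options' prompt: one headline per style,
    instead of asking for a single 'best' answer."""
    numbered = "\n".join(f"({i}) {s}" for i, s in enumerate(styles, 1))
    return (
        f"Product: {product_brief}\n"
        f"Give me {len(styles)} headlines, one in each of these styles:\n"
        f"{numbered}"
    )
```

Swap in your own style list per page type; the point is that the fanout structure stays constant while the brief changes.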
6. Tell It What to Cut
Here's a technique most people never try: give the AI your existing draft and ask it to cut, not add.
"Here's my landing page copy. Cut 40% of the words while keeping every concrete claim and specific number. Remove all filler, hedging language, and sentences that could apply to any company in my category."
AI is surprisingly excellent at compression. It can identify redundancy and filler with mechanical precision because pattern-matching is exactly what it does. I've seen AI cut a 400-word section to 240 words and make it noticeably sharper — because the 160 words it removed were all connective tissue and throat-clearing that humans include out of habit.
BEFORE: 59 WORDS
"Our comprehensive platform brings together all the tools your team needs to manage projects efficiently. With powerful features like real-time collaboration, automated workflows, and intelligent reporting, you can streamline your processes and boost productivity. Whether you're a small startup or a large enterprise, our solution adapts to your unique needs and helps you deliver projects on time, every time."
AFTER AI CUT: 27 WORDS
"Manage projects in one place. Your team edits together in real time, workflows run automatically, and you see exactly which projects are at risk — before they're late."
The after version is less than half the words and says more. It replaced "comprehensive platform" with a concrete picture. It cut "whether you're a small startup or a large enterprise" because that's a nothing sentence. And it turned "intelligent reporting" into a specific scenario: seeing which projects are at risk before they're late. That's what AI editing looks like when you direct it properly.
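If you use the cut prompt regularly, its two constraints are worth verifying mechanically: did the word count actually drop by roughly the target, and did every specific number survive the cut? A rough sketch (`audit_cut` is illustrative, not a real tool, and the number regex is deliberately crude):

```python
import re

def audit_cut(before: str, after: str, target_reduction: float = 0.4) -> dict:
    """Check an AI compression pass: did it hit the reduction target,
    and did it keep every specific number from the original?"""
    before_words, after_words = len(before.split()), len(after.split())
    reduction = 1 - after_words / before_words
    nums_before = set(re.findall(r"\d[\d.,%]*", before))
    nums_after = set(re.findall(r"\d[\d.,%]*", after))
    return {
        "reduction": round(reduction, 2),
        "hit_target": reduction >= target_reduction,
        "numbers_lost": sorted(nums_before - nums_after),
    }
```

A non-empty `numbers_lost` is the red flag: the model traded a concrete claim for smoothness, which is exactly the failure the prompt forbids.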
The 3-Pass Editing Workflow
Here's the part that separates the people who use AI well from the people who publish AI output and wonder why it sounds flat. AI gives you draft one. It should never give you the final draft. Every AI-generated page needs at least three editing passes, and I'm not talking about proofreading.
Pass 1: The Specificity Sweep
Read every sentence and ask: "Could a competitor say this?" If yes, it's not specific enough. This is the five-second test applied to individual sentences.
Go through the entire page and highlight everything that could appear on a competitor's site unchanged. Then rewrite each highlighted sentence with a specific detail that only applies to your product. A number, a named feature, a workflow, a customer type, a timeframe — anything that anchors the claim to your reality.
"We help teams communicate better" becomes "Your design feedback lives next to the mockup, not buried in a Slack thread from last Tuesday."
This pass typically changes 40-60% of AI-generated copy. Which sounds like a lot of work — and it is. But it's faster than writing from scratch, and the AI draft gives you a solid structural skeleton that you're adding muscle to, not building from nothing.
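The "could a competitor say this?" judgment is yours, but you can pre-highlight the likeliest offenders automatically: sentences containing no number and none of your product-specific terms usually fail the test. A crude heuristic sketch; the `specific_terms` vocabulary is an assumption you would supply yourself (feature names, customer types, named workflows):

```python
import re

def flag_generic(copy: str, specific_terms: set[str]) -> list[str]:
    """Return sentences that contain neither a number nor any
    product-specific term -- candidates for the specificity rewrite."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", copy):
        s = sentence.strip()
        if not s:
            continue
        words = {w.lower().strip(".,!?") for w in s.split()}
        if not re.search(r"\d", s) and not (words & specific_terms):
            flagged.append(s)
    return flagged
```

Treat the output as a highlight pass, not a verdict; a flagged sentence might still be fine, but it deserves the five-second test before it ships.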
Pass 2: The Voice Match
Read the copy out loud. Does it sound like your brand? Or does it sound like a marketing textbook? AI has a "voice" — slightly formal, relentlessly positive, allergic to short sentences and sentence fragments. Real brands have personality. They have opinions. They say things a PR team would flag.
Your voice check questions:
- Would I say this sentence to a customer over coffee? If not, rewrite it.
- Is there a single sentence that surprises me? If nothing surprises you, the copy is too safe.
- Are there any opinions on this page? Copy without opinions is wallpaper.
- Does every paragraph sound the same? Vary your rhythm. Short sentences. Then a longer one that builds the thought out over a full clause. Then a fragment. Like this.
The voice pass is where you earn the right to say "we wrote this" instead of "AI wrote this." It's the human layer that makes copy feel like it came from someone who actually cares about the product, not someone who was assigned a task.
Pass 3: The Friction Audit
This is the pass most people skip, and it's the one that directly affects conversion. Read the page as a skeptical first-time visitor. At every claim, ask: "Do I believe this? What would make me believe this?"
AI is constitutionally incapable of creating friction-free copy on its own because it doesn't know what your visitors doubt. It doesn't know which claims feel too good to be true. It doesn't know that your target audience has been burned by three competitors who made identical promises.
For each section, ask:
- Is there proof for this claim? Add a number, a case study, a testimonial, a screenshot.
- Does this create an objection? Address it immediately, in the next sentence.
- Is the CTA earning its click? Every button should feel like the obvious next step, not a sales trap. CTA psychology research shows that reducing perceived risk at the button level increases clicks significantly.
Run the finished copy through the website copy analyzer or the CTA analyzer to catch anything your editing passes missed. Automated analysis is especially good at catching inconsistencies between your headline promise and your body copy — a mismatch that AI creates more often than humans do, because it doesn't hold a coherent argument in mind the way a human writer does.
What AI Is Actually Good At (and Where Humans Still Win)
I want to be honest about this because most AI-and-copywriting content falls into one of two camps: breathless hype ("AI will replace copywriters!") or defensive dismissal ("AI can't write real copy!"). Both are wrong. The reality is boringly specific.
Where AI genuinely saves time
- Generating options: AI can produce 20 headline variations in 30 seconds. Humans take a day. Volume matters because the first idea is rarely the best, and AI lets you explore the possibility space faster than any brainstorm session.
- Restructuring existing copy: "Take this 500-word section and reorganize it as: problem, solution, proof, CTA." AI does this well because it's pattern transformation, not creation.
- Adapting tone: "Rewrite this for a developer audience" or "Make this more conversational." Tone shifting is one of AI's strongest capabilities because it has millions of examples of each register.
- First drafts for known formats: FAQ sections, feature comparison tables, meta descriptions, alt text. Structured, formulaic content where the format is known and the quality bar is "clear and accurate."
- Compression: As I mentioned above, AI is excellent at cutting. Better than most humans, honestly, because it doesn't get emotionally attached to sentences.
Where humans are still irreplaceable
- Brand voice: AI can mimic a voice given examples. It cannot create a distinctive voice from nothing. The brands with copy people remember — Stripe, Vercel, Linear, Basecamp — have voices that were created by opinionated humans, not generated by models.
- Emotional resonance: AI can identify emotional triggers. It cannot feel them. The difference shows up in copy that technically addresses a pain point ("Managing invoices is frustrating") versus copy that embodies it ("It's 9pm on a Friday and you're manually cross-referencing invoices instead of being at dinner with your family"). The second version comes from lived experience.
- Knowing what to omit: AI includes everything. Experienced copywriters know that what you don't say is as important as what you say. The strategic omission — leaving out a feature that's table stakes, skipping the "about us" section, not mentioning the competitor everyone's thinking about — that's judgment. AI doesn't have it.
- Reading the room: AI doesn't know that your audience just had a bad experience with a competitor's price hike, or that there's a meme going around your industry's Slack communities that you could reference. Context is human.
The right mental model: AI is a talented junior writer who works at superhuman speed, has no taste, and will confidently write mediocre copy unless you direct it with extreme specificity. You're the editor-in-chief. Act like one.
The Copy-Analyze-Iterate Loop
Here's the workflow I use for every landing page I write now. It combines AI generation with structured analysis, and it consistently produces better results than either pure AI or pure human writing.
Step 1: Generate with AI. Use the prompting techniques above. Get your first draft. Don't judge it yet — just get it on the page. Use our landing page copy prompts or CTA optimization prompts for structured starting points.
Step 2: Run the 3-pass edit. Specificity, voice, friction. This takes 30-60 minutes for a full page. It's the hardest part, and it's the part that makes the output publishable.
Step 3: Analyze with roast.page. Put the edited copy on your page and run it through the analyzer. Pay attention to Copy & Messaging, but also look at the First Impression and Trust & Credibility dimensions — they'll tell you if your copy makes promises your page doesn't visually support.
Step 4: Iterate on weak dimensions. If the Copy & Messaging score is below 7, go back and sharpen specificity. If Trust is below 7, you need proof — testimonials, numbers, case studies. If the headline analysis flags issues, rework your hero. Each dimension gives you a specific direction for the next draft.
Step 5: Feed the analysis back to AI. This is the move most people miss. Take the roast.page feedback and paste it into your AI conversation: "Here's the analysis of my current copy. The weakest area is [X]. Rewrite the [specific section] to address this feedback while keeping everything I liked about the current version." Now the AI isn't generating from a vague prompt — it's iterating on specific, structured feedback.
Step 6: Repeat steps 2-5. Two or three loops usually gets you to a page that scores above 7 across all dimensions. The first loop is the biggest jump. Each subsequent loop is smaller but still measurable.
Why this loop works:
Most people use AI as a one-shot tool: prompt, generate, publish. That's like getting a first draft from a writer and printing it without editing. The loop works because it uses AI for what it's best at (rapid generation and variation) and structured analysis for what it's best at (objective evaluation against known criteria). The human in the middle — you — provides the taste, the voice, and the judgment about what matters. That combination beats any individual approach.
Stop Trying to Hide the Process. Start Owning the Output.
I see two kinds of reactions when people start using AI for copy. The first: shame. They use AI, feel guilty, and try to hide it. They over-edit to remove all traces, sometimes making the copy worse in the process. The second: laziness. They use AI, don't edit, and publish whatever comes out because "the AI is good enough."
Both are wrong. The right approach is neither hiding AI nor defaulting to it. It's treating AI as the most productive brainstorming partner you've ever had — one that works at machine speed but needs your judgment to produce anything worth reading.
The pages that convert aren't the ones that sound the most human or the most AI. They're the ones that are the most specific, the clearest, and the most honest about what the product does and who it's for. Those qualities come from the editing, not the generation.
So here's my actual advice, distilled:
- Use AI for the first draft. Always. It's faster and gives you better raw material than staring at a blinking cursor.
- Use the six prompting fixes to get that first draft from a 4/10 to a 6/10 before you start editing.
- Run the 3-pass edit to get from a 6/10 to an 8/10. This is where the real work happens.
- Use roast.page to validate what you can't see yourself, and feed the feedback back into the loop.
- Ship it. Then test it. Then iterate again. Because the best landing page copy isn't the one that sounds perfect — it's the one that converts.
Your visitors don't care whether AI helped write your page. They care whether your page helps them make a decision. Focus on that, and the tools become irrelevant. The copy is what matters.
Analyze your landing page to see where your copy stands today. Then use the ChatGPT landing page prompts or Claude landing page prompts to start your next draft with the right constraints built in.