We've tested 47 AI optimisation tactics. 28 failed completely. Here's what we learned.
That's not a typo. More than half of our AI search experiments have failed. ChatGPT still doesn't recommend us for queries where we should dominate. Perplexity occasionally cites our competitors despite our superior content. Claude sometimes gets our services completely wrong.
And we're going to tell you exactly what failed, what worked, and what we still don't understand. Because the dirty secret of AI search optimisation is that nobody—not us, not your competitors, not even the companies building these systems—truly knows what consistently works.
Yet AI search users convert at 2-16x higher rates than Google searchers. They're C-suite executives asking buying questions. They skip the research phase entirely. One AI-generated lead can be worth 50 traditional ones.
So we keep testing. We keep failing. We keep learning. And if you're brave enough to experiment alongside us, you might just capture first-mover advantage in the most important shift in search since Google launched.
The uncomfortable truth about AI search in 2025
Let's start with brutal honesty about the current state of AI search:
Usage is tiny but elite: Less than 1% of total search volume goes through AI platforms. But that 1% includes CEOs asking "Who should handle our company's SEO?" and CMOs querying "Best approach for B2B authority building." Quality over quantity personified.
The rules change weekly: What worked for ChatGPT visibility in January might not work in February. These systems update constantly, without announcement, without documentation. It's like doing SEO if Google changed its algorithm daily and never told anyone.
Nobody can reverse-engineer it: With Google, we could analyse ranking factors, run correlation studies, test hypotheses systematically. With AI? It's a black box inside a black box. The engineers at OpenAI can't fully explain why ChatGPT recommends certain sources.
Results vary wildly: The same optimisation tactics that dramatically improved visibility for one client did absolutely nothing for another in the same industry. There's no playbook because the game has different rules for every player.
This is the reality. If an agency promises guaranteed AI search rankings, they're lying or delusional. We don't have answers. We have experiments and educated guesses.
What's actually working (sometimes)
From our testing, here's what has shown positive impact (with success rates):
Comprehensive FAQ sections (Success rate: 65%)
AI systems love question-and-answer formats. But not just any FAQ will do. Effective FAQ sections need:
- Natural language questions (how real people ask)
- Complete, standalone answers (no "click here to learn more")
- Multiple related questions that build context
- Recent updates (freshness matters more than for Google)
We've seen FAQ-rich pages get cited 3x more often than traditional content. But it's not universal. Some industries see no improvement.
Structured data and schema markup (Success rate: 45%)
This one surprised us. Schema shouldn't matter to AI systems that can understand context. Yet pages with comprehensive schema markup get cited more frequently. Our theory: AI systems may use schema as a quality signal, assuming well-structured sites are more authoritative.
But here's the weird part: only certain schema types seem to help. FAQ schema? Helpful. Organisation schema? Helpful. Product schema? No measurable impact.
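To make that concrete, here's a minimal sketch of the two markup types that did show an effect for us, FAQ and Organisation schema, built as JSON-LD in Python. The question, answer, and company details are placeholders rather than a real client example; swap in your own and validate the output with a schema testing tool before publishing.

```python
import json

# Placeholder FAQPage markup -- one question shown; add the rest of your FAQs.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How long does B2B SEO take to show results?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Most B2B campaigns show measurable movement within three "
                        "to six months, depending on competition and domain history.",
            },
        },
    ],
}

# Placeholder Organisation markup (schema.org uses the US spelling "Organization").
organisation_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency Ltd",
    "url": "https://www.example.com",
    "sameAs": ["https://www.linkedin.com/company/example-agency"],
}

# Each block gets embedded in the page as <script type="application/ld+json">...</script>
for block in (faq_schema, organisation_schema):
    print(f'<script type="application/ld+json">{json.dumps(block)}</script>')
```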
Wikipedia-style comprehensive pages (Success rate: 70%)
Create the definitive resource on a topic. Not 500 words. Not 2,000 words. We're talking 5,000+ word ultimate guides that cover every angle, answer every question, address every objection.
AI systems seem to prefer citing one comprehensive source over multiple fragmented ones. Think less "blog post" and more "encyclopaedia entry."
Reddit and forum presence (Success rate: 40%)
This one's controversial, but AI systems definitely scan Reddit for authentic opinions. We've tested creating genuine, helpful responses on relevant subreddits (not spam, actual value). Sometimes it influences AI recommendations. Sometimes it doesn't.
The key seems to be consistency and genuine expertise. One comment won't move the needle. Becoming a recognised expert in relevant communities might.
Clear, declarative statements (Success rate: 55%)
AI systems prefer confident, clear statements over hedged language. Instead of "It might be beneficial to consider..." write "Companies should..." Instead of "Various factors could influence..." write "Three factors determine..."
But be careful. False confidence gets penalised. Make declarative statements only about things you can support with evidence.
What's definitely not working (yet)
These tactics have shown zero or negative impact in our tests:
Traditional link building (Success rate: 0%)
We tested this extensively. Built high-quality links to pages we wanted AI to cite. Zero correlation between link authority and AI citations. Domain authority doesn't seem to translate to AI visibility.
Keyword stuffing for AI (Success rate: 0%)
Some "experts" suggest stuffing content with phrases like "recommended by ChatGPT" or "AI search optimised." We tested it. It doesn't work. If anything, it might hurt credibility.
Meta descriptions and title tags (Success rate: 5%)
Traditional on-page SEO elements seem largely ignored by AI systems. They're reading your content, not your metadata. Time spent perfecting meta descriptions for AI is time wasted.
Publishing frequency (Success rate: 10%)
We tested whether publishing more content improved AI visibility. It doesn't. One comprehensive piece beats 10 shallow ones. AI systems seem to evaluate depth over volume.
Geographic targeting (Success rate: 0%)
Adding location modifiers, creating location pages, using local schema—none of it influences AI recommendations for local queries. AI systems seem to have their own understanding of geography that doesn't align with traditional local SEO.
The experiments that produced unexpected results
Some tests delivered surprises that we're still trying to understand:
The authority paradox
We optimised a client's site extensively for AI visibility. No improvement. Then they were featured in a major industry publication (not for SEO reasons). Suddenly, ChatGPT started recommending them.
The mention wasn't even a link. It was just a quote. But it seemed to trigger AI recognition. We've since seen this pattern repeat: external validation matters more than self-optimisation.
The recency mystery
For some queries, AI systems heavily favour recent content (last 30 days). For others, they cite articles from 2019. We can't find a pattern. It's not about the topic being evergreen versus newsworthy. It seems random.
The contradiction conundrum
We've seen AI systems simultaneously cite two sources with opposite advice. When we optimised to be more definitive, we lost visibility. When we acknowledged nuance and trade-offs, we gained it. But only sometimes.
Why waiting for certainty means missing opportunity
"Let's wait until AI search is figured out" sounds reasonable. It's also the same logic that had businesses waiting to build websites until the internet was "figured out."
Here's what waiting costs you:
The compound advantage: Early AI visibility compounds. As systems learn to trust certain sources, they cite them more. Those citations reinforce trust. The cycle accelerates. Starting later means competing against entrenched authorities.
The learning curve: Every failed experiment teaches us something. While you're waiting for the playbook, early adopters are writing it. Their failures become your competition's advantages.
The attention window: Right now, AI search has minimal competition. A mediocre optimisation effort today beats a perfect one in two years when everyone's competing.
The conversion goldmine: Even at <1% of search volume, AI users' conversion rates make this channel more valuable than many traditional sources. Would you ignore a channel that delivered 10x conversion rates just because it's small?
Our testing methodology (so you can verify our claims)
We don't expect you to trust our word. Here's exactly how we test:
Baseline establishment: We run 50 industry-relevant queries through ChatGPT, Claude, and Perplexity. Document current recommendations, sentiment, citation frequency.
Hypothesis development: Based on observations, we create specific, testable hypotheses. "Adding FAQ schema will improve citation rate by 20%" not "Schema might help."
Controlled implementation: We implement changes on test pages while maintaining control pages. Same domain, similar content, single variable difference.
Measurement cycles: We retest the same queries weekly for 8 weeks. Document changes. Look for patterns. Most importantly, note when nothing happens.
Statistical significance: We don't claim success from one improved citation. We look for consistent patterns across multiple queries and platforms.
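If you want to verify the baseline step yourself, the sketch below shows its basic shape: send the same queries to a model's API and record whether your brand appears in the answer. It uses the OpenAI Python SDK purely as an example; the model name, brand string, and query list are assumptions to replace with your own, and in practice we repeat the same loop against Claude and Perplexity.

```python
import csv
import datetime

from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in your environment

BRAND = "Example Agency"  # placeholder -- your brand name as an AI answer would write it
QUERIES = [
    "Who should handle our company's SEO?",
    "Best approach for B2B authority building?",
    # ...the rest of your 50 industry-relevant queries
]

def run_baseline(model: str = "gpt-4o") -> None:
    """Ask every query once and record whether the brand was mentioned."""
    today = datetime.date.today().isoformat()
    with open(f"baseline-{today}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "query", "brand_mentioned", "answer"])
        for query in QUERIES:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": query}],
            )
            answer = response.choices[0].message.content or ""
            writer.writerow([today, query, BRAND.lower() in answer.lower(), answer])

if __name__ == "__main__":
    run_baseline()
```

Rerun it weekly and the per-query citation rate becomes the number you track through the eight-week measurement cycle.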
What we think might work (but can't prove yet)
Based on patterns we're seeing but can't statistically validate:
Entity optimisation: Building clear entity relationships seems to help. Not keywords, but concepts. Making it crystal clear what your company does, who it serves, what problems it solves.
Platform-specific content: Creating content specifically for AI consumption (comprehensive glossaries, relationship maps, decision trees) separate from human-focused content.
Cross-platform consistency: Ensuring your message is identical across your website, social media, press releases, and third-party profiles. AI systems seem to verify information across sources.
Expert author profiles: Content attributed to recognised experts (with LinkedIn profiles, published work, speaking engagements) seems to get cited more. But correlation isn't causation.
Multimedia integration: Pages with relevant images, videos, and infographics seem to perform better. Perhaps because AI systems are increasingly multimodal? We're still testing.
The ethical considerations nobody's discussing
As we experiment with AI optimisation, ethical questions emerge:
Manipulation versus optimisation: Where's the line between helping AI systems understand your content and trying to manipulate their recommendations?
Transparency versus advantage: Should we share what works, potentially helping competitors? We've chosen transparency, believing a rising tide lifts all boats.
Accuracy versus advocacy: AI systems present recommendations as objective truth. When we optimise for visibility, are we compromising that objectivity?
Experimentation versus exploitation: Is it ethical to charge clients for experimental services that might not work? Only if we're completely transparent about the uncertainty.
Your experimental framework (if you're brave enough)
Want to run your own AI search experiments? Here's a framework:
Month 1: Baseline and hypotheses
- Run 50 queries about your industry through AI platforms
- Document who gets recommended and why
- Form 5 specific hypotheses to test
Month 2: Initial experiments
- Implement one change at a time
- Test weekly
- Document everything, especially failures (a log like the one sketched below is enough)
Month 3: Pattern recognition
- Look for consistencies across successes
- Note what consistently doesn't work
- Refine hypotheses based on data
Month 4+: Scale and systematise
- Create templates for what works
- Build processes for continuous testing
- Share learnings with your team
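Months 2 and 3 hinge on documenting every test, including the ones that change nothing. A log as lightweight as the sketch below is enough; the field names and the 20% lift threshold are suggestions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """One Month-1 hypothesis, tracked weekly through Months 2 and 3."""
    hypothesis: str       # e.g. "Adding FAQ schema lifts citation rate by 20%"
    change_made: str      # the single variable you altered
    baseline_rate: float  # citations per 50 queries before the change
    weekly_rates: list[float] = field(default_factory=list)

    def verdict(self, lift_threshold: float = 0.2) -> str:
        """Crude read-out once you have at least four weeks of retests."""
        if len(self.weekly_rates) < 4:
            return "keep testing"
        average = sum(self.weekly_rates) / len(self.weekly_rates)
        if average >= self.baseline_rate * (1 + lift_threshold):
            return "promising"
        return "no measurable effect"  # still data, not failure
```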
Remember: Most experiments will fail. That's not failure; it's data.
The investment question: Is experimental SEO worth it?
Let's talk money. Should you invest in something this uncertain?
The maths of experimentation: If AI search stays at 1% of volume but converts at 10x the rate, it's worth 10% of traditional search value. If it grows to 10% of volume (projected by 2027), it's worth the same as your entire current organic channel.
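As a back-of-the-envelope calculation (every number below is the assumption from the paragraph above, not a measurement), channel value scales with volume share times relative conversion rate:

```python
def ai_channel_value(volume_share: float, conversion_multiple: float) -> float:
    """AI search value relative to the entire traditional organic channel.

    Assumes value scales with (share of query volume) x (relative conversion
    rate), and that an AI-driven conversion is worth the same as an organic one.
    """
    return volume_share * conversion_multiple

print(ai_channel_value(0.01, 10))  # today: 1% of volume at 10x -> 0.1, i.e. 10% of organic value
print(ai_channel_value(0.10, 10))  # 2027 scenario: 10% of volume at 10x -> 1.0, i.e. parity with organic
```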
The cost of being wrong: If you invest £10,000 in AI optimisation and it fails, you've lost £10,000. If you don't invest and AI search becomes dominant, you've lost your entire digital future.
The hedge strategy: Don't bet everything on AI search. Allocate 10-20% of your SEO budget to experimentation. Enough to learn, not enough to hurt if it fails.
What we tell our clients (the honest version)
When clients ask about AI search optimisation, here's what we say:
"We don't know what consistently works. Nobody does. We're running experiments, documenting results, and learning rapidly. Some things we try will fail. Others might transform your business.
If you need guaranteed results, AI search optimisation isn't ready for you. If you can tolerate uncertainty in exchange for potential first-mover advantage, let's experiment together.
We'll be transparent about what we're testing, what's working, and what isn't. You're not buying results; you're buying experimentation and learning. The results, if they come, are a bonus."
About 30% of prospects run away. The other 70% appreciate the honesty and want to be part of the experiment.
The future we're building toward
Despite the uncertainty, we believe AI search optimisation will become crucial. Not because we have proof, but because the trajectory is clear:
- AI systems are improving rapidly
- User adoption is accelerating
- Conversion rates justify investment
- Early authority compounds over time
We're not optimising for today's AI search. We're building for the AI search of 2027, when it might handle 25% of queries. The experiments we run today, even the failures, inform the strategies that will dominate tomorrow.
Your choice: Pioneer or follower?
You have three options:
- Wait for certainty: Let others figure it out. Implement proven tactics when they emerge. Safe but potentially too late.
- Dabble cautiously: Run small experiments. Learn slowly. Minimise risk and reward.
- Embrace experimentation: Accept uncertainty. Test aggressively. Fail fast. Learn faster. Maybe win big.
There's no right answer. Only trade-offs. But history rewards pioneers who experiment during uncertainty, not followers who wait for guarantees.
Ready to experiment with AI search optimisation? Let's fail forward together.