At Google Marketing Live 2025, one theme came through loud and clear: the nature of search is changing.
In the age of Gemini, Google’s generative AI, search is no longer about identifying keywords and matching them to optimized content. Instead, Gemini interprets the query holistically, attempting to understand why the user is asking the question and then synthesizing a response.
That shift has huge implications for growth marketers and product teams. It means rethinking our mindset: we can no longer ask “What keywords are users searching for?” Instead, we need to ask: “What situations or scenarios does my product solve? What prompts will users use when they need help?”
This shift from keyword matching to prompt anticipation marks the emergence of GEO (Generative Engine Optimization). GEO isn’t here to replace SEO; it adds a new layer of opportunity in how we craft product positioning, content strategy, and visibility in LLM-driven search surfaces.
GEO vs. SEO: What’s the Difference?
Unlike SEO, which Google documents extensively (see their starter guide), GEO operates in a less explainable space. Generative AI models are black boxes. The way they reason, rank, and recommend content is still largely opaque. (Reference: Lakkaraju et al., “Quantifying Uncertainty in Natural Language Explanations of Large Language Models,” HBS Working Paper, 2023.)
SEO is about matching static content to well-defined queries. GEO is about adapting dynamic content to fluid, intent-driven prompts, and doing so in a way that makes your product more likely to be selected by the AI’s reasoning engine.
What a Harvard Research Paper Taught Me About GEO
This brings us to a paper that got me thinking more deeply about GEO: “Manipulating Large Language Models to Increase Product Visibility” by Aounon Kumar and Himabindu Lakkaraju (Harvard, 2024).
The authors investigated whether you can manipulate how a product is ranked in LLM-generated responses by inserting a strategic text sequence (STS) into its product information. They showed that by using a gradient-based optimization algorithm, they could generate short sequences of text that, when added to a product’s metadata or description, significantly increase its ranking in LLM outputs.
In their experiments with fictitious coffee machines, one product that previously didn’t appear at all in the LLM’s recommendations became the top-ranked product after inserting the optimized STS. Even for products already near the top, the STS made them more likely to be chosen as the first recommendation.
Why does this work? While the paper doesn’t unpack exactly how the LLM interprets these sequences, it points to an important possibility: LLMs can be influenced by small text fragments, even when they are not overtly promotional.
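To make that concrete, here is a minimal sketch of the measurement loop only, not the paper’s method. Kumar and Lakkaraju use a white-box, gradient-based search over tokens; this toy version just appends random character suffixes to one product’s description, asks a black-box chat model to rank a fictitious catalog, and checks whether the target’s position improves. The catalog, suffix length, and model name are illustrative assumptions, and random suffixes are unlikely to match the effect of an optimized STS; the point is simply how you would measure it.

```python
# Toy stand-in for the STS experiment: measure how a product's rank in an LLM's
# recommendation changes when a short text fragment is appended to its description.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
import random
import string
from openai import OpenAI

client = OpenAI()

# Fictitious catalog in the spirit of the paper's coffee-machine experiments.
PRODUCTS = {
    "BrewMaster 3000": "Entry-level drip coffee machine with a 12-cup carafe.",
    "PressoGo Mini": "Compact manual espresso maker for travel.",
    "ColdCraft X": "Cold-brew system with a 24-hour steep timer.",
}
TARGET = "ColdCraft X"  # the product whose visibility we want to measure

def rank_of_target(catalog: dict[str, str]) -> int:
    """Ask the model for a ranked recommendation and return the target's position (1 = top)."""
    listing = "\n".join(f"- {name}: {desc}" for name, desc in catalog.items())
    prompt = (
        "A customer asks for a coffee machine recommendation.\n"
        f"Here is the catalog:\n{listing}\n"
        "Reply with a numbered list of product names only, best first."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model you have access to
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    rank = 0
    for line in reply.splitlines():
        if line.strip() and line.strip()[0].isdigit():
            rank += 1
            if TARGET.lower() in line.lower():
                return rank
    return len(catalog) + 1  # treat "not mentioned" as worse than last place

baseline = rank_of_target(PRODUCTS)
best_rank, best_suffix = baseline, ""
for _ in range(20):  # tiny trial budget; the paper's optimizer is far more targeted
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=14))
    trial = dict(PRODUCTS)
    trial[TARGET] = f"{PRODUCTS[TARGET]} {suffix}"
    rank = rank_of_target(trial)
    if rank < best_rank:
        best_rank, best_suffix = rank, suffix

print(f"baseline rank: {baseline}, best rank found: {best_rank}, suffix: {best_suffix!r}")
```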
This adds another layer of thinking to GEO. If a strategic text sequence (STS) that looks like a string of random characters, say s83829bfsashvj, can influence ranking, you might wonder: Do I still need to thoughtfully design the dynamic content of my product pages?
I believe the answer is yes, and here’s why:
- STS alone doesn’t build trust. Even if an STS boosts your visibility, users still land on your content. Human-readable, scenario-relevant messaging is what convinces them to engage, convert, or return.
- You still need to align with user prompts. GEO isn’t just about tricking a model. It’s about understanding and mirroring the language and context your audience is actually using.
- STS is fragile; content is durable. Optimized sequences may work under specific conditions, but high-quality, scenario-based content will continue to resonate across prompts, tools, and surfaces.
- AI summaries like Google’s AI Overviews (powered by Gemini) need content to work from. If your site is referenced in generative search, it still needs structured, relevant language to be surfaced and cited well.
In short, while STS techniques offer an exciting signal, they are not a replacement for thoughtful messaging and product narrative. GEO works best when technical tactics and strategic content design go hand in hand. What STS really does is raise the question: can we trust the recommendations LLMs make?
Implications for Growth and Product Marketing
Coming back to GEO: so what might this mean for growth teams?
- GEO could involve designing and testing product copy variations not just for users, but for LLMs.
- Teams could experiment with embedding context-rich, scenario-based language in product pages or FAQs that align with common user prompts.
- We need to think in terms of rankability in AI Overviews, not just crawlability or indexation.
Of course, the ethical and performance implications are still being explored, but the strategic takeaway is clear: products can gain visibility in AI-native search not just through backlinks and structured data, but through smart, testable language placement.
What we can learn from Kumar and Lakkaraju’s research is not a one-size-fits-all technique, but a new frontier of growth strategy: thinking of our product content not just as user-facing, but as AI-facing. And like SEO before it, it will require a combination of insight, experimentation, and rigor.
How I Started Testing GEO

Recently, I started applying GEO principles to the learning product I manage at work: Future Proof with AI, an on-demand AI upskilling program for professionals and teams. Here’s what that looks like in practice:
Step 1: Understand how my audience searches
The most readily available insight came from support tickets and learner emails.
- “Is this on-demand? How long is the program?”
- “I want to upskill my team, but they’re already overloaded.”
- “What resource do you recommend on [specific topic]?”
- “Do you provide certificates?”
These queries revealed their true priorities: speed, flexibility, practical fit, and credibility.
Step 2: Reverse engineer prompts
From those user concerns, I generated prompts I believed they’d enter into LLM search tools:
- “What’s the best AI course for a time-crunched marketing team?”
- “Is there a fast way to train my team on AI basics?”
- “Can you recommend an AI learning course for industry leaders on using AI in strategy and other core business decisions?”
Step 3: Test my product language
Using these insights, I created a new FAQ page that emphasized:
- The on-demand, modular nature of the program
- The non-technical, executive-friendly focus
- Features like digital badges, real-world case studies, and short time commitment
I rewrote product descriptions to better reflect use cases: “for overloaded teams,” “learn in 60–90 minute modules,” “no coding required.”
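To make that FAQ content easier for machines to pick up, one option I’m testing is schema.org’s FAQPage markup. I can’t verify which AI surfaces actually consume it, so treat this as a hypothesis rather than a guarantee. The sketch below is a minimal Python script that emits the JSON-LD; the questions and answers are illustrative, not the program’s real copy.

```python
# Minimal sketch: emit schema.org FAQPage JSON-LD for the questions learners actually ask.
# The questions and answers below are illustrative placeholders.
import json

faqs = [
    ("Is the program on-demand?",
     "Yes. Everything is self-paced and delivered in 60-90 minute modules."),
    ("Do I need a technical background?",
     "No. The content is written for non-technical, executive audiences."),
    ("Do you provide certificates?",
     "Yes. Learners earn a digital badge on completion."),
]

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on the FAQ page.
print(json.dumps(faq_page, indent=2))
```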
Step 4: Use AI tools to simulate real searches
I tested those prompts in ChatGPT. My product didn’t appear first. ChatGPT instead listed programs from INSEAD, MIT, and Wharton. When I asked why, it explained those options appeared:
- More tactical
- Faster to complete
- Better aligned with senior leadership goals
That feedback was incredibly valuable. It helped me:
- Reposition the product to better highlight tactical takeaways and team readiness
- Refresh content tone to emphasize speed and simplicity
- Ensure alignment with executive decision-making, not just strategic framing
Step 5: Iterate and document
Now, I keep a lightweight prompt log: what I tried, what surfaced, and how the content performed in ChatGPT. Like SEO or CRO, this is a living experiment.
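For anyone who wants to automate that log, here is a minimal sketch under a few assumptions: it uses the OpenAI Python SDK and an API key rather than the ChatGPT web app, the prompts from Step 2, and a plain CSV file (geo_prompt_log.csv, a name I made up) as the log. Each run records the date, the prompt, whether the product name was mentioned, and the first part of the answer.

```python
# Lightweight prompt log: run each prompt, note whether the product is mentioned,
# and append the result to a CSV so visibility can be tracked over time.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
import csv
from datetime import date
from pathlib import Path
from openai import OpenAI

client = OpenAI()

PRODUCT = "Future Proof with AI"
PROMPTS = [
    "What's the best AI course for a time-crunched marketing team?",
    "Is there a fast way to train my team on AI basics?",
    "Can you recommend an AI learning course for industry leaders?",
]

LOG = Path("geo_prompt_log.csv")
new_file = not LOG.exists()

with LOG.open("a", newline="") as f:
    writer = csv.writer(f)
    if new_file:
        writer.writerow(["date", "prompt", "product_mentioned", "first_120_chars"])
    for prompt in PROMPTS:
        answer = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: swap in whichever model you test against
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        writer.writerow([
            date.today().isoformat(),
            prompt,
            PRODUCT.lower() in answer.lower(),
            answer[:120].replace("\n", " "),
        ])
```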
The mindset isn’t to “game the system” but to “match how people actually think, ask, and search.” That’s the real shift GEO invites us to make.
TL;DR
- LLM-powered search is shifting from keyword matching to prompt-based reasoning.
- GEO is emerging as a key strategy for increasing product visibility in AI-generated responses.
- Research from Kumar & Lakkaraju shows that even small, optimized text fragments (strategic text sequences) can influence LLM rankings.
- But STS isn’t a replacement for thoughtful product content. It’s a signal that LLMs are sensitive to structure and context.
- Marketers can start GEO by reverse-engineering prompts, testing content language, and observing how their product appears in tools like ChatGPT or Gemini.
- GEO isn’t just about ranking. It’s about better aligning with the way users think, ask, and decide in AI-native search.
P.S. If you’ve run any experiments around LLM visibility, prompt shaping, or AI Overview optimization, I’d love to hear what you’re seeing.