Prompt Intent: Analyzing How Gen-AI Engines Define User Intent


There is no doubt that the search industry is going through major shifts. For more than two decades, SEO professionals have optimized websites for Google’s crawling, indexing, and ranking algorithms. Success was measured by where a page appeared in the “10 blue links” or SERP features, and for the most experienced among us, it felt clear and straightforward.
In 2025, Generative Engine Optimization (GEO) emerged as a complementary discipline. Put simply, it is not totally different from SEO: for GEO purposes, there are clear tweaks and adjustments we can implement in our existing best practices.
Large-language models (LLMs) like ChatGPT, Gemini, Copilot, and Perplexity answer billions of queries every month, often providing a single synthesized answer instead of a list of links. If you want increased visibility in these platforms, SEO practices remain essential. For experienced SEOs, there are not many new “signals” in this process. However, there are changes in how these signals are interpreted and weighted.
Think of an SEO basic: Search Intent. Do generative AI engines see search intent the same way search engines do? Are they even evolved enough to analyze intent that deeply? I’m not sure.
A recent study by the National Bureau of Economic Research analyzed how people use ChatGPT and how Gen-AI engines see search intent. I want to explore this idea further and back it up with Similarweb data. After all, we’re all trying to build a successful SEO strategy here…
Using data from OpenAI’s research and Similarweb’s AI analytics, I’m going to test this new search intent idea: why “asking” intent dominates generative search, how generative engines distribute traffic, and how to prepare your brand for the GEO era.
From search intent to prompt intent: A new taxonomy of intent?
Why are we even having this discussion?
Because the study identified new search intents (or definitions) that are more relevant to LLM prompts, and that can help us explain which page is mentioned or cited where. I can’t deny this is the sort of thing some SEOs live to research, so let’s see what the study, combined with Similarweb data and SEO experience, tells us.
Introducing ChatGPT prompt intents:
The first large‑scale study of ChatGPT usage analyzed 1.1 million messages sent to ChatGPT between November 2022 and July 2025. The researchers classified prompts into three broad intent categories:
Intent | Share of messages | Example query | Description & GEO takeaway |
Asking | 49% | “Help me plan an SEO road map for an ecommerce site.” | Almost half of ChatGPT messages are conversational inquiries: high‑level informational requests where the user seeks guidance. The study found that asking messages have grown faster than the other categories and receive the highest user satisfaction. |
Doing | 40% | “Write me a subject line for an outreach email.” | Around two‑fifths of messages ask ChatGPT to perform a task or produce content. This is transactional or tool‑based intent: doing prompts often involve drafting emails, summarizing texts, or generating code. |
Expressing | 11% | Emotional or confessional statements | Roughly one in nine prompts involves self‑expression. This is the smallest of the three categories, but it may grow as chatbots become more empathetic. |
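To make the taxonomy concrete, here is a toy sketch of how prompts might be bucketed into these three intents. This is purely illustrative: the study classified messages with far more sophisticated methods, not keyword rules, and the cue lists below are my own assumptions.

```python
# Toy illustration only: a keyword-heuristic sketch of the Asking/Doing/Expressing
# prompt-intent taxonomy. The cue lists are assumptions for demonstration; the
# actual study did not use simple rules like these.

ASKING_CUES = ("how", "why", "what", "should i", "help me plan", "explain")
DOING_CUES = ("write", "draft", "summarize", "translate", "generate", "create")
EXPRESSING_CUES = ("i feel", "i'm so", "i am so", "i just want to say")

def classify_prompt(prompt: str) -> str:
    """Return a rough prompt-intent label for a single message."""
    text = prompt.lower()
    if any(cue in text for cue in EXPRESSING_CUES):
        return "expressing"
    # Match doing cues at the start of the prompt or as whole words.
    if any(text.startswith(cue) or f" {cue} " in f" {text} " for cue in DOING_CUES):
        return "doing"
    if any(cue in text for cue in ASKING_CUES):
        return "asking"
    return "asking"  # default to the study's largest bucket

print(classify_prompt("Help me plan an SEO road map for an ecommerce site"))
print(classify_prompt("Write me a subject line for an outreach email"))
```

Even a crude sketch like this makes the point of the table: the same short query can tip from one bucket to another with a single verb change.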
Two other findings stand out:
- Non‑work queries dominate: The study found that 70% of ChatGPT conversations are unrelated to work. This means generative engines must serve everyday users seeking personal advice, recommendations, and learning.
- Content refinement is the top work use case: Among work‑related use cases, 42% of prompts involve writing tasks, and two‑thirds of those are editing, summarizing, or improving existing text. Users are bringing their own content to LLMs for refinement rather than asking for entirely new content.
This suggests that generative engines operate as intelligent editors, and that making high‑quality source material available is essential.
From informational to conversational intent
Historically, search intent has been divided into informational, navigational, and transactional (sometimes “Commercial” and “Jobs”) categories. We easily learned how to analyze these intents and create our SEO strategies around them.
OpenAI’s data shows that generative engines blur these lines. “Asking” intent often contains multiple sub‑questions or requests for guidance. It can be compared to informational queries, but it is more open‑ended and conversational, as many prompts are.
Unsurprisingly, the study concludes that “Asking” is the new “informational intent”.
Where’s the catch?
- Instead of ranking pages, LLMs select and summarize the most relevant sources to produce a single answer.
- Satisfaction metrics matter more than click‑through rates. The study notes that asking queries yield higher satisfaction, which, in turn, means that generative engines will prioritize sources that help them deliver complete and accurate answers.
Prompt intent vs Search intent examples:
Let’s see how our old intent translates into new intent, shall we?
The following table takes 10 random long-tail queries from one website and shows which search intent and which prompt intent each one matches. Did you think it would be a straightforward “intent migration”? Think again:
Keyword | Search Intent | Prompt intent |
il-24 promotes atopic dermatitis-like inflammation driving mrsa-induced allergic responses | Informational | Asking |
is there something in barbecue flavor seasoning that creates eczema flare | Informational | Asking |
what is the pathophysiology of asthma powerpoint presentation | Informational | Asking |
one of my nostrils gets blocked when i am in my bedroom | Informational | Expressing |
dupixent chronic rhinosinusitis with nasal polyposis | Navigational | Asking |
how often does dupilumab administration to humans | Navigational | Asking |
pediatric dosing for dupixent asthma indication | Navigational | Asking |
how long before injection to take out dupixient | Transactional | Asking |
nonpharmacological treatment options for asthma | Informational | Doing |
why is my 6 year old’s skin so bumpy and pimply | Informational | Asking |
What do we learn from this table?
- Search intent and prompt intent don’t always match. Transactional queries can be classified as “asking” if the rest of the prompt fits that structure, while informational search queries can land in the “doing” bucket.
- Prompt intent is fluid. A difference of one or two words can shift the intent, which is nothing new. However, with prompts this long (often much longer than the queries in the table above), intent can vary within a single prompt.
- Qualification is key. Earning AI and search traffic now relies on SEOs’ ability to analyze these gaps, understand them, and bridge them.
GEO strategies: how to optimize for Gen-AI prompt intent
As the research shows, each intent type has its own set of common answer types. These answers aim to provide just as much information as needed, plus the exact sources and resources the user needs to complete the “task” they came to ChatGPT for. Be too comprehensive for the wrong prompt intent, and the user will lose interest before reaching the end.
Here are the principles of optimizing for ChatGPT’s search intents, according to the study:
- “Asking” queries: provide comprehensive, context‑rich information that LLMs can synthesize into coherent answers.
- “Doing” queries: Offer templates, scripts, and structured data that generative engines can use as building blocks.
- “Expressing” queries: Build content that demonstrates empathy and understanding for potential future use cases.
1. Optimize for “asking” intent with conversational content
For asking queries, provide comprehensive, context‑rich information that LLMs can synthesize into coherent answers. Since asking intent accounts for nearly half of AI queries and is associated with higher satisfaction, brands need to produce content that answers complex, multi‑part questions.
The important thing is to anticipate user journeys and address them conversationally:
- Map question paths: Identify clusters of related questions your audience might ask. For example, instead of targeting “flights to Rome,” cover broader journeys like “planning a trip to Italy”. Provide step‑by‑step guidance (destinations, budgets, cultural tips) so LLMs can extract the relevant pieces.
- Use natural language and dialogue: Write in a conversational tone, using complete sentences and rhetorical questions. LLMs favour content that reads like human dialogue.
- Include context and background: Provide definitions, historical context, and examples. Generative engines aim to produce comprehensive answers, so they will rely on sources that offer depth and nuance.
- Highlight authoritative sources and experts: Quote subject‑matter experts, cite credible studies, and link to trustworthy references. The OpenAI study suggests LLMs favour sources that contribute to complete, satisfying answers.
2. Optimize for “doing” intent with useful, structured assets
For doing queries, requests to write, summarize, or create, brands can become the engine’s tool builders. For example:
- Publish templates and examples: Offer ready‑to‑use email templates, report outlines, or code snippets with permissive licences. LLMs can incorporate these structures into their output.
- Use schema markup and code formatting: Structure your content with schema markup (e.g., How‑To, FAQ, Recipe) and ensure examples are formatted clearly. Structured data makes it easier for AI models to parse and reuse your content.
- Create interactive tools and calculators: Dynamic tools (trip calculators, budget planners, etc.) can be integrated or referenced by generative engines.
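The schema markup point above can be sketched in code. Here is a minimal example, assuming you generate JSON-LD server-side in Python; the question and answer strings are placeholders to swap for your real FAQ content.

```python
import json

# Hedged sketch: building schema.org FAQPage JSON-LD as a Python dict.
# The Q&A strings below are placeholders, not real content recommendations.

def build_faq_schema(faqs: list[tuple[str, str]]) -> str:
    """Return FAQPage JSON-LD, ready to embed in a
    <script type="application/ld+json"> tag on the page."""
    schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }
    return json.dumps(schema, indent=2)

print(build_faq_schema([
    ("What is prompt intent?",
     "How generative engines classify user prompts: asking, doing, or expressing."),
]))
```

The same pattern extends to How‑To or Recipe types: keep the structure explicit and machine‑readable so AI models can parse and reuse it.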
3. Optimize for “expressing” queries with empathy
For expressing queries, create content that demonstrates empathy and understanding for potential future use cases. Despite their small share today, these queries may grow as chatbots become more empathetic, and there are some ways to make your brand content suitable for this form of intent.
- Develop empathetic content and tone: “Expressing” prompts are often confessions or emotional statements. Create content using first‑person stories, examples, and inclusive language to show that you understand users’ perspectives and feelings.
- Create safe, interactive spaces: Encourage readers to share their own experiences through moderated comment sections, forums, surveys, or user‑story submissions. Community pages, Q&A boards, and social‑listening threads let people express themselves in a supported environment.
- Offer reflective tools: Provide downloadable journaling templates, self‑assessment checklists, or guided exercises. These assets give users structured ways to express emotions or track progress, and they signal to AI engines that your site is a helpful resource for self‑expression, not just information or transactions.
4. Incorporate AIO keywords and optimize for AI Overviews
Google’s AI Overviews (AIO) appear in more and more SERPs, giving users full answers before they ever click. Find and analyze the keywords that trigger AI Overviews and incorporate them into your pages if you want to appear there, increase your visibility, and potentially win traffic. Steps include:
- Identify AIO keywords using Similarweb’s Keyword Research tools. Simply enter your competitor’s domain to see their keywords, then filter to show only keywords with SERP features.
Let’s see how it looks when I enter one of my pest-control website competitors in:
Using this view, I can easily extract the relevant keywords for my website and start optimizing for AIO.
- Optimize content for those keywords using SEO & GEO best practices: clear headings, bullet lists, succinct definitions, and up‑to‑date statistics.
- Monitor AIO performance with our rank tracker. By tracking your target keywords, you’ll get daily updates about their appearance in AIO. You’ll be able to know which of your pages appear in AI Overviews for which queries, and adjust as necessary.
- Perform a thorough AI competitive analysis to see competitors’ AI traffic and analyze the prompts that send them actual traffic.
5. Support AI “ranking” systems with authoritative signals
Invest in signals that AI engines care about, like building brand entity citations and authoritative backlinks. These signals help AI systems trust and reference your brand, much like in SEO (again, no surprise). I learned from experience how valuable backlinks can be for SEO (granted, the right backlinks).
Yes, websites can rank well without backlinks in some cases. But, if you ask me, when it comes to ranking #2 or moving up to #1 for highly competitive terms, this is not the case. When we’re reaching for the absolute top ranking position, often the SEO’s ability to acquire quality backlinks will be the skill that will tip the scales in our favor.
- Earn backlinks from reputable sites: High‑quality backlinks remain a cornerstone of both SEO and GEO. I suggest analyzing competitors’ backlink profiles, taking advantage of weak spots (lost links), and replicating what works. Digital PR, guest posts, and partnerships can secure citations from respected publications.
- Ensure consistent brand mentions: Even unlinked mentions help AI systems understand your brand entity. List your business in directories, publish press releases, and appear in industry round‑ups to signal relevance.
- Strengthen on-site signals (internal linking matters too): Internal linking and schema markup help search engines and AIs interpret your site structure. FAQs, How‑To schema, and product schemas can increase the likelihood of being cited.
6. Track and refine consistently
GEO, like SEO, is iterative. It’s important to track your changes and monitor their impact on your performance in AI engines. It allows you to spot things that work and double down on them, and things that don’t work, so you can tweak and test them. Just add the following steps to your weekly SEO/GEO analysis:
- Measure AI visibility. Identify topics where your brand appears in ChatGPT answers and topics where it doesn’t. Benchmark against competitors.
- Analyze AI traffic. See which landing pages receive traffic from ChatGPT, Perplexity, or other chatbots. Identify the prompts driving those visits.
- Test and refine. Adjust your content, templates, and backlinks based on what the data shows. Remember that GEO is dynamic, and trends evolve quickly.
How to measure visibility and traffic in gen-AI engines with Similarweb
1. Similarweb’s Gen AI visibility tool
Generative search is still a mystery for many SEOs and marketers: It’s challenging for brands to see where they are mentioned or how much traffic flows from AI responses. To bridge this gap, we launched our AI Brand Visibility Tool in June 2025.
Using this tool, I can check and monitor how often my website appears in ChatGPT answers for specific topics, and compare my visibility to my competitors. I can analyze prompts that trigger answers with my competitors and optimize for them myself. I can analyze citations by topics and see which websites are getting the most mentions at any given time.
This data allows me to benchmark my performance in AI and manage my GEO efforts based on potential reach and exposure.
Real-world usage example:
I’m doing SEO for a pest control website, and I’d like to get more mentions and citations in AI engines. This is how I’d research my visibility and competition:
- Visibility tracker: Set up my tracking topics, and check my website’s presence in ChatGPT answers. Later on, this will provide the overview for the entire activity.
- Prompt analysis: Next, I’ll analyze the actual prompts behind the answers. Here I can see what the users ask, how the AI responds, and which sources are linked from each answer. I can analyze each of these prompts’ answers, and the sources linked from them, and come up with a plan on how to get some visibility myself (i.e., expand my AI market share).
- Cite source analysis: Here, I can see which sites and sources appear in AI answers for questions that users ask about my core topics, and how many times.
This feature also allows me to tweak my GEO strategies: by visualizing the domain’s appearance frequency for each topic, it allows me to choose to focus my efforts on less competitive topics, thus gaining more visibility where there’s less competition.
In the citation analysis below, I can see that the “Mosquito prevention” and “Pest control” topics are dominated by very strong domains. However, I can also see that “Wildlife removal” is less competitive, and might allow me better chances to sneak into the citation mix and gain some visibility quickly.
- Competitive benchmarking: compare your visibility against top competitors and monitor trends.
Fact: Hundreds of millions of people use Gen AI tools daily. Brands risk losing visibility if they are not included in AI‑generated answers. This is not saying “forget Google”, this is saying “start paying attention to new traffic sources”.
2. AI Chatbot Traffic: Closing the loop between visibility and clicks
Visibility is only part of the picture. Generative engines also refer traffic to websites when they cite them as sources. Our AI Chatbot Traffic Tool fills that gap by allowing users to easily track how much GenAI traffic their sites (or their competitors) receive.
This saves the time and effort of setting a complex REGEX filter in GA4, plus it’s automatically updated when new gen-AI engines emerge. With this tool, I can type any domain address into the search bar and see its AI traffic, like this:

What can you measure with the AI Chatbot traffic tool?
- Track traffic from ChatGPT, Perplexity, and other AI chatbots (as shown above).
- See which landing pages are gaining the most from AI chatbots. You can also see what each chatbot’s traffic share is for each page of your site. This data can help you decide which optimization routes to take and how to implement your overall optimization strategy.
If you recognize types of content that get more traffic from a specific bot, you can act on it.
If you think a specific page should be getting more traffic from one of the engines, you can act on it, analyze it, and optimize.
This is one of my personal favorite views, I must say:
- Tweak your GEO strategy using our “top prompts” feature, and adjust your content to get more AI visits.
- Analyze your competitors’ AI traffic: Easily perform competitive AI analysis on any competitor website, check their traffic, and analyze their prompts side-by-side.
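The GA4 regex filter mentioned earlier can be approximated in a few lines. This is a hedged sketch: the hostname list is my own assumption about common AI-chatbot referrer domains, and it will need updating as new engines emerge (which is exactly the maintenance burden the tool saves you).

```python
import re

# Sketch of the kind of referrer regex used to isolate AI-chatbot traffic in
# GA4 reports. The domain list is an assumption (known chatbot referrers at
# the time of writing) and must be maintained as new gen-AI engines launch.

AI_REFERRER_PATTERN = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|gemini\.google\.com|"
    r"copilot\.microsoft\.com|claude\.ai)",
    re.IGNORECASE,
)

def is_ai_referral(referrer: str) -> bool:
    """True if a session's referrer looks like it came from an AI chatbot."""
    return bool(AI_REFERRER_PATTERN.search(referrer))

print(is_ai_referral("https://chatgpt.com/"))            # AI chatbot referral
print(is_ai_referral("https://www.google.com/search"))   # ordinary search referral
```

Note that plain `google.com` does not match: only the Gemini subdomain counts as AI traffic, so classic Google Search referrals stay in your regular organic bucket.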
Can you measure traffic from AI engines? Similarweb data says yes
Did you know we actually use our own platform at Similarweb? Here are some recent articles with insights based on the AI chatbot traffic data:
- Top sites receiving AI referrals: In a list of the “Top 50 Sites Getting Traffic from AI chatbots”, I wrote about AI traffic winners and implications for SEO. Data showed that YouTube received the most AI referrals, with Amazon and The Guardian leading in their respective categories. In the post, I detailed which platform leads where, shared some interesting prompts, and covered traffic trends from AI websites.
- Mental health referrals: One of our August 2025 insight articles highlights that when users ask ChatGPT and its competitors about depression or finding a therapist, the AI tends to funnel traffic to mental‑health sites such as Psychology Today, VeryWell Mind, and 7Cups. While the use of general‑purpose chatbots for mental‑health advice remains controversial, it is an emerging trend where generative engines act as referral hubs.
- Coming soon: AI traffic benchmarks by industry: Did you know that Similarweb’s AI traffic reports allow segmentation by industry? Our data shows that industries with rich informational content (media, ecommerce, health, and travel) receive the largest share of AI referrals. Conversely, industries lacking authoritative content or strong brand entities see little AI‑driven traffic.
This shows you that AI referrals are real and measurable (“if you build it, they will come”). AI referral traffic is still small relative to traditional search traffic, but generative engines are improving at citing sources, which suggests their share of referral traffic is likely to grow.
Looking ahead: From search intent to prompt intent
The evidence from OpenAI’s study shows that generative models don’t fit neatly into the classic “informational, navigational, transactional” buckets. Instead, user needs are better captured by prompt intent, whether they’re asking (conversational inquiries), doing (task‑oriented requests), or expressing (emotional statements).
Nearly half of all prompts are “asking” queries, which behave like a more open‑ended, conversational form of informational intent. This is basically what you’ll find at the Top of the Funnel, but that’s no reason to dismiss it: these prompts add up to half of the total in ChatGPT. Generative engines are synthesizing information instead of ranking links. Satisfaction metrics, like how complete and useful the answer is, matter more than click‑through rates.
As I see it, this shift has three big implications:
- Intent is fluid. Prompts are journeys, not keywords: Research must anticipate and include multi‑part (fan-out) questions and tasks rather than single keywords.
- “Asking” prompts often chain several queries together.
- “Doing” prompts require ready‑made templates or data sources.
- “Expressing” prompts call for empathy and authenticity.
Understanding these prompt types helps you optimize content better for LLM consumption.
- Visibility and traffic depend more on citations: Generative engines choose and synthesize sources. Being cited in a single answer can drive meaningful traffic, as our Gen-AI tools show. Building authority (through quality content, structured data, and trusted backlinks) increases the chance your brand will be referenced.
- Continuous adaptation is required: As models integrate more user context and as vertical AI assistants emerge, prompt intent taxonomies will evolve. SEOs must monitor which prompts surface their content and adjust strategies accordingly, treating GEO as an extension of SEO rather than a replacement (sorry, not sorry, Gurus).
In short, AI engines redefine “search intent” into what can only be called “prompt intent”, with conversational “asking” taking center stage. SEOs who learn how to analyze and optimize for these intents will be the first to provide the right answers, structured assets, and trustworthy signals, and will be well‑positioned for a future where AI intermediaries decide which sources inform the answers people see.
Conclusion: A call to action for the next generation of SEOs
The increased usage of Gen AI engines represents both a challenge and an opportunity for the industry. Generative engines changed the unit of optimization from a list of ranked links to a single synthesized answer. To succeed, SEOs must understand the dominant intents (especially “asking” intent), produce conversational and structured content, and build authority signals that LLMs can recognize.
Still feeling like you’re in the dark? Our AI Brand Visibility tool and AI Chatbot Traffic tool can provide the metrics (and the most accurate data) you need to measure progress.
FAQ
What is the difference between “search intent” and “prompt intent”?
Traditional search intent groups queries into informational, navigational, and transactional buckets. Generative engines like ChatGPT instead classify prompts by how the user engages: Asking (conversational inquiries), Doing (task requests), and Expressing (emotional statements).
Why is “Asking” considered the new informational intent?
“Asking” prompts are high‑level, conversational questions such as “Help me plan a 5-day trip to Italy.” Answers to these questions tend to be long, comprehensive, and include many sources. Users rate answers to “Asking” prompts more highly, so generative engines prioritize content that helps them deliver complete, satisfying responses.
How do “Doing” and “Expressing” intents differ from “Asking”?
“Doing” prompts ask the model to perform a task or produce content (e.g., “Write me a subject line for an outreach email”). “Expressing” prompts involve self‑expression or emotional statements. While “Doing” prompts need structured assets or templates, “Expressing” prompts call for empathetic, user‑centered language.
How should I optimize content for the “Asking” prompt intent?
Anticipate the query fan‑out: Asking prompts often chain several questions together. Provide comprehensive, context‑rich information (definitions, examples, step‑by‑step guidance) that LLMs can synthesise into a single answer. Write in a conversational tone and cite authoritative sources. Because “Asking” prompts yield the highest satisfaction, content should aim to be complete and trustworthy.
What strategies help with “Doing” prompt intent?
“Doing” prompts often require templates, scripts, or other assets. Offering ready‑made email templates, outlines or calculators gives LLMs building blocks to complete a task. Use schema markup (How‑To, Recipe, FAQ) so AI models can parse and reuse your content. Structured, task‑oriented resources help generative engines fulfill Doing requests efficiently.
How can I support “Expressing” prompt intent?
Brands can prepare for “Expressing” prompts by creating empathetic content. This means sharing first-person stories, using inclusive language, and creating safe spaces (such as forums or comment sections) that encourage users to share their experiences. Also offering reflective tools such as journaling templates or self‑assessment checklists signals that your site is a supportive resource for self‑expression.
How can I measure my brand’s visibility in generative AI engines?
Similarweb’s AI Brand Visibility Tool shows how often your brand appears in ChatGPT answers for specific topics and lets you benchmark against competitors. It identifies which sites drive citations and reveals the prompts behind those answers, giving actionable insight into which topics and question types you should target.
Why do satisfaction metrics matter more than click‑through rates in GEO?
In generative search, the engine synthesises one answer rather than serving a list of links. The OpenAI study found that “Asking” prompts yield higher satisfaction, and generative engines will likely favour sources that help them deliver helpful, complete answers. Optimizing for user satisfaction means providing authoritative, well‑structured information rather than chasing clicks.