Tutorials · 5 min

How to measure your brand presence in AI responses

Methodological guide to track and quantify your brand visibility in ChatGPT, Claude, Gemini, and Perplexity responses.

The problem is simple to state: unlike SEO, there is no "Google Analytics for AI." Nobody sends you a notification when ChatGPT recommends a competitor instead of you. Nobody tells you how many times per day Claude mentions your brand — or fails to.

Yet this data exists and can be measured. Here is how to get it.


The 4 key metrics of AI visibility

Before discussing methods, you need to define what you are measuring. Four indicators are essential for understanding your presence in language model responses.

1. Mention rate

The mention rate expresses the percentage of AI responses in which your brand appears, across a representative set of queries for your industry.

Example: If you query ChatGPT with 20 typical questions from your industry and your brand appears in 7 of them, your mention rate is 35%.

This is the baseline metric — the starting point for any AI visibility analysis.
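As a minimal sketch, the calculation is a case-insensitive substring match over your collected responses (the brand name `AcmeCRM` and the sample data are placeholders):

```python
def mention_rate(responses: list[str], brand: str) -> float:
    """Fraction of responses that mention the brand (case-insensitive)."""
    hits = sum(1 for r in responses if brand.lower() in r.lower())
    return hits / len(responses)

# 7 mentions out of 20 responses, as in the example above
responses = ["The top options are AcmeCRM and others."] * 7 + ["No relevant tools."] * 13
print(mention_rate(responses, "AcmeCRM"))  # 0.35
```

A bare substring match is deliberately naive; in practice you may need to handle brand aliases and alternate spellings.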

2. AI Share of Voice

AI share of voice measures your mention rate relative to your competitors on the same queries.

Example: Across 20 queries, you are mentioned 7 times, Competitor A 12 times, and Competitor B 4 times. Your share of voice is 7 / (7+12+4) = 30%.

This metric contextualizes your performance. A 35% mention rate might seem solid — but if it's half your main competitor's rate, that's a warning sign.
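Expressed as code, share of voice normalizes each brand's mention count by the total across all tracked brands (counts taken from the example above):

```python
def share_of_voice(mention_counts: dict[str, int]) -> dict[str, float]:
    """Each brand's share of all brand mentions across the query set."""
    total = sum(mention_counts.values())
    return {brand: count / total for brand, count in mention_counts.items()}

sov = share_of_voice({"You": 7, "Competitor A": 12, "Competitor B": 4})
print(round(sov["You"], 3))  # 0.304 -> roughly 30%
```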

3. Average position

When your brand appears in a list or recommendation, at what position does it appear? Being cited first in "here are the top solutions: A, B, C" is very different from being cited fifth.

Average position helps you understand whether you are considered a primary option or a secondary player in AI responses.
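A sketch of the computation, using only the responses where the brand actually appeared (rank 1 = cited first; the sample positions are invented):

```python
def average_position(positions: list[int]) -> float:
    """Mean rank across the responses where the brand was mentioned."""
    return sum(positions) / len(positions)

# Brand appeared at ranks 1, 3, 2 and 5 in four different responses
print(average_position([1, 3, 2, 5]))  # 2.75
```

Note that the metric is conditional on being mentioned at all, so it is best read alongside the mention rate.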

4. Coverage by model

Different LLMs do not mention you at the same frequency. ChatGPT, Claude, Gemini, and Perplexity each have distinct behaviors, different training data, and reference sources that can vary significantly.

Measuring your mention rate by model lets you identify your strengths and blind spots — and prioritize efforts where the gap is largest.
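Per-model coverage is the same mention-rate calculation, grouped by model. A sketch assuming you log one (model, mentioned) pair per query:

```python
from collections import defaultdict

def coverage_by_model(rows: list[tuple[str, bool]]) -> dict[str, float]:
    """Mention rate per model from (model_name, brand_mentioned) pairs."""
    totals: dict[str, int] = defaultdict(int)
    hits: dict[str, int] = defaultdict(int)
    for model, mentioned in rows:
        totals[model] += 1
        hits[model] += int(mentioned)
    return {model: hits[model] / totals[model] for model in totals}

rows = [("ChatGPT", True), ("ChatGPT", False), ("Claude", True), ("Claude", True)]
print(coverage_by_model(rows))  # {'ChatGPT': 0.5, 'Claude': 1.0}
```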


Manual method: testing your AI visibility yourself

It is entirely possible to start with a manual assessment, without any dedicated tool. The approach is straightforward.

Step 1 — Define a query set

List 15 to 20 questions your prospects would ask an AI assistant in your category. Examples:

  • "What is the best [your category] tool?"
  • "How do I choose a [your product type]?"
  • "What are the alternatives to [your main competitor]?"

Step 2 — Query multiple models

Ask each question to ChatGPT, Claude, Gemini, and Perplexity. Track in a spreadsheet whether your brand is mentioned, at what position, and in what context.

Step 3 — Calculate your metrics

Calculate your overall mention rate, your rate per model, and compare against 2 or 3 competitors.
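If the spreadsheet from step 2 is exported as CSV, the metrics fall out in a few lines. The column names (`model`, `query`, `mentioned`) are assumptions; adapt them to however you laid out your sheet:

```python
import csv
import io

# Stand-in for your exported tracking sheet; 1 = mentioned, 0 = not
SHEET = """model,query,mentioned
ChatGPT,best crm tool,1
ChatGPT,crm alternatives,0
Claude,best crm tool,1
Claude,crm alternatives,1
"""

rows = list(csv.DictReader(io.StringIO(SHEET)))
overall = sum(int(r["mentioned"]) for r in rows) / len(rows)
print(f"Overall mention rate: {overall:.0%}")  # Overall mention rate: 75%
```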

Limitations of the manual method

  • Time-intensive — 20 queries × 4 models = a minimum of 80 queries
  • Not reproducible — LLM responses vary from session to session
  • Impossible to track over time without a very rigorous protocol
  • Selection bias in query choice

The manual method is useful for a one-off audit. For ongoing tracking, it quickly reaches its limits.


Automated method: continuous tracking

AI tracking tools automate the entire process: sending queries, collecting responses, extracting mentions, calculating metrics, and storing historical data.

The typical workflow of a platform like Mentova:

  1. You define a set of representative queries for your industry
  2. The platform automatically queries multiple LLMs at regular intervals
  3. Mentions of your brand and competitors are extracted and analyzed
  4. Metrics are aggregated into a dashboard
  5. Alerts notify you of significant changes

The main advantage is reproducibility: the same queries, asked the same way, at regular intervals, produce comparable data over time.
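The workflow above can be sketched as a single snapshot function that runs the same queries against the same models on a schedule. Everything here is illustrative — the query list, model names, and `query_fn` hook are placeholders, not Mentova's actual pipeline:

```python
from datetime import datetime, timezone

QUERIES = ["What is the best CRM tool?", "What are the alternatives to Salesforce?"]
MODELS = ["chatgpt", "claude", "gemini", "perplexity"]

def run_snapshot(brand: str, query_fn) -> list[dict]:
    """One tracking pass. query_fn(model, query) should call your
    provider's API and return the response text."""
    snapshot = []
    for model in MODELS:
        for query in QUERIES:
            response = query_fn(model, query)
            snapshot.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "model": model,
                "query": query,
                "mentioned": brand.lower() in response.lower(),
            })
    return snapshot

# Append each snapshot to a JSONL file or database; identical queries run
# at regular intervals are what make the series comparable over time.
```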


Ideal dashboard: what to track each week

  Metric                           Frequency    What it reveals
  Overall mention rate             Weekly       General health of your AI visibility
  Share of Voice vs competitors    Weekly       Your relative position in the market
  Mention rate by model            Monthly      Where your blind spots are
  Average position in lists        Monthly      Are you cited first or last?
  90-day trend                     Quarterly    Direction and impact of your actions
  Queries with 0 mentions          Monthly      Missed opportunities to address

Benchmark: what is a good mention rate?

There is no universal standard — it depends on the sector, market maturity, and number of competitors.

  Market type                          Leader    Challengers
  Concentrated (2–5 major players)     40–60%    15–35%
  Fragmented (10+ players)             15–30%    5–15%
  New market or recent brand           5–10%     Focus on progression

The real question is not "am I at 30%?" but "am I above or below my competitors, and in which direction am I moving?"


Conclusion

Measuring AI visibility is no longer optional for brands operating in sectors affected by AI adoption. It is a performance indicator in its own right, on par with organic traffic or Net Promoter Score.

The good news: the data exists, the methods are accessible, and the gap between brands that measure and those that don't is widening every month.

Ready to track your AI visibility? Mentova analyzes your brand mentions across ChatGPT, Claude, Gemini, and Perplexity in minutes.

Discover your AI visibility in less than 5 minutes

Free, no credit card required.