
We're building the benchmark for AI visibility

Founded on independent research into how ChatGPT, Gemini, Perplexity, Google AI Overviews, and Google AI Mode recommend products and services.

AI visibility is the next channel

When your customers ask ChatGPT for a product recommendation, they do not get a list of ten blue links. They get a single narrative answer – a recommendation with reasoning, comparisons, and citations. If your brand is not part of that answer, you are invisible in the fastest-growing discovery channel on the web.

AI visibility is fundamentally different from search engine visibility. Google ranks pages. AI assistants recommend brands. Google shows you where to click. AI assistants tell you what to buy. The signals that drive AI recommendations – content depth per page, focused positioning, comparison content, substantive pricing pages – are different from the signals that drive traditional SEO. Schema markup, trust badges, blog posting frequency, and many of the tactics agencies rely on have weak or even negative correlation with AI visibility.

This is not a prediction. It is already happening. People are already asking ChatGPT, Gemini, Perplexity, and Google's AI surfaces for product recommendations, and the brands that appear in those answers are capturing attention, consideration, and revenue. The brands that do not appear are losing a channel they may not even know exists.

[Stat counters: AI responses analysed per brand · citation URLs tracked per brand · provider disagreement rate]

Why we built Citaition

We built Citaition because no existing tool provides the depth of intelligence that agencies need to turn AI visibility into a measurable service line. Tools that track AI mentions exist, but they treat visibility as a single number – aggregating across providers, ignoring competitive dynamics, and offering generic recommendations that don't account for a brand's specific situation.

Our approach started with research. We ran hundreds of queries per brand across ChatGPT, Gemini, Perplexity, and Google's AI surfaces to understand how AI recommendations actually work. We discovered that each provider behaves differently (OpenAI favours institutional sources, Gemini relies on review aggregators, Perplexity surfaces niche content). We found that competitor content can completely erase weaker brands from responses. We identified the specific content types – pricing pages, alternatives posts, documentation, community forums – that get cited and drive recommendations.
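The cross-provider comparison described above can be sketched in a few lines. Everything here is illustrative: the data shape, the provider names, and the definition of "disagreement" as differing recommended-brand sets are assumptions for the sketch, not Citaition's actual schema or scoring formula.

```python
# Hypothetical sketch: given per-provider AI responses to the same query,
# compute a simple "provider disagreement rate" -- the share of queries
# where providers do not all recommend the same set of brands.
# Field names and data shapes are illustrative only.

def disagreement_rate(responses_by_query):
    """responses_by_query maps a query string to a dict of
    {provider_name: set of recommended brands}."""
    if not responses_by_query:
        return 0.0
    disagreements = 0
    for by_provider in responses_by_query.values():
        brand_sets = list(by_provider.values())
        # Providers "disagree" if any recommended-brand set differs
        # from the first provider's set.
        if any(s != brand_sets[0] for s in brand_sets[1:]):
            disagreements += 1
    return disagreements / len(responses_by_query)

sample = {
    "best crm for startups": {
        "chatgpt": {"BrandA", "BrandB"},
        "gemini": {"BrandA", "BrandB"},
        "perplexity": {"BrandA", "BrandC"},
    },
    "best email tool": {
        "chatgpt": {"BrandX"},
        "gemini": {"BrandX"},
        "perplexity": {"BrandX"},
    },
}
print(disagreement_rate(sample))  # 0.5
```

A real pipeline would weight by query volume and track which brands each provider drops, but the core comparison is this simple set check per query.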

These findings are not marketing claims. They are the foundation of Citaition's diagnostic framework. Every scoring formula, every opportunity rule, every recommendation the platform generates is calibrated against this benchmark dataset. When Citaition tells you that a brand needs a more substantive pricing page or a specific competitor comparison post, that recommendation is backed by real citation data from AI responses across all five AI surfaces.

Our methodology

Citaition uses API-only access to query AI assistants. We do not scrape web interfaces or simulate browser sessions. Every query goes through the official API of each provider, and every response is captured programmatically with full citation data. This is important because API responses include structured citation URLs – the actual sources the AI used to form its answer – which are not always visible in the web chat interface. For practical guidance based on this methodology, see our guide on how to appear in ChatGPT recommendations.
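As a rough illustration of working with the structured citation data mentioned above, here is a minimal sketch that pulls de-duplicated citation URLs out of a captured response payload. The JSON shape is a simplified stand-in of our own invention; real provider payloads differ and need per-provider mapping.

```python
import json

# Illustrative only: a simplified response payload with structured
# citations, standing in for what a web-search-enabled API returns.
raw = json.dumps({
    "answer": "For project management, many teams choose ...",
    "citations": [
        {"url": "https://example.com/pricing", "title": "Pricing"},
        {"url": "https://example.com/alternatives", "title": "Alternatives"},
        {"url": "https://example.com/pricing", "title": "Pricing"},  # duplicate
    ],
})

def citation_urls(payload: str) -> list[str]:
    """Return de-duplicated citation URLs, preserving first-seen order."""
    data = json.loads(payload)
    seen, urls = set(), []
    for citation in data.get("citations", []):
        url = citation.get("url")
        if url and url not in seen:
            seen.add(url)
            urls.append(url)
    return urls

print(citation_urls(raw))
# ['https://example.com/pricing', 'https://example.com/alternatives']
```

Capturing URLs this way, straight from the API payload, is what makes citation tracking reliable: the sources are machine-readable rather than inferred from rendered chat output.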

We deliberately focus on web-search-enabled responses. When ChatGPT, Gemini, or Perplexity responds with web search turned on, the response reflects the AI's real-time understanding of brands based on current web content. This is the channel your agency can influence: publish better content today, and AI assistants can find and cite it in their next response. Training data responses, by contrast, reflect a months-old snapshot that cannot be directly influenced. We measure training data presence as a diagnostic dimension, but our actionable recommendations focus on the live search channel.

Citaition's marketing website is itself a live demonstration of the GEO best practices we surface for brands. Every page targets 1,000+ words of substantive content. Our pricing page includes detailed feature comparison tables. We publish comparison and alternatives content. We avoid thin landing pages, gated content, and the trust badge overload that our research shows has negative correlation with AI visibility. If our own platform cannot practice what it recommends, we have no business recommending it to anyone.

Research first, product second

Citaition started as a research project. Before writing a single line of product code, we spent months collecting and analysing AI responses to understand how LLMs actually form brand recommendations. The research – across ChatGPT, Gemini, Perplexity, and Google's AI surfaces – came first. The product was built to operationalise those findings.

This matters because the recommendations Citaition generates are not opinions or best guesses. They are calibrated against citation data from real AI responses across all five AI surfaces, with evidence showing which content types actually get cited and drive AI recommendations.

We are building Citaition for agencies because agencies understand distribution. A single agency managing 20 client brands adds 20 brands to the dataset with one account. This creates a compound data advantage: every brand analysed improves our benchmark dataset, which improves our diagnostic calibration, which improves the recommendations for every other brand.

Start building AI visibility as a service

Start your free 7-day trial

No credit card required

Frequently Asked Questions