AI engines do two separate jobs for local businesses. When a customer asks "recommend me a dentist in Sydney," the engine cites who you are: your homepage, your directory listings, your Google Business Profile. When the same customer asks "how much does a root canal cost," the engine cites what you wrote: your articles, your pricing pages, your guides. Two different systems. Two different sets of winners. We know this because we analysed approximately 6,200 citations across 12 markets and 5 industries.

Most businesses are visible to only one of these systems. A dental clinic with a strong Google Business Profile gets recommended by name, but when patients ask about procedures or costs, it disappears. A financial adviser who publishes guides on pension transfers shows up in every explanation query, but when someone asks "best financial adviser in London," the engines return a completely different list.

Last updated: April 2026

  • ~6,200 citations analysed
  • 12+ markets tested
  • 5 industries
  • 3 AI engines
Key findings
  • AI engines run two systems: one for "recommend me a provider" (entity-driven) and one for "help me understand something" (content-driven)
  • The type of question asked is a stronger predictor of who gets cited than the city, niche, or engine
  • For recommendations: homepages account for 45-66% of provider citations. Blog articles: 2-10%.
  • For explanations: articles account for 40-86% of provider citations. Homepages: 0-37%.
  • Businesses that produce content for both systems earn disproportionately more citations across both. In one market, 5.3x more.
  • Each market has its own third-party citation ecosystem. There is no universal playbook.

What we did

Four core datasets, collected between March and April 2026, plus a fifth (London IVF) used only for the cross-engine comparison. Different markets, different industries, different levels of granularity. Combined, the core datasets cover approximately 6,200 citations from all three major AI engines.

AU real estate, 6 cities (v1): 324 queries across Sydney, Melbourne, Perth, Brisbane, Adelaide, and Gold Coast. Three engines. 2,315 total citations. We classified every cited domain by ownership type (provider, directory, media, government) across all 6 cities, and ran URL-level page-type classification for Sydney. The full Australia experiment is published separately.

v2 cross-market study: 270 queries across 5 markets (Miami real estate, Sydney dental, Bali wellness, London finance, Singapore legal). Three engines, 7 intent categories, 3 prompts per category per market. 1,468 citations after deduplication (1,554 raw). Page-type classifications were corrected on 7 April 2026 after an adversarial reclassification flagged 185 URLs that had been misclassified as homepages. They were actually articles or service-specific pages (pricing, retreat packages). After correction, the content signal for informational queries is substantially stronger than we originally reported.

Phuket real estate: 60 queries (20 prompts across 3 engines). 568 citations. 94 agency domains identified. The full Phuket experiment is published separately.

Bali real estate: 1,778 citations across 5 engines (including Grok and Anthropic). 11 intent categories. Data from an active client audit.

London IVF: 60 queries across 3 engines. 19 clinics. Published separately as our London fertility clinic experiment. Not included in the page-type analysis below (no URL-level classification), but contributes to the cross-engine agreement comparison.
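
A note on method: page-type classification in this study comes from URL parsing. Here is a minimal sketch of the idea; the keyword lists, ordering, and example URLs are illustrative assumptions, not our production rules:

```python
from urllib.parse import urlparse

# Illustrative path keywords -- assumptions for this sketch, not our actual rules.
ARTICLE_HINTS = ("/blog/", "/guide", "/news/", "/article", "/insights/")
PRICING_HINTS = ("pricing", "fees", "cost", "package")
SERVICE_HINTS = ("service", "location", "area")

def classify_page_type(url: str) -> str:
    """Bucket a cited URL into the page types used in the tables below."""
    path = urlparse(url).path.lower().rstrip("/")
    if path == "":
        return "homepage"  # bare domain, no path
    # Order matters: /blog/root-canal-cost is an article, not a pricing page.
    if any(h in path for h in ARTICLE_HINTS):
        return "article"
    if any(h in path for h in PRICING_HINTS):
        return "pricing/fees page"
    if any(h in path for h in SERVICE_HINTS):
        return "location/service page"
    return "other"

assert classify_page_type("https://clinic.example/") == "homepage"
assert classify_page_type("https://clinic.example/blog/root-canal-cost") == "article"
```

The 185-URL correction described above is exactly the failure mode rules like these invite, which is why we re-checked the v2 classifications adversarially.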

Limitations
  • Small sample per intent per market in v2 (6-12 prompts per category). These are directional findings, not statistical proof.
  • ChatGPT model version was not recorded. ChatGPT behaviour varies significantly by model. A Writesonic study found 8% vs 56% brand-site citation rates across GPT-5.3 and GPT-5.4.
  • Gemini redirect URLs hide actual page paths. 24.6% of v2 citations (361 of 1,468) have unknown page type because of Gemini redirects.
  • AU real estate 6-city data is domain-level only for 5 of 6 cities. No URL-level page type outside Sydney.
  • Phuket per-intent ownership uses estimated values. Per-prompt cross-tabulation was not available.

The two systems

Ask an AI engine to recommend a provider and it cites who you are. Ask the same engine to explain a topic and it cites what you wrote. Same engine. Same market. Completely different citation behaviour.

We excluded Gemini from the page-type percentages below because its redirect URLs hide the landing page: naive URL parsing classifies every Gemini citation as a "homepage" regardless of where it actually points. Perplexity gives the cleanest picture and confirms this split. ChatGPT data aligns with the pattern but has lower citation volume.
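
Reproducing the exclusion is straightforward if you log citation URLs. A minimal sketch; the redirect host below is an assumption, so check your own exports for the exact host Gemini returns:

```python
from urllib.parse import urlparse

# Assumed redirect host -- verify against your own Gemini citation exports.
REDIRECT_HOSTS = {"vertexaisearch.cloud.google.com"}

def has_usable_path(url: str) -> bool:
    """True if the URL exposes a real page path rather than an opaque redirect."""
    return urlparse(url).netloc.lower() not in REDIRECT_HOSTS

citations = [
    {"engine": "perplexity", "url": "https://agency.example/blog/buyer-fees"},
    {"engine": "gemini", "url": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AbC123"},
]
usable = [c for c in citations if has_usable_path(c["url"])]  # keeps only the Perplexity row
```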

System 1: Recommendations

Queries like "best dentist in Sydney" or "recommend a financial adviser in London." The engine cites who you are. Homepages dominate.

What engines cite for recommendation queries
Page type | Share of provider citations (non-Gemini) | Source
Homepage | 45-66% | v2 corrected: generic-rec 66%, need-specific 59%
Directory listing | 6-27% | v2 + AU RE
Google Maps card | 0-12% | ChatGPT artifact
Blog article | 2-10% | v2 corrected: generic-rec 5%, need-specific 10%
Location/service page | 5-15% | v2 corrected, combined

Homepages account for 45-66% of provider citations on recommendation queries. Blog articles sit at 2-10%. If a business has invested exclusively in content marketing, it barely registers when someone asks for a recommendation by name.

System 2: Explanations

Queries like "how much do agents charge" or "what should I look for in a dentist." The engine cites what you wrote. Articles and pricing pages take over.

What engines cite for informational queries
Page type | Share of provider citations (non-Gemini) | Source
Blog article | 34-86% | v2 corrected: comparative 86%, advisory 41%, cost-fees 40%
Pricing/fees page | 0-36% | v2 corrected: cost-fees 36%
Homepage | 0-37% | v2 corrected: comparative 0%, cost-fees 4%, advisory 37%

For comparative queries ("buyer's agent vs regular agent"), articles account for 86% of provider citations. Homepages drop to 0%. The homepage that dominates recommendation queries is invisible for explanations.

The v2 reclassification made this sharper. Before we corrected those 185 misclassified URLs, the content signal looked moderate. After correction, it is overwhelming for certain intent types.

The intent gradient

City matters less than you would expect. Engine matters less than you would expect. The strongest predictor of who gets cited is the type of question the customer asks.

Provider-own citation share (the percentage of citations going to actual businesses rather than third-party sites) decreases predictably as queries shift from "recommend me someone" to "help me understand something."

Provider-own citation share by intent type
Intent type | v2 cross-market | AU RE Sydney | Phuket | Bali RE
Need-specific | 78% | 73% | ~55% | 79%
Specialist | 73% | N/A | ~50% | 88%
Generic recommendation | 63% | 23% | ~69% | 63%
Area-specific | 63% | 68% | ~64% | N/A
Advisory | 60% | 0% | ~50% | 68%
Cost/fees | 56% | 0% | ~48% | 81%
Comparative | 53% | 17% | N/A | N/A
Key finding: intent type swings provider-own share from 0% to 78%. By comparison, city only moves it between 68% and 84% in the AU real estate 6-city data. The type of question asked is the single strongest predictor of who gets cited, stronger than city, niche, or engine.
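
Provider-own share is simple to recompute from per-citation records if you want to test the gradient in your own market. A minimal sketch; the field names and toy rows are assumptions:

```python
from collections import defaultdict

# One dict per citation -- toy rows, not our dataset.
citations = [
    {"intent": "need-specific", "owner": "provider"},
    {"intent": "need-specific", "owner": "directory"},
    {"intent": "cost-fees", "owner": "provider"},
    {"intent": "cost-fees", "owner": "media"},
    {"intent": "cost-fees", "owner": "media"},
]

total, own = defaultdict(int), defaultdict(int)
for c in citations:
    total[c["intent"]] += 1
    own[c["intent"]] += c["owner"] == "provider"  # True counts as 1

for intent in total:
    print(f"{intent}: {100 * own[intent] / total[intent]:.0f}% provider-own")
# need-specific: 50% provider-own
# cost-fees: 33% provider-own
```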

AU RE Sydney is the outlier for generic-recommendation queries (23% provider-own). Australia has a mature directory and listicle ecosystem that most other markets lack. Ratemyagent, whichrealestateagent, and top10realestateagent collectively account for hundreds of citations across 6 Australian cities. That infrastructure pulls provider-own share down for generic recommendation queries specifically. It is a market-specific feature, not a universal pattern.

Bali RE shows the opposite pull. Cost/fees queries in Bali return 81% provider-own citations, whereas the same intent type in Sydney returns 0%. Our client in Bali publishes pricing guides. Most Sydney agents do not.

The proof: one client, two systems

One of our clients operates in a single market with one website. The same engines query the same brand. The citation profiles split cleanly by question type.

For high-intent recommendation queries ("recommend a provider in [city]"), homepage and about page citations accounted for 22 of 25 total (88%). Articles produced 2 of 25 (8%). The engines recommended the brand as an entity.

For informational queries ("how much does X cost," "what should I look for in a Y"), articles accounted for 41 of 45 citations (91%). The homepage produced 1 of 45 (2%). The engines treated the same website as a reference library.

The same client's homepage accounted for 88% of recommendation citations. The same client's articles accounted for 91% of informational citations. Same brand, same engines, two different systems.

This is not two strategies competing with each other. It is two systems operating independently on the same brand. The same website wins both, through different pages, for different questions. Businesses that only invest in one side leave the other completely open.

Content compounds into recommendations

Businesses that publish explanatory content do not just win informational queries. They win more recommendation queries too.

In our Phuket real estate dataset (568 citations, 94 agency domains), content-rich agencies averaged 13.7 citations each. All other agencies averaged 2.6. That is a 5.3x gap.

82% of the top-cited domains had editorial content: blogs, guides, market reports. None of the listing-only sites ranked in the top tier. Zero.
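
The gap itself is a plain two-group comparison. A minimal sketch with invented numbers, not the Phuket data:

```python
from statistics import mean

# Per-domain citation totals with an editorial-content flag -- invented rows.
agencies = [
    {"domain": "a.example", "citations": 15, "editorial": True},
    {"domain": "b.example", "citations": 12, "editorial": True},
    {"domain": "c.example", "citations": 3, "editorial": False},
    {"domain": "d.example", "citations": 2, "editorial": False},
]

rich = mean(a["citations"] for a in agencies if a["editorial"])
rest = mean(a["citations"] for a in agencies if not a["editorial"])
print(f"content-rich {rich:.1f} vs others {rest:.1f} -> {rich / rest:.1f}x")  # 13.5 vs 2.5 -> 5.4x
```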

Why content might compound

Our working hypothesis: articles build entity familiarity. The more an engine encounters a brand in informational contexts, the more likely it becomes to surface that brand for recommendation queries too. If this holds, businesses that invest in explanations win more recommendations as a side effect. The correlation is clear in our Phuket data. The causal mechanism is plausible but unproven.

Two caveats. This finding comes from one market (Phuket real estate). The pattern is directional, not confirmed cross-market. And one agency in this dataset has deployed an llms.txt file, the first in-the-wild signal we have observed of a business deliberately signalling to AI engines.
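
For context: llms.txt is a proposed convention, a plain-markdown file served at /llms.txt that points AI crawlers to a site's most important pages. A hypothetical example for a local agency; every name and URL below is invented:

```
# Example Villas Phuket

> Phuket real estate agency covering sales, rentals, and foreign-ownership guidance.

## Guides

- [Foreign ownership in Thailand](https://example-villas.com/guides/foreign-ownership): leasehold vs freehold, explained
- [Buying costs and fees](https://example-villas.com/guides/buying-costs): taxes, transfer fees, agent commissions

## Company

- [About us](https://example-villas.com/about): team, licences, track record
```

Whether engines actually read the file is unproven; we flag it only as an early signal worth watching.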

Each market has its own third-party playbook

The advice to "get listed on third-party sites" assumes those sites exist. In some markets they do. In others they barely exist at all.

Third-party citation ecosystems by market
Market | Third-party % | Dominant type | What drives it
AU real estate (6 cities) | 25% | Directories/listicles | ratemyagent (64 cit.), whichrealestateagent (62), top10realestateagent (51)
Phuket real estate | 42% | Legal-advisory + government | Thai property law complexity. Highest third-party rate.
Miami real estate | 35% | Directories + blog-media | US RE directory infrastructure
London finance | 40% | Blog-media + directories | Financial media (ftadviser, moneyhelper)
Singapore legal | 27% | Directories + government | Government licensing body (MLAW) cited as authority
Sydney dental | 20% | Minimal | Thinnest ecosystem. Provider sites = 80% of citations.
Bali wellness | 32% | Review platforms + directories | TripAdvisor, BookRetreats

In the Australian real estate data, we identified 274 listicle and directory citations across 6 cities. The top 3 domains appear in all 6 cities. Perplexity is the heaviest listicle citer. ChatGPT rarely cites listicles; it pulls from Google Maps cards instead.

Sydney dental sits at the other end. Only 20% third-party. Provider websites account for 80% of all dental citations. If you are a Sydney dentist, third-party placements are near-irrelevant. Your own site is the entire game.

The listicle question

Ahrefs published research on 26,000 URLs across 750 prompts showing listicles at 43.8% of ChatGPT commercial citations. Our local-services data does not replicate this. Provider sites account for 65-93% of recommendation citations across our v2 markets.

The Ahrefs finding may be specific to B2B and SaaS queries. Or our sample may be too small to contradict it. Both are possible. What we can say: the advice to "get on listicle sites" requires checking whether listicle sites actually exist in your market first.

The engines run different systems

Each engine has distinct citation behaviour. Treating them as interchangeable is a mistake we see constantly.

ChatGPT: the Google Maps engine

In our Australian real estate data, 105 Google Maps citations accounted for 69% of ChatGPT's third-party citations. In Phuket, the pattern held: ChatGPT pulled almost entirely from Google Business Profile cards.

ChatGPT visibility requires strong Google Business Profiles plus reviews. Content quality matters less for ChatGPT than for either Perplexity or Gemini. If your Google Business Profile is incomplete, ChatGPT probably cannot see you at all.

One caveat that matters: we did not record the ChatGPT model version. A Writesonic study of 50 prompts found brand-site citation rates swinging from 8% on GPT-5.3 to 56% on GPT-5.4. Our data blends whatever model was default at test time.

Gemini: the citation machine

Gemini produces the most citations by volume. In Phuket, it generated 57.6% of all citations (327 of 568), averaging 16.4 citations per prompt. For foreign-ownership queries specifically, that average climbed to 25.2 per prompt, which is 6.6 times ChatGPT's rate for the same queries.

Across the 6 Australian cities, Gemini accounted for 1,082 of 2,315 total citations (47%). Provider-own citation rates were consistent at 67-85% regardless of city.

Gemini's redirect URLs prevent anyone from seeing actual page paths. Researchers who rely on URL parsing will overcount homepages. We excluded Gemini from our page-type breakdowns for this reason.

Perplexity: the most balanced

Perplexity is the most content-responsive engine. For recommendation queries, it cited articles at 17-28%, compared to under 4% on other engines. It is also the heaviest listicle citer in the Australian data.

Citation volume is consistent: 6-10 per response regardless of intent type. And Perplexity gives full URL paths, no redirects, no obfuscation. That makes it the most reliable data source for understanding what page types actually get cited.

Engine comparison
Engine | AU RE total | AU RE own % | Phuket total | Phuket avg/prompt | Key behaviour
Gemini | 1,082 | 76.2% | 327 (57.6%) | 16.4 | Broadest coverage
Perplexity | 772 | 77.3% | 156 (27.5%) | 7.8 | Most content-responsive
ChatGPT | 461 | 67.0% | 85 (15.0%) | 4.3 | Google Maps dependent

Cross-engine agreement is near zero

Does the same domain appear on all 3 engines for the same prompt? Almost never. We measured this across all markets where we had per-prompt data: the 6 Australian cities (counted individually), our 5 v2 markets, plus Phuket and London IVF separately.

Cross-engine agreement by market
Result | Markets
0% agreement | 11 of 13 market-city combinations
26% agreement | Phuket RE
79% agreement | London IVF

Eleven of the thirteen market-city combinations showed zero cross-engine agreement. The two exceptions share a specific trait: both have tiny English-language webs with either a single regulator (the HFEA for London IVF) or a closed expat ecosystem (Phuket). Our revised hypothesis is that convergence is driven by regulatory centralisation, not market size.

Bali wellness should have converged if market size were the driver. It has a similarly small tourism economy to Phuket, with only 22 unique domains cited across all engines. But agreement sat at zero. Whatever makes Phuket and London IVF different, it is not the number of businesses competing for attention.

From our own audit data (69 prompts, 470 unique domains across 4 engines): 64.9% of cited domains appear in only one engine. Being visible on one engine does not mean you are visible on the others.
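
Both numbers in this section fall out of per-prompt citation logs. A minimal sketch; the row format and toy data are assumptions:

```python
from collections import defaultdict

# One tuple per citation: (prompt, engine, domain) -- toy rows, not our dataset.
rows = [
    ("best agency in phuket", "chatgpt", "a.example"),
    ("best agency in phuket", "perplexity", "a.example"),
    ("best agency in phuket", "gemini", "a.example"),
    ("best agency in phuket", "gemini", "b.example"),
]
ENGINES = {"chatgpt", "perplexity", "gemini"}

# Cross-engine agreement: prompts where some domain is cited by all three engines.
per_prompt = defaultdict(lambda: defaultdict(set))
for prompt, engine, domain in rows:
    per_prompt[prompt][domain].add(engine)
agreeing = sum(any(v == ENGINES for v in doms.values()) for doms in per_prompt.values())
print(f"agreement: {100 * agreeing / len(per_prompt):.0f}% of prompts")

# Single-engine share: domains cited by exactly one engine anywhere in the data.
seen = defaultdict(set)
for _, engine, domain in rows:
    seen[domain].add(engine)
single = sum(len(v) == 1 for v in seen.values())
print(f"{100 * single / len(seen):.0f}% of domains appear in only one engine")
```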

What this means

1. Sort by question type first. A single strategy fails because these engines use different systems depending on what the customer asked. Recommendation queries need entity signals. Informational queries need topic-specific content. Start by identifying which type of question your customers ask most.

2. For recommendations: homepage quality, directory listings, Google Business Profile completeness, and reviews are the primary levers. Content has limited direct impact (2-10% of recommendation citations). But content has a strong indirect effect: the businesses that publish explanatory content also win more recommendations. In Phuket, we measured a 5.3x citation gap between content-rich and minimal-content businesses.

3. For explanations: topic-specific articles are the primary lever. Perplexity is the most content-responsive engine at 17-28% article citation rate. Build pricing and fees pages as a distinct content type rather than burying fee information inside longer articles.

4. Map your market's third-party ecosystem. The "get listed on third-party sites" advice only works in markets where those sites exist and where engines actually cite them. Australian real estate has ratemyagent. London finance has ftadviser. Sydney dental has almost nothing. Check what exists in your specific market before investing in third-party placements.

5. Plan for multi-engine visibility. Being visible on ChatGPT does not mean you are visible on Perplexity. ChatGPT runs on Google Maps data. Perplexity responds to content. Gemini cites the most sources but hides URL paths. Each engine requires different inputs, and 64.9% of cited domains appear on only one engine.

What we don't know yet

ChatGPT model version split. Different models show dramatically different citation behaviour, and our data blends whatever model was default at test time. Until someone runs the same prompts across GPT-5.3 and GPT-5.4 simultaneously, all ChatGPT citation research carries this asterisk.

Whether the compounding pattern (content building recommendation visibility) holds outside Phuket. We have one market where content-rich businesses earn 5.3x more total citations. That could be universal or it could be specific to small tourism markets with thin competition.

Whether these patterns are stable over time. This is a single snapshot from April 2026.

Why Phuket converges at 26% when Bali (a similarly small tourism market) shows 0% agreement. Our regulatory-centralisation hypothesis is plausible but unproven.

How Gemini's redirect URLs distort page-type analysis across the dataset. We excluded Gemini from page-type breakdowns, but 24.6% of v2 citations sitting in an "unknown" bucket is a meaningful limitation.

FAQ

How do AI engines decide which local businesses to recommend?

Based on approximately 6,200 citations, we found AI engines use two separate systems. For "recommend me a provider" queries, engines cite homepage and entity signals: directory listings, Google Business Profiles, reviews. For "explain this topic" queries, engines cite topic-specific content: articles, guides, pricing pages. The type of question matters more than the city or the engine.

Do ChatGPT, Perplexity, and Gemini recommend the same businesses?

Almost never. Across 13 market-city combinations, 11 showed 0% cross-engine agreement (same domain cited by all 3 engines for the same prompt). Only Phuket real estate (26%) and London fertility clinics (79%) showed meaningful overlap. 64.9% of all cited domains appear in only one engine.

Does publishing articles help businesses get recommended by AI?

Directly, articles account for only 2-10% of recommendation-query citations. Indirectly, yes. Content-rich businesses in our Phuket dataset earned 5.3 times more total citations than minimal-content businesses, including recommendations. Content builds the entity signal that makes engines more likely to recommend you.

Is getting listed on third-party sites important for AI visibility?

Depends on your market. Australian real estate has a developed directory ecosystem accounting for 25% of citations. Sydney dental has almost none, with 80% of citations going directly to provider websites. Check what third-party infrastructure exists in your specific market before investing.

Sources

  1. Cited Research, v2 cross-market query intent study, April 2026. 270 queries across 5 markets, 3 engines, 7 intent categories. 1,468 citations (deduplicated). Page types corrected 7 April 2026.
  2. Cited Research, Australian real estate 6-city experiment, March-April 2026. 324 queries across 6 cities, 3 engines. 2,315 citations.
  3. Cited Research, Phuket villa agency experiment, April 2026. 60 queries, 3 engines. 568 citations.
  4. Cited Research, Bali real estate audit data, 2026. 1,778 citations across 5 engines.
Lennart Vallo
Founder, Cited

We built Cited because no one was measuring what AI engines actually recommend. Our methodology is public, our data is first-party, and we practise on ourselves before we advise clients.
