Four AI engines, 20 Australian suburbs, 746 distinct agent names returned. The engines agreed on thirteen. We audited five of those thirteen against five matched-suburb controls and one signal did almost all the work: RateMyAgent review depth. The winners held a median of 274 reviews. The controls held 35. Ted Pye runs Surry Hills (236 reviews, 164 sales last year). Betty Ockerlander runs Epping (306). Nick Tang runs Box Hill (314). Four engines, four review-deep specialists. No shortcut replaced the review pile, and no polished website rescued the agents who did not have one.

This is not a ranking of the best agents in Australia. It is a ranking by AI citation agreement. Olivia Porteous, Aaron Storey, and Juliet Chen are accomplished operators who happen to sit outside the narrow band of signals that current AI engines reward. The article is about what those signals are.

Last updated: April 2026

746 Distinct agent names returned
13 Named by all 4 engines
274 Median RMA reviews (winners)
35 Median RMA reviews (losers)
Key findings
  • Four AI engines × 20 Australian suburbs × 2 prompt families returned 746 distinct names. 13 were named by every engine.
  • We audited 5 consensus winners against 5 matched-suburb controls on 15 entity-footprint signals using public sources only.
  • Sharpest differential: RateMyAgent reviews at or above 100. Four of five winners cleared the bar; one of five controls did.
  • The median gap is eightfold. Winners: 274 RMA reviews. Controls: 35.
  • Four agents break the pattern in instructive ways. Olivia Porteous, Aaron Storey, Mack Hall, Juliet Chen. Each anomaly teaches a different lesson about how AI assembles an entity.
  • Perth is over-represented. Seven of the 10 identifiable consensus agents sit in Western Australia. Smaller markets concentrate entity signals on fewer names.

What we did

Two datasets, both collected on 21 April 2026. Four engines, twenty suburbs, two prompt families per suburb. 160 responses. 152 usable.

The parent study asked each engine (ChatGPT, Gemini, Perplexity, Grok) to recommend a real estate agent in a named Australian suburb, then to name the top agents in that suburb. We aggregated the names, deduplicated, and flagged every agent named by two or more engines. 167 agents (22.4%) were cross-engine. 579 (77.6%) appeared once. 13 were named by all four. The Australian hub article breaks the suburb-by-suburb distribution down.
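The aggregation step above can be sketched in a few lines. This is an illustration, not our production pipeline: the `consensus` function and the single-suburb response data are hypothetical, though the engine and agent names come from this article.

```python
from collections import defaultdict

def consensus(responses):
    """Map each normalised agent name to the set of engines that
    returned it, then bucket names by engine agreement."""
    agents = defaultdict(set)
    for engine, names in responses.items():
        for name in names:
            agents[name.strip().lower()].add(engine)
    cross_engine = {a for a, e in agents.items() if len(e) >= 2}
    unanimous = {a for a, e in agents.items() if len(e) == len(responses)}
    return agents, cross_engine, unanimous

# Hypothetical single-suburb responses (Epping NSW)
responses = {
    "chatgpt":    ["Betty Ockerlander", "Juliet Chen"],
    "gemini":     ["Betty Ockerlander"],
    "perplexity": ["Betty Ockerlander"],
    "grok":       ["Betty Ockerlander"],
}
agents, cross_engine, unanimous = consensus(responses)
print(unanimous)  # {'betty ockerlander'}
```

At study scale, the same bucketing is what yielded 746 distinct names, 167 cross-engine names, and 13 unanimous ones.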

The pilot audit then took five of those 13 consensus winners and paired each against a matched-suburb control: an agent named by exactly one engine for the same suburb. We scored both on 15 public entity-footprint signals. Personal domain. Agency profile page. RateMyAgent presence. Review count. LinkedIn. YouTube. Press mentions. Schema markup. We used public sources only, no scraping, no paid data.

Ten agents. One day. That is a pilot, not a population study.

Caveats we are honest about
  • Ten agents. One day. 20 suburbs. A pilot.
  • Perplexity hallucinated Vaucluse NSW to South Carolina. Grok hallucinated three Adelaide suburbs (Norwood, Marion, Salisbury) to US locations. ChatGPT was the only engine that stayed 40/40 geographically accurate.
  • RateMyAgent is one platform. Agents with strong Domain or realestate.com.au profiles but a thin RMA presence are under-measured here.
  • Review depth correlates with sales volume, tenure, press coverage, suburb specialisation. Correlation, not causation.

The 13 agents AI agrees on

Across 20 suburbs and four engines, only 13 individual agents were named by every engine. Ten names are identifiable individuals. Three are fragmentary brand references (Knight Frank, Hooker Pakenham, Logan City) that we excluded from individual analysis.

Seven of the ten sit in Western Australia. Two in New South Wales. One in Victoria. That geographic skew matters. Smaller capital markets concentrate entity signals on fewer names, and Perth in particular throws up multiple consensus winners in the same suburb (Peppermint Grove returned Mack Hall, Jody Fewster, and Vivien Yap as cross-engine picks). The Perth spoke unpacks that concentration.

The 5 consensus winners we audited
Agent | Agency | Suburb | Sales (12mo) | RMA reviews
Ted Pye | Belle Property Surry Hills | Surry Hills NSW | 164 | 236
Betty Ockerlander | McGrath Epping | Epping NSW | 79 | 306
Nick Tang | Nick Tang Property | Box Hill VIC | 28 | 314
Mack Hall | Mack Hall Real Estate / Knight Frank | Peppermint Grove WA | 127 (agency) | 0 (individual)
Nigel Ross | Ross Realty | Morley WA | 75 | 274

Run your eye down that column. 236. 306. 314. 0. 274. One outlier, four heavyweights. We come back to Mack Hall later.

What separates winners from the rest

We scored all ten agents on 15 signals. Most signals were binary (does this exist or not), a few were threshold-gated (reviews at or above 100, press mentions at three or more in the last 24 months). The pattern that held up under pressure was narrower than we expected.

Signal differential: 5 winners vs 5 controls
Signal | Winners (X/5) | Controls (X/5) | Differential
Personal domain | 4/5 | 2/5 | +2
Structured bio with photo | 5/5 | 5/5 | 0 (table stakes)
Agency profile page | 5/5 | 5/5 | 0 (table stakes)
RateMyAgent profile exists | 5/5 | 5/5 | 0 (table stakes)
RMA reviews at or above 100 | 4/5 | 1/5 | +3 (strongest)
Complete REA/Domain profile | 5/5 | 2/5 full, 3/5 partial | +2
Suburb specialisation stated | 5/5 | 4/5 | +1
Press mentions (3+ in 24mo) | 2/5 | 0/5 | +2 at threshold
Own-branded YouTube channel | 2/5 | 1/5 partial | +1
LinkedIn exists | 3/5 | 2/5 | muddy
Wikipedia/Wikidata | 0/5 | 0/5 | 0
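The rubric behind that table can be sketched as a small scoring function. The field names and the example record below are our own illustration, not real audit rows; only the 100-review and three-press-mention thresholds come from the audit itself.

```python
def score_signals(agent):
    """Score a subset of the 15 entity-footprint signals.
    Binary signals check existence; gated signals apply a threshold."""
    reviews = agent.get("rma_reviews")
    return {
        "personal_domain":  agent.get("personal_domain") is not None,
        "rma_profile":      reviews is not None,
        "rma_reviews_100":  (reviews or 0) >= 100,              # threshold gate
        "press_mentions_3": agent.get("press_mentions_24mo", 0) >= 3,
    }

# Hypothetical agent record, not a real audit row
example = {"personal_domain": "example-agent.com.au",
           "rma_reviews": 143, "press_mentions_24mo": 1}
scores = score_signals(example)
print(scores)
```

A winner column in the table is just the count of agents, out of five, for whom a given key comes back `True`.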

Four signals produced meaningful separation. Review count at or above 100 was strongest. Complete realestate.com.au and Domain profiles came next. Press mentions at the three-or-more threshold came third. Personal domain came fourth but is muddier because Aaron Storey has a personal domain (whyaaron.com) and was still outside AI consensus. Domain plus review depth is where the correlation holds.

Most striking: the table-stakes. Every agent, winner or loser, had a bio with a photo and a RateMyAgent profile. Those are the ticket to enter, not the ticket to win.

AI picks the agent with the deepest review profile in each suburb, not the one with the slickest website.

The RateMyAgent review-depth threshold

Count them. Four of our five winners had 236 or more RateMyAgent reviews. Betty Ockerlander had 306. Nick Tang had 314. Ted Pye had 236. Nigel Ross had 274. Mack Hall had zero as an individual but sits on an agency-level review pool that dwarfs most competitors (we come back to this).

On the control side: Conrad Vass (9 RMA reviews). Olivia Porteous (25). Christian Chan (35). Aaron Storey (98, the highest control). Juliet Chen (143, RMA Carlingford Agent of the Year 2021). Only one crossed 100.

Median review gap
274 vs 35
Winners vs controls, RateMyAgent reviews
Eight times the reviews. The sharpest single differential in the audit, and the only signal where at least four of five winners cleared the bar and no more than one of five controls did.
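Both medians can be reproduced from the per-agent review counts quoted in this article; the sketch below just reruns that arithmetic.

```python
from statistics import median

winners  = {"Ted Pye": 236, "Betty Ockerlander": 306, "Nick Tang": 314,
            "Mack Hall": 0, "Nigel Ross": 274}
controls = {"Conrad Vass": 9, "Olivia Porteous": 25, "Christian Chan": 35,
            "Aaron Storey": 98, "Juliet Chen": 143}

w_med = median(winners.values())   # 274
c_med = median(controls.values())  # 35
print(w_med, c_med, round(w_med / c_med, 1))  # 274 35 7.8
```

The "eightfold" headline is 7.8 rounded; note the winner median holds even with Mack Hall's zero in the set, because the median ignores the outlier.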

Why reviews and not sales? Both correlate. Winners moved 28 to 164 properties in the last twelve months. Controls moved 3 to 47. Sales volume is a directional signal. But transaction counts are not equally visible to AI engines. A review is text. Each RateMyAgent review carries a star rating and a short paragraph of prose that names the agent. Reviews are machine-readable entity signal in a way that sales figures on an agency dashboard are not.

Count your RateMyAgent reviews right now. If the number is under 100 and you are in a mid-competition metro suburb, you are almost certainly invisible to AI consensus for that suburb. Earn more reviews honestly. There is no shortcut. Trying to game the platform is a career-ending mistake, and it does not work anyway because AI consensus requires multi-platform signal that gamed reviews cannot produce.

One more wrinkle from the audit. The gap between 100 and 274 reviews matters less than the gap between 35 and 100. The 100-review threshold appears to be the door; everything above it is ordering inside a shortlist that engines already trust. Juliet Chen crossed the threshold at 143. She was still only named by one engine because Betty's 306 dominated the suburb. Crossing 100 is a necessary condition. It is not the end of the work.

Why a polished brand cannot rescue you

Olivia Porteous has a personal website that would be the envy of any agent. Clear luxury positioning. Multiple suburb pages. Instagram, Facebook video, LinkedIn. Her own site describes her as "WA's Most Sought After Luxury Real Estate Agent." She has 25 RateMyAgent reviews and was named by one engine out of four.

Her suburb is Peppermint Grove. Mack Hall, Jody Fewster, and Vivien Yap already own it in AI consensus. Every engine names all three. Every one of them has the review depth Olivia does not yet have.

That is the trap. Peppermint Grove is a small, saturated luxury market. AI engines consolidate on the top three review-deep names per suburb and do not appear to surface a fourth regardless of brand polish. A website signals quality to a human. It does not signal entity-level authority to a language model that has already assembled its Peppermint Grove shortlist from other signals.

What this means

In a saturated suburb with clear AI consensus, a polished personal brand alone does not break through. The top three are locked in. The way out is either a different primary suburb where AI consensus has not crystallised yet, or the years of review accumulation it takes to displace an incumbent.

None of this makes Olivia a weaker agent.

Why suburb-dominance beats total credentials

Does a 2,000-sale career beat a 274-review suburb specialist? On paper, yes. In AI, no.

Aaron Storey runs Beaucott Property. Career sales over 2,000. 98 RateMyAgent reviews. 35 sales in the last twelve months. A personal domain at whyaaron.com. We queried all four engines for Morley agents. One named him.

Geography explains why. Aaron's primary suburb is Bayswater. Morley appears in his service area but he is not the #1 Morley specialist. Nigel Ross is. Ross Realty is a Morley-headquartered family firm with 274 RMA reviews and 75 sales in the last twelve months. AI engines read suburb-specific signals as "this is the Morley person" and the signals point to Nigel.

Aaron has the review depth (almost). He has the career numbers. What he does not have is suburb-specific entity strength for Morley. Being good everywhere loses to being the name in one suburb.

When your name is the brand

Mack Hall has zero individual RateMyAgent reviews. He was named by all four engines for Peppermint Grove. This should not happen under our review-depth thesis. It does. Why?

He founded Mack Hall Real Estate in 1994. The agency is now in association with Knight Frank. Mack Hall Real Estate (the agency) has a large review pool. Mack Hall (the person) has individual transaction stats on Domain, including a record $14.5m sale. The name of the agency is the name of the person, and both are Peppermint Grove's most consistent top result across property portals and press coverage.

AI engines appear to treat "Mack Hall Real Estate" as a person-entity rather than a distinct business entity, because the linguistic pattern matches an agent profile and the transaction data matches a single practitioner. This is the person-brand fusion that our cross-market local business study picked up on in different verticals: when the business name is the practitioner name, the engine cannot tell them apart and treats them as one entity.

Two related mechanics compound this. First, agency-level reviews count toward the person's perceived track record when the person-name and agency-name are identical. A prospective client searching "Mack Hall" lands on agency pages with review counts in the hundreds, and the engine reads those as his individual standing. Second, press coverage of high-profile Peppermint Grove sales (the $14.5m sale is the headline example) tags him specifically, not the franchise, which reinforces the individual-entity reading even further.

Mack Hall is not a replicable playbook. You would need thirty years. You would need a name that carries market recognition on its own. You would need a stable agency where the name is the brand, sustained across three decades. Most agents reading this cannot replicate it.

The outlier explains itself. That is the point.

Why #2 loses to #1 in AI

Juliet Chen was RateMyAgent Carlingford Agent of the Year 2021. 143 RMA reviews. Solid across every signal in our audit. Named by one engine out of four for Epping. The engine was ChatGPT.

Betty Ockerlander has 306 RMA reviews and 79 sales in the last twelve months. She dominates Epping.

From the data

Juliet is not a weak operator. A suburb-level industry award is substantial recognition. But AI consensus for Epping locks on Betty. Juliet appears on the one engine where content and editorial signal matter most.

Most agents reading this are #2 or #3 in their suburb, not #1. Review depth, a legitimate award, a full realestate.com.au profile: all present. Engines still default to the incumbent because the incumbent's review and transaction signal is further ahead.

Being second-most-reviewed is functionally invisible to AI. Not absolutely. Twenty-five per cent of responses might still name you (ChatGPT's share in Juliet's case). But the consensus layer we are measuring requires being named by all four engines.

What this means for agents

The pilot is not a formula. It is a direction. Five takeaways come out of it sharp enough to act on this quarter, and the fifth one is the cheapest.

1. Pick your suburb and be the deepest there. Suburb specialisation is the load-bearing signal. Four of our five winners are named in one specific suburb. If you work across five suburbs, you will split your review pool five ways and win none of them. Concentrate.

2. Treat RateMyAgent review depth as an entity metric, not a vanity metric. The median gap between our winners and controls was 274 to 35 reviews. If you are under 100, the rest of this article does not help you until you close that gap. Reviews are earned over quarters and years. Start now.

3. Complete the profile you already have. Five of five winners had full realestate.com.au and Domain profiles. Two of five losers did. Complete profiles include transaction history, suburb specialisation, photos, full bio, and current listings. Incomplete profiles leak entity signal.

4. Your personal website is necessary but not sufficient. Four of five winners had a personal domain. So did two of five losers. A website does not cause AI consensus. The agents with both a website and review depth win. The agents with only a website do not.

5. Check what AI actually says about you before you change anything. The cheapest first step is measurement. Run a free instant check of your own suburb across the four major engines. If you are named by zero or one, the gap is review depth and suburb concentration. If you are named by two or three, the gap is the incumbent. Different problem, different plan.

Our own methodology is open. The Sydney, Melbourne, and Perth spokes show how the pattern plays at a city level.

What we don't know yet

Ten agents is a pilot. The parent study has 13 named-by-all-four agents total; we audited five. A larger audit would strengthen or weaken the review-depth thesis. We are planning one.

Perth is over-represented in the consensus sample. Seven of the 10 identifiable consensus agents sit in WA. Smaller Australian property markets may simply concentrate entity signal on fewer names than Sydney or Melbourne do, and that concentration may be what is driving the consensus pattern rather than anything universal about agent entity signals. We do not know yet.

Mack Hall is a live outlier. His individual review count is zero, and he was named by all four engines. We added a "combined agent-plus-agency review pool" refinement to our audit rubric but could not fully quantify it in this pilot. The person-brand fusion pattern needs more cases before we can call it a rule.

One day of data. 21 April 2026. If we run the same probe in July, or in October, the thirteen consensus names might rotate. They might not. We do not know yet, and the only way to find out is to run the probe again at a regular cadence. That is the next piece of work.

Review depth may be a proxy. High RMA reviews correlate with long careers and suburb specialisation. We cannot isolate which component drives AI consensus from ten agents alone. That is the next study.

Australia only. Do not generalise to other markets without replication.

FAQ

Why does AI recommend the same real estate agents across engines?

Because four specific signals compound. Agents named by all four engines in our audit had a median of 274 RateMyAgent reviews, complete realestate.com.au and Domain profiles, clear suburb specialisation, and multiple press mentions over the past 24 months. The individual signals are public and verifiable. The consensus across engines is what compounds them into a recommendation shortlist.

How many RateMyAgent reviews does an agent need to be cited by AI?

In our pilot the threshold was 100 reviews. Four of five winners cleared it (Betty 306, Nick 314, Ted 236, Nigel 274). One of five controls did (Juliet Chen at 143). Winners held a median of 274. The gap between 35 and 100 mattered more than the gap between 100 and 300.

Can a great personal website get a real estate agent cited by AI?

Not on its own. Olivia Porteous has a personal website with clear luxury positioning and active social media. She was named by one engine of four. Her suburb (Peppermint Grove) is already locked on three review-deep incumbents. A website without review depth does not displace an incumbent.

Why are Perth real estate agents over-represented in AI citations?

Seven of the 10 identifiable consensus agents sit in Western Australia, including three in Peppermint Grove alone. Our working explanation is that smaller state property markets concentrate entity signals on fewer names: fewer competing agents and tighter review pools, with property press that focuses on a shorter list of practitioners. We cannot yet say whether this is a structural pattern or a snapshot artefact. A longitudinal re-test would tell us.

Does ChatGPT use different signals than Gemini or Perplexity for real estate agents?

Yes. ChatGPT was the only engine that stayed geographically accurate across all 40 suburb queries. Grok hallucinated three Adelaide suburbs to US locations. Perplexity sent Vaucluse NSW to South Carolina. Across 20 suburbs, the four engines agreed on only 13 agents. See our cross-market engine study for the full engine-by-engine breakdown.

Is our 10-agent sample statistically representative?

No. It is a pilot. We plan to expand it.

Sources

  1. Cited Research, Australian real estate agent AI study, 21 April 2026. Four AI engines (ChatGPT, Gemini, Perplexity, Grok) × 20 Australian suburbs × 2 prompt families = 160 responses. 152 usable. 746 distinct names returned. 13 named by all four engines.
  2. Cited Research, Entity-footprint pilot audit, 21 April 2026. Five AI-consensus agents plus five matched-suburb controls scored on 15 entity signals using public sources only (personal websites, agency profiles, RateMyAgent, Domain, realestate.com.au, LinkedIn, Google Business Profile, press mentions).
  3. Cited Research, Australian real estate 6-city experiment, March-April 2026. 324 queries across 6 cities. Background context.
  4. Cited Research, Cross-market local business study, April 2026. The two-systems finding (recommendation versus explanation citations) referenced in the Mack Hall section.
Lennart Vallo
Founder, Cited

We built Cited because no one was measuring what AI engines actually recommend. Our methodology is public, our data is first-party, and we practise on ourselves before we advise clients.
