The first read on you is now an AI model

Major LLMs are now embedded in the screening tools that compliance teams, investors, and counterparties use first. The opening view of your reputation is being formed by a model, not a person, and the rules for managing it are different.

By Olivia Owen, Client Partner
April 2026

Before an investor takes a position or a counterparty runs due diligence, they now consult an AI model. Major LLMs, including ChatGPT, are already embedded in core screening tools, supporting KYC checks inside Dow Jones Factiva, Lexis+, and World-Check, shaping initial research, and replacing traditional search for a growing number of users.

These models are becoming the "first read" on companies and individuals.

The opening view of your reputation is no longer just formed by a human skimming a results page. It is formed by a model, built on sources you didn't choose, and shaping the assumption everything else gets measured against.

An intermediary sits between you and the audiences you want to reach.

The first answers anchor what people believe, and corrections don't fully undo them. Research by Lewandowsky et al. on the continued-influence effect shows that even when people encounter false claims alongside clear corrections, 20 to 50% continue to rely on the original misinformation.

Reputation management has historically been a media problem, built around readers who can be persuaded and articles that can be retracted. Model reputation doesn't work that way, and three features change the problem.

Models hallucinate. Large language models generate content as well as summarise it, and what they generate sometimes includes plausible-sounding false claims about real people and real companies. In December 2024, Apple's AI-generated notification summaries pushed fabricated BBC news alerts to users. Hallucination is a by-product of how predictive models work, and it is already producing defamation cases in the United States, including Walters v. OpenAI.

Responses vary from one query to the next. The same question, asked twice, can return different answers, so checking a single output tells you little about how your organisation is being represented more widely. What gives you insight is the pattern across responses and across the models that shape the decisions stakeholders are making about you.

Information is sticky. A bad article can be corrected or legally challenged, but how those levers apply to model outputs is still being worked out in the courts. Models cite the sources they have learned to trust, so shifting what they say means changing what they read, not just what appears on search results pages. A correction on a results page does not, on its own, change what a model has learned. The traditional reputation levers have to be chosen for how they move the model, not only how they move the press.

What can you do?

Reputation now has to be managed in the models. The work involves:

  • Knowing which questions matter, the ones your key stakeholders are asking.
  • Understanding how those questions are currently being answered across ChatGPT, Claude, Gemini, Perplexity, Google's AI Overviews, and Bing Copilot.
  • Shaping the sources those models rely on, for example your own website, trade press, and third-party media.
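Because outputs vary between runs, the monitoring step above comes down to sampling the same stakeholder question repeatedly and looking at the pattern rather than any single answer. A minimal sketch of that tallying step is below; the company name, the claims, and the sample responses are all hypothetical, standing in for real outputs collected from the models.

```python
from collections import Counter

def tally_claims(responses, claims):
    """Count how often each claim (keyword) appears across sampled model
    responses, returned as a fraction of all samples. The pattern across
    many samples matters more than any single answer."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for claim in claims:
            if claim.lower() in lowered:
                counts[claim] += 1
    return {claim: counts[claim] / len(responses) for claim in claims}

# Hypothetical sampled answers to "What is Acme Ltd known for?"
samples = [
    "Acme Ltd is known for industrial equipment; a 2019 lawsuit is often cited.",
    "Acme Ltd manufactures industrial equipment.",
    "Acme Ltd faced a lawsuit in 2019 and makes industrial equipment.",
]
print(tally_claims(samples, ["industrial equipment", "lawsuit"]))
```

Run against real sampled outputs, a table like this shows which narratives a model returns consistently and which only surface occasionally, which is the signal that decides where source-shaping effort should go.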

None of this is a one-off audit. Models keep retraining and the sources they pull from change, so what they say about you doesn't sit still either. Done properly, AI reputation work sits alongside media monitoring and stakeholder communications as a standing part of corporate affairs, something you check on regularly and act on when the picture isn't what it should be.

If you would like to understand what leading AI models and answer engines are saying about your organisation, or to start managing how AI represents you, please get in touch at [email protected].


Copyright © 2026 Dablam Ltd (company number 15115628) | All Rights Reserved