What Managing Reputation in AI Actually Looks Like in Practice

The rules for managing how an organisation is represented in AI models are different from anything that came before. Getting future leaders to understand that early is where the work begins - and it is exactly what our partnership with FHWien der WKW is built around.

By Aurora Petzetakis, Business Development Manager

April 2026

Aydar Muslimov, Research & Intelligence Manager at Dablam, and I recently spent two days at FHWien der WKW with 65 Executive Management Master's students as part of the Project Portfolio Management module led by Dr. Diana Muslimova.

The premise behind the collaboration is one that more organisations are beginning to confront. AI assistants built on large language models - ChatGPT, Gemini, and Perplexity among them - are now embedded in the due diligence and screening workflows of investors, compliance teams, and counterparties. What those models say about an organisation is no longer incidental to how it is perceived - it is often the first input that shapes everything that follows. Managing that representation has become a material business concern, and most governance structures have not kept pace with it.

The session introduced students to how Dablam approaches that problem in practice. Twelve teams worked through a tailored strategic challenge built from their prior work on a Medical AI case, each asked to map the reputational exposure, stress-test two response approaches, and defend a recommended course of action. The brief was structured the same way a client engagement is - understand the landscape, identify where the risk sits, and build a response that holds over time rather than at a single point.

That framing is central to why the partnership sits within a project portfolio management module. Reputation in AI models is not a crisis communications question. It is a risk management discipline - one that requires continuous monitoring and defined ownership.

The executives who will be accountable for how their organisations are represented in AI are being educated now, and the analytical tools required - risk identification, stakeholder mapping, workstream governance - are precisely those this programme develops.

What can you do?

The starting point is understanding what LLMs are currently saying about your organisation, not through a single query but as a pattern across the models your key stakeholders are using. From there, the work involves building and sustaining the sources those models draw from, and maintaining a review cadence that reflects how frequently outputs change. 
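In practice, that pattern-based approach can be as simple as pairing every model your stakeholders use with every question they are likely to ask, then fingerprinting the answers so each review cycle only has to examine what changed. The sketch below illustrates the idea using only the Python standard library; the model names, questions, and helper functions are illustrative assumptions, not Dablam's actual tooling, and the step that actually queries each model is left out.

```python
# Minimal sketch of a query-plan-and-cadence loop, assuming you already
# have a way to fetch each model's answer. Names here are hypothetical.
import hashlib
from itertools import product

def build_query_plan(models, questions):
    """Pair every model with every question, so coverage is a pattern
    across models rather than a single query."""
    return [{"model": m, "question": q} for m, q in product(models, questions)]

def fingerprint(answer: str) -> str:
    """Hash a normalised answer so successive snapshots compare cheaply."""
    return hashlib.sha256(answer.strip().lower().encode()).hexdigest()

def changed_queries(previous: dict, current: dict) -> list:
    """Return the (model, question) keys whose fingerprints changed since
    the last review cycle - the signal for how often the cadence should run."""
    return [k for k, v in current.items() if previous.get(k) != v]

plan = build_query_plan(
    models=["model-a", "model-b"],  # stand-ins for the assistants your stakeholders use
    questions=[
        "Who is Example Ltd?",
        "Is Example Ltd involved in any disputes?",
    ],
)
```

Each review cycle would run the plan, fingerprint the answers, and compare against the previous snapshot; a rising count of changed queries is a cue to tighten the cadence.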

If you would like to understand how LLMs are representing your organisation, or to start building the function to manage it, please get in touch at [email protected].


Copyright © 2026 Dablam Ltd (company number 15115628) | All Rights Reserved