Treating narrative infections: what to do when AI attacks your reputation
27 Feb 2026
DRD Partnership teamed up with Matrix Chambers and Digitalis to share insights on how clients can deal with this fast-evolving area. Lou Isenegger reports.
Artificial intelligence (AI) has fundamentally altered the reputation landscape. At a breakfast event chaired by DRD Partner Jon McLeod, James Hann of Digitalis, Lorna Skinner KC of Matrix Chambers, and DRD Partnership’s Claire Davidson explored what this shift means in practice and what organisations can do about it.
The session advised on what to do when AI summaries attack a client’s reputation and set out practical steps for managing AI-driven reputational threats. It centred on one pressing question: in a world where algorithms increasingly mediate first impressions, who is really shaping your reputation, and how much control do you retain?
AI as prediction, not understanding
James Hann opened the discussion by describing large language models (LLMs) as largely “black box” technologies. In his view, AI systems do not fundamentally understand what they process or produce. They operate through prediction, generating responses based on what is statistically most likely to be correct.
This architecture creates inherent risk: because these systems rely on probabilistic modelling rather than semantic comprehension, contextual errors and so-called “hallucinations” are structural features rather than anomalies. He noted that their outputs are shaped by two principal inputs: (1) training data, the vast human-generated datasets used to build and calibrate the model, and (2) run-time data, the live predictive process through which the system selects the next most probable word.
Modern models can also conduct what are known as ‘fan-out searches’: generating multiple related queries from a single prompt to refine their answers. In doing so, they tend to privilege highly visible and top-ranked sources, often drawing disproportionately from the first page of search engine results. The result is language that appears fluent and authoritative, yet is not grounded in genuine understanding of meaning, context or truth.
Are you outsourcing your reputation?
Claire Davidson focused on the implications for clients. With almost everything now online, from reviews and employee commentary to media coverage and archived material, much of it scraped and processed by AI systems, organisations must confront a simple reality. If they are not actively shaping their own narrative, they risk outsourcing their reputation to algorithms and potentially malicious actors.
While the flow of information cannot be stopped, it can be managed. First impressions are increasingly formed through AI-generated summaries. Without accurate, valuable and discoverable content of their own, organisations leave a vacuum that others may readily fill.
Attacks now manifest through scale, speed and specificity. Large volumes of highly targeted content can be produced quickly and at low cost, allowing a single defamatory allegation to spread rapidly and shape perception almost immediately. Fabricated material can be difficult to distinguish from genuine content, particularly when presented in credible formats.
The impact of this is significant. Legal and communications costs rise, journalists and advisers face heavier due diligence burdens, and the responsibility to disprove false claims often falls on the individual or organisation under attack.
Data poisoning and malicious manipulation
James Hann also underlined the growing risk of data poisoning: the deliberate introduction of false or manipulative material intended to influence how a model responds. Direct injection into proprietary training datasets is complex and typically requires specialist expertise, but related techniques can be more accessible. These techniques can enable actors to influence the material a model retrieves and prioritises, particularly in commercial disputes.
Because LLMs privilege highly visible and top-ranked content, even a single fabricated source can distort outputs. For example, a false LinkedIn profile impersonating an individual with little existing digital footprint may be indexed and incorporated into generated summaries, enabling malign actors to construct years of apparently credible and discoverable history. In the absence of authoritative counter-content, this vacuum effect allows distortion to take hold and become self-reinforcing.
Courts, responsibility and the rise of experts
Lorna Skinner examined the legal implications of this shift. While forgery and falsification are not new phenomena, the scale and sophistication of AI-generated content introduce a distinct challenge. Defamation law traditionally depends on identifying both a publication and a publisher, a framework that does not always map neatly onto AI-generated outputs.
By contrast, data protection law focuses on the processing of personal data and may, in certain circumstances, offer alternative avenues for redress, particularly for individuals. Nonetheless, gaps remain between these regimes, leaving areas of uncertainty.
An increasing reliance on expert evidence is therefore likely. In what has been dubbed ‘the rise of the expert’, courts may require specialist assistance to determine whether content was AI-generated, how it was produced and, critically, where legal responsibility lies. Questions of attribution, model design, source prominence and jurisdiction are becoming central.
Claire Davidson added that the challenge for advisory firms, such as DRD, is not solely legal but strategic. Effective response demands coordination across stakeholders and clear communication to reassure clients that investigations are progressing.
Treating narrative infections – what does treatment look like?
Diagnosing the narrative environment
James Hann outlined a dual approach. Organisations must first understand what a neutral searcher sees when a name or brand is entered into a search engine or AI tool. They must then actively surface and interrogate problematic narratives, trace their sources and use similar technologies to assess how and why they are being amplified.
Speakers warned against assuming transparency in AI-cited sources. Algorithms remain difficult to interrogate, and once deployed, their underlying training data is rarely altered. Providers tend to respond by filtering outputs rather than reshaping the systems themselves.
Prevention over cure
For Claire Davidson, prevention is as important as cure. A crisis is not the time for original thinking. Organisations need structure and preparation. She outlined four core elements of response: monitor, assess, decide and act proactively, then repeat. This neutralising cycle, applied consistently and constructively, helps stabilise the narrative environment over time.
Shared stewardship in practice
Reputation management in the AI era requires shared stewardship. Legal, digital and communications teams must work together with clear ownership of risk, defined escalation procedures and strong digital hygiene. Clients and stakeholders should understand their potential adversaries and how coordinated networks can manufacture credibility at scale. Addressing this may require specialist forensic expertise and a long-term strategy rather than reactive suppression.