DRD’s Kate Miller in Tatler: How to manage your online reputation in an AI era

7 Mar 2025

With sophisticated AI tools enabling unprecedented levels of disinformation, managing online reputation has become more complex than ever. Kate Miller, Partner at DRD Partnership, recently highlighted these concerns in a Tatler feature exploring how AI is revolutionising the threat landscape for high-profile individuals.

The AI-powered threat landscape

Having a strong and robust online profile is crucial in today’s digital world, particularly for protecting against malicious attacks and misinformation campaigns. Generative AI is making faked output ever more convincing and harder to identify. Even when material is established as fake, that process can take time, during which an individual or organisation may be subjected to significant reputation-damaging media and social media scrutiny.

In this article, Miller warns that social engineering through AI has become “increasingly easy” for those with malicious intent. She points out that “bad actors understand how to manipulate search engine results and exploit algorithm bias to manipulate social media algorithms. Fake engagement with likes, shares and comments, can all go to make false content appear more credible.”

This represents a significant shift in the nature of reputational threats. What once required sophisticated technical skills or significant resources can now be accomplished in moments with a few keystrokes.

Iain Wilson, Managing Partner at Brett Wilson LLP, notes that there is huge scope for reputational harm: “AI-generated content can be particularly pernicious because false information can be seamlessly intertwined with true information. This indistinguishability gives misinformation credibility.”

Beyond financial damage

The implications extend well beyond relatively simple false narratives: AI-powered attacks can trigger serious financial and legal consequences. Polly Wilkins, Partner at Kobre & Kim, highlighted how these attacks can go “beyond financial fraud, aiming to trigger sanctions by allowing competitors, disgruntled partners, or political adversaries to weaponise disinformation.”

The reputational damage from AI-generated disinformation can have lasting effects even after the content is proven false.

Vulnerability factors and protective measures

Miller emphasises that “several factors determine how vulnerable a person is to an attack, including their digital footprint, privacy settings and general awareness of online threats.” With AI tools becoming more sophisticated, she warns that “this threat isn’t going away and only shows the signs of becoming more challenging.”

For high-net-worth individuals and organisations, proactive management has become essential. Miller advises that people “should take steps now to strengthen privacy settings and minimise personal data exposure, as well as educating themselves on AI developments and disinformation techniques.”

The growing importance of professional management

With the rise of AI chatbots as information mediators, public perception will increasingly be shaped by these tools, which draw on whatever online material is available. This technological shift makes reputation management more important than ever.

As we move further into 2025, a vigilant and proactive approach, combining legal and communications expertise, will be essential for those seeking to protect their reputations in an increasingly AI-mediated world.