Deepfake it ’til you make it

1 Apr 2026

Freddie Eltringham and Oliver Simpson consider whether crisis communications teams need a new playbook to combat the complex threats of synthetic AI and deepfakes.

Synthetic AI continues to muddy the waters of an already murky information environment. A new frontier of deepfake threats makes the role of communications teams in authentication, amplification and distribution cycles all the more critical. They must manage uncertainty, maintain credibility and ensure that accurate information is what ultimately reaches and stays with audiences.

Overview of synthetic AI

When we refer to synthetic AI, we mean any content generated or altered by artificial intelligence, including images, video, audio or text. Advances in the sophistication of these generative tools have led to hyper-convincing imagery, cloned voices and entirely fabricated scenarios becoming accessible with a single prompt. “Synthetic personas”, more colloquially known as ‘bots’ (accounts that behave like real people online to influence conversations and flood social media news feeds), are also a growing and complex source of reputational risk in the digital landscape. Crucially, many of these tools are now cheap and easy to use, with low barriers to entry, making it far easier to create and spread misleading content at scale.

Case studies

There is no shortage of case studies showing how quickly synthetic AI content can spread across digital landscapes at scale. In March 2023, an AI-generated image of the Pope in a white Balenciaga-style coat appeared online and gained significant traction. While it was quickly exposed as fake, the image had reached millions within hours and, in this case, helped promote and rehabilitate a brand at a time of acute commercial challenge (remember the BDSM teddy bear!). It was a largely harmless example of how easily synthetic content can spread and how tangibly it can affect brands.

However, deepfake attacks can pose far more serious and complex threats. In a case that came to light in May 2024, a Hong Kong employee of global engineering consultancy Arup received a request appearing to come from the company's CFO and, despite initial scepticism, joined a video call with what seemed to be a group of senior executives. In reality, every other participant in the meeting was a deepfake. Convinced by the realism and authority of the call, the employee made a series of transfers to the attackers totalling $25 million.

The incident highlights not just the speed and complexity of the reputational threats presented by synthetic media, but also the rapid increase in its quality, which allows misinformation to circulate widely before verification cycles can catch up, and the novel ways in which organisations are now exposed to fraud.

A one-size-fits-all approach to synthetic AI?

There is no one-size-fits-all response to the threats presented by synthetic AI. Tackling certain deepfake threats, such as fraud, will require a response not dissimilar to a traditional cyber-incident, meaning it can be managed through existing risk and crisis management protocols. As we know, these playbooks require rapidly convening forensic and legal teams, clear escalation processes and sign-off structures, and a structured cadence of internal and external communications. Communications teams must also make careful and deliberate choices about when not to engage with synthetic material, recognising that premature or unnecessary responses can amplify and inadvertently authenticate an attack.

However, hyper-realistic imagery presents a uniquely potent reputational risk because it exploits fundamental components of perception and memory. Consumers are predisposed to trust what they can see, and unlike written information – which can be interrogated, contextualised and then corrected – synthetic imagery arrives with an immediate sense of proximity and emotional intensity. As we saw with the harmless Pope deepfake, even a clearly fictional image can embed itself in public consciousness and have tangible consequences for brands – a less innocuous image could inflict far more enduring damage.

Intensifying this risk, synthetic content spreads at immense speed: recent research from Scott Graffius found that the average half-life of a post on X – the time in which it receives half of its total lifetime engagement – is just 52 minutes. By the time verification takes place, the impact and damage may already be done. Additionally, there is often no clear source to challenge, no author to engage, and limited scope for legal or regulatory intervention to remove synthetic content once it has circulated.
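To put that figure in perspective, here is a minimal sketch. It assumes, purely for illustration, that a post's remaining engagement decays exponentially with the 52-minute half-life above; the decay model and the `remaining_engagement` function are our assumptions, not part of the Graffius research.

```python
# Minimal sketch (our assumption): model a post's remaining engagement
# as decaying exponentially with the 52-minute half-life cited above.

HALF_LIFE_MIN = 52  # average half-life of a post on X, in minutes

def remaining_engagement(minutes_elapsed: float) -> float:
    """Fraction of a post's total engagement still to come after `minutes_elapsed`."""
    return 0.5 ** (minutes_elapsed / HALF_LIFE_MIN)

for t in (52, 120, 240, 480):
    print(f"after {t:3d} min: {remaining_engagement(t):5.1%} of engagement remains")
# after  52 min: 50.0% of engagement remains
# after 120 min: 20.2% of engagement remains
# after 240 min:  4.1% of engagement remains
# after 480 min:  0.2% of engagement remains
```

On this model, within four hours of publication almost all of a post's engagement has already happened – and that is the window in which verification and any response must operate.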

As such, a purely reactive approach to synthetic threats is increasingly insufficient – crisis playbooks need to integrate greater consideration of proactive risk mitigation. To safeguard organisations against attacks, embedding stronger cybersecurity awareness and verification instincts across workforces is now essential practice. More critically, though, organisations must invest time and resources in consistently communicating authenticity to external audiences. This can include establishing recognisable and repeatable content patterns – verified channels, repeated visual identities, identifiable spokespeople or clear publishing protocols (one such protocol is sketched below) – to ensure audiences have an intuitive grasp of what legitimate communication looks and feels like. Over time, good practice creates a baseline of trust against which synthetic anomalies and attacks are more easily identified.
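By way of illustration, one form such a publishing protocol could take is cryptographic signing of official statements, so that journalists, platforms and employees can check provenance against a published public key. The sketch below is a simplified assumption of ours using Ed25519 signatures from the Python `cryptography` library; the function names and workflow are hypothetical, and a real deployment would more likely rely on managed key infrastructure or content-credential standards such as C2PA.

```python
# Illustrative sketch only: sign official statements at publication so
# anyone holding the organisation's published public key can later
# distinguish authentic communications from synthetic look-alikes.
# Function names and workflow are hypothetical assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_statement(private_key: Ed25519PrivateKey, statement: str) -> bytes:
    """Sign an official communication before it is published."""
    return private_key.sign(statement.encode("utf-8"))

def verify_statement(
    public_key: Ed25519PublicKey, statement: str, signature: bytes
) -> bool:
    """Check a circulating statement against the published public key."""
    try:
        public_key.verify(signature, statement.encode("utf-8"))
        return True
    except InvalidSignature:
        return False

# The comms team signs at publication; verification can happen anywhere.
key = Ed25519PrivateKey.generate()
statement = "Official statement: we are aware of a video circulating..."
sig = sign_statement(key, statement)
print(verify_statement(key.public_key(), statement, sig))        # True
print(verify_statement(key.public_key(), "Tampered text", sig))  # False
```

The design point is not the cryptography itself but the habit: a consistent, checkable publication pattern gives audiences and intermediaries something concrete against which to test suspicious content.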

In summary, early detection, rapid escalation and clear reactive communications outputs remain vital to a successful response; however, they are no longer sufficient in isolation. A positive outcome for an organisation will depend not only on the speed of its response but also, crucially, on the strength of the foundations laid beforehand. Trusted communication patterns and credible voices determine whether audiences genuinely question what they see, rather than simply accepting an image at face value as they scroll past.