Is dark PR behind your crisis?
12 Jul 2024
The increasing threat of ‘dark PR’ tactics prompts the need to plan for an effective fightback, writes Michael Rose.
In November 2023, a group of Australian academics were forced to apologise to the ‘big four’ consultancy firms. This followed a submission they made to a parliamentary inquiry which included a series of case studies to support their argument that the sector needed reform. The problem? The case studies were AI-generated fakes, implicating these consultancies in scandals that never occurred.
Giving the academics the benefit of the doubt, the mistake appears to have been unintentional, and they offered an unreserved apology, demonstrating the very real need to verify AI-generated content. More troubling is the prevalence of actors purposely deploying such methods to implicate companies, organisations, individuals and even governments in scandals using convincing, but ultimately faked or highly misleading, evidence.
The growing sophistication of disinformation tools presents fresh reputational challenges, exploitable by those with ill intent or a grievance. Among our own clients, we have seen these tools deployed in a number of forms: producing fake but convincing documentation, such as contracts, emails, memos and reports; placing stories with media in less reputable jurisdictions; and setting up websites to make unfounded accusations, causing clients ‘Know Your Client’ (KYC) problems down the line. There have also been efforts to penetrate the security of organisations to access their most confidential and sensitive information.
Reputationally, these malicious and misleading actions have the potential to inflict existential damage on their targets. Creating convincing fake or misleading material is no longer a painstaking, laborious, time-intensive process: it can now be achieved in a few clicks. Often, those so inclined simply contract the work out to experts, readily available on the dark web, and await the results.
So, what does this look like in practice? The possibilities are restricted only by the creative limits of the malicious perpetrator. An example could be fake bank statements demonstrating ‘evidence’ of an executive embezzling funds. It could be a fake ‘top secret’ memo by a board member saying employees of a large multinational won’t be getting bonuses. It could be fake emails ‘demonstrating’ a desire to illegally discriminate against certain employees. The disturbing reality is that detecting and debunking these fakes is an increasingly difficult enterprise.
One potential consequence involves journalists working to ever-tighter deadlines, fearful of being beaten to a scoop, finding it harder to verify, against the clock, whether documentary evidence is legitimate or faked. In some instances, AI is also the weapon to combat these fears, with powerful tools being developed to root out deepfakes. This is in addition to responsible journalists’ practice of using multiple sources to verify information. However, this is an arms race, and they can’t and won’t get it right every time.
For organisations and individuals facing these threats, the outlook may seem depressing. However, the situation is not all doom and gloom and there is a varied toolkit available to those looking to combat these ‘dark arts’.
Legal action against the perpetrators and users of disinformation is one route. In many cases this will provide vital remediation, protecting individuals and organisations from repeat attack in the long term. Although positive, conclusive outcomes may take some time, it is still essential to protect your reputation by showing a willingness and determination to pursue those peddling disinformation against you. Norwich Pharmacal orders, which require the disclosure of documentation, can be a legal tool that helps identify those behind the fakes.
We recommend all of our clients have the right legal counsel onboarded as a precaution, so resources can be mobilised quickly. If challenging material is causing immediate reputational harm, finding a lawyer, checking conflicts and onboarding wastes precious time.
For speedier results, it is vital to establish a rapid response capability. This involves a clear decision-making process for key figures to determine the facts, agree a response and rebut quickly. Time is precious: acting fast prevents a falsified but damaging narrative from becoming established and ensures doubt about its veracity is raised early on. Having media-specialist counsel on board and ready to engage publications on legal grounds is an important part of the toolkit to prevent stories based on disinformation taking root.
In-house teams, perhaps unused to these kinds of threats, need to be trained to identify them and deal with them rapidly. Existing channels managed and used by these teams need to be regularly assessed for their durability and suitability. Handling responsibilities must also be assigned to individuals in anticipation of threats, not just once they have occurred.
The next step is gathering evidence to counter misleading or faked claims. What documentary and circumstantial evidence can be presented to demonstrate that what is being alleged simply isn’t true? This evidence can then be used to brief journalists on background or engage investors with the real facts of the case. Good risk mapping as part of a client’s crisis preparedness will also ensure that as many scenarios as is sensible have already been planned for.
Internal communications, especially for larger organisations spanning multiple sectors and geographies, will also be key. If employees first learn about malicious and misleading allegations from the newspaper, that will set the tone of their view and risks sowing doubt and confusion. A clear, strong message to internal audiences, consistent with messaging being deployed elsewhere, can help head off concerns.
It will also be vital to closely monitor media, social media and elsewhere online to spot the appearance or reappearance of faked allegations. A combination of comms and legal advisers should move quickly to engage publishers and have the allegations removed at the earliest opportunity. In the case of social media, this requires engaging directly with platforms such as X or Facebook. Take-down remedies are available, such as flagging that hosting false information is in breach of a platform’s terms and conditions.
Without such an approach, targets risk fiction becoming fact and their reputations suffering very real damage for imagined wrongdoings. Planning now can mitigate the panic and stress of dealing with this situation in the moment.