
Navigating Dark Waters: Balancing AI’s Risk & Reward for Public Relations

December 20, 2023

Generative AI, like nearly every technology before it, promises many benefits but also carries real dangers. For high-stakes communications programmes, the perils of deepfakes, ‘hallucinations’, copyright violations, data leaks and bias can easily scare even the most techno-optimistic PR and comms professionals.

This blog post dives into those murky waters, not to scare you with the risks, but to equip you with knowledge and strategies to navigate them. Rather than being the treacherous shoal threatening to sink our reputation, Generative AI could be the wind in our sails, pushing us ever onward.

Deepfakes and Coordinated Inauthentic Behavior (CIB)

Deepfakes are perhaps the most visible risk of generative AI (GAI). We have already seen many cases in which deepfake audio and video caused public, political or business harm. Deepfakes are often used in conjunction with ‘Coordinated Inauthentic Behaviour’ (CIB): the use of bots and/or genuine social media accounts to spread disinformation and misinformation in the hope of influencing certain outcomes.

Humans are terrible at spotting deepfakes, and supposed ‘AI detection’ tools have proven no better.

Solution: PR agencies and communications professionals need a multi-pronged approach to combat deepfakes and CIB.

  • Communicators can embed digital ‘watermarks’ in official content such as text, photos, audio and video. These watermarks range from invisible bits of code to digital signatures to visible logos (see the sketch after this list). At the least, such watermarks let stakeholders know what they can trust, even if they do not know what to distrust.
  • PR teams could also include deepfake scenarios during crisis management simulations. Such practice will help comms teams respond faster and more comprehensively during a real crisis.
  • At BCW, we’ve launched Decipher. This service uses ‘cognitive AI’ to assess how impactful any piece of content might be for specific audiences. Decipher uses AI to test the content’s ‘believability’ and ‘virality’ for different audience personas and gives it a ‘Potential for Impact’ score. When faced with disinformation or a crisis, Decipher can help communicators make informed, data-rooted decisions about how to respond.
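
To make the ‘digital signature’ flavour of watermarking concrete, here is a minimal Python sketch that signs official text with a shared secret key via an HMAC. The key and the press-release text are hypothetical placeholders, and this illustrates the general technique rather than any specific BCW tool; a symmetric key means only key-holders can verify, whereas public verification would require an asymmetric signature scheme.

```python
# A minimal sketch of 'digital signature' watermarking for official content.
# SECRET_KEY and the sample press release are hypothetical placeholders.
import hashlib
import hmac

SECRET_KEY = b"replace-with-your-newsroom-key"

def sign_content(content: bytes) -> str:
    """Produce a signature to store or publish alongside official content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Return True only if the content matches its recorded signature."""
    return hmac.compare_digest(sign_content(content), signature)

press_release = b"Official statement from the communications team ..."
signature = sign_content(press_release)

print(verify_content(press_release, signature))        # True: authentic
print(verify_content(b"doctored version", signature))  # False: tampered
```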

The Bias of Data

AI systems are only as ‘neutral’ as their training data. For example, most language models are trained primarily on English text, so an attempt to generate a nuanced Hindi poem would miss cultural subtleties even if the model were equipped with a reliable translation engine.

Solution: Learn about what data your tools were trained on, and therefore what biases might creep into the output. Proactively review all output to mitigate or remove such biases.

Copyright Infringements

Many GAI models have, unfortunately, used unlicensed or copyrighted data and content to train their algorithms. Some have even been sued for this practice. Worse, it is still not clear who owns the copyright to GAI output: the person who created the input content, the programmers who developed the model, or the person prompting the model.

Solution: When using GAI to create content for commercial use, make sure your vendor clarifies whether copyrighted data was used during training, and who will own the copyright to content generated by the model.

Some image generation models, such as Adobe’s Firefly, were trained on the company’s own image bank. Others, such as Bria, can even tell you exactly which training images were used when generating the image you prompted for.

Data Security & Confidentiality

Almost all publicly accessible GAI tools (e.g., Bard and ChatGPT) use your inputs to continue training their algorithms. In other words, if you enter confidential text or upload confidential documents to these tools, your information is at risk of leaking to another user.

Solution: The safest way to use GAI models is to deploy them on a private or ‘ringfenced’ enterprise server. That way your input data and resulting output never leave your servers.
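
As an illustration of that ‘ringfenced’ approach, the sketch below runs an open-source model entirely on hardware you control using the Hugging Face transformers library. The small GPT-2 model is purely a stand-in for whichever model your enterprise actually licenses; the point is that inference happens locally, so prompts containing confidential material never reach a third-party service.

```python
# A minimal sketch of a 'ringfenced' deployment: the model runs on hardware
# you control, so prompts and outputs never leave your infrastructure.
# GPT-2 is a stand-in; any locally hosted model works the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # fetched once, then runs locally

draft = generator(
    "Draft a short holding statement about a confidential product recall:",
    max_new_tokens=60,
    num_return_sequences=1,
)
print(draft[0]["generated_text"])
```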

The Creative Conundrum

An internal study at the consulting firm BCG found that when consultants used OpenAI’s GPT-4, performance rose by 40%, and 90% of participants produced higher-quality work. These are game-changing numbers. However, the same study warned that the number of creative solutions produced by a team of GPT-4 users was 41% lower than that of a team using only human brainpower. In other words, the non-GPT-4 users produced more unique ideas.

Solution: GAI can be a wonderful productivity enhancer when used for the right tasks: generating, rewriting, summarising or analysing content, and so on. But using it the wrong way could sacrifice much of the wonderful creativity that makes us uniquely human.

The hazards of deepfakes, biased data, copyright infringements, and other security risks remind us that a tool as potent as AI requires responsible handling.

Yet the power to automate tasks, streamline processes, generate vast quantities of content instantly, and analyse data at lightning speed can free us to focus on what truly matters for communicators: crafting compelling stories and building enduring relationships with our stakeholders. At BCW, our suite of AI-powered tools, from predictive analytics to generative AI, can help your communications programmes sail stronger in 2024.

Authored by: Pierre Fitter, Senior Director and Head of The Hub, BCW India Group.