While anxiety runs rampant over AI’s role in content production, Product Marketing Lead Gavin Dunaway notes that AI is also a powerful tool for enabling digital trust and safety
“I just don’t know where we’re going,” my video producer friend told me the other day over a margarita.
My friend has specialized in news throughout his career and currently works at a major American media company. His consternation stemmed from a half-day workshop examining the potential of AI for video content production. It was an unsettling affair, particularly with AI on the lips of media executives handing out pink slips like candy.
“We watched an AI-generated video where Donald Trump ate a live octopus while other former presidents watched,” he said after a sip. “Besides being a super weird concept, the video had all kinds of strange and disorienting glitches. No one would mistake it for the work of humans.”
But that didn’t give him any comfort. “How long until we can’t tell the difference? And then what’s my job as a producer?”
I was a little taken aback. I had read about content producers’ fears of AI, but actually hearing them voiced by a friend made them alarmingly real. It was particularly jarring because I had just been explaining to him how AI is a critical tool powering The Media Trust’s digital trust and safety solutions, from category analysis to malicious pattern detection.
Slippery Slope From Curation to Creation
In the Culture book series by Iain M. Banks, humans and AI entities live in harmony in an anarchic society where almost all life-supporting functions are run by super-powered AI “minds.” Their media and entertainment (besides IRL parties and orgies) are created on demand by the AI based on their desires.
I’m not sure about living harmoniously with AI drones, but on-demand, user-personalized content is not a stretch. Already publishers are pushing algorithmic content based on consumer interests—AI just takes this up a notch. Media operations like Insider and BuzzFeed began playing up their AI initiatives… right alongside announcements of editorial layoffs.
So you see why my friend is worried: he fears AI will replace him, assembling personalized newscasts on demand from whatever footage and content sources are available. He’s lamented that standards for image quality have gone way down, with smartphone-captured video now broadcast on HDTVs. AI wouldn’t necessarily need to be as discerning or finicky as a well-trained human producer.
Which brings us to a serious problem with this scenario: ChatGPT and its AI brethren are filled with misinformation and incorrect details. The results can even be perilous: ChatGPT falsely accused a professor of sexual assault. Ask ChatGPT about yourself; you might be surprised by what answers come back. I learned I was still serving as Editorial Director of AdMonsters in September 2021! (I was not.)
Fighting AI Fire With Fire
Despite disturbing videos of Furbies powered by ChatGPT promising world domination, no one will mistake current AI technology for Skynet. What’s causing all the buzz is advanced machine learning, and that learning is driven by the information we feed it.
Hence notions like “AI racial bias”: if mainly White people are feeding the AI, there’s a potential that their subconscious cultural biases will over-influence the machine learning and taint the output. (Another reason why diversity is good for business.) It’s also why bad actors can use AI to spread malicious schemes and misinformation at scale.
This is the immediate threat of AI, as Steve Wozniak of Apple fame recently pointed out: its misuse by bad actors to harm consumers. Recognizing there’s no way to shove this beast back into Pandora’s box, Wozniak and government officials are calling for the labeling of AI-generated content (particularly political content). But have you ever seen a threat actor label their phishing redirect?
The ultimate irony is that contemporary AI is also a fantastic tool for identifying digital threats, scams, and misinformation, particularly content that was itself created with AI tools. You could say it takes AI to recognize AI: the machine learning can decipher telltale patterns in images and code, as well as signs of manipulation.
Threat actors may be able to scale and speed the production of malicious wares via AI, but AI is also a key reason The Media Trust can shut down threats faster than ever. Fueled by human intelligence, The Media Trust uses both proprietary and public AI tools millions of times every month to:
- Recognize patterns used to propagate malware, such as exploit kits, forced redirects, JavaScript injection, and bot deployment (see the simplified sketch after this list)
- Adjust scanning profiles to elicit anomalous behavior
- Identify and categorize images, text, and sensitive ad content—across both display and video
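To make that pattern recognition concrete, here’s a minimal, hypothetical sketch of the kind of signature matching that might sit alongside the machine learning. The pattern names and regular expressions below are illustrative assumptions for this post, not The Media Trust’s actual detection logic:

```python
import re

# Illustrative signatures for common malvertising tricks (assumed examples,
# not a production ruleset): forced redirects, obfuscated eval chains,
# and script injection into the host page.
SUSPICIOUS_PATTERNS = {
    "forced_redirect": re.compile(r"(top|window)\.location(\.href)?\s*=", re.I),
    "obfuscated_eval": re.compile(r"eval\s*\(\s*(atob|unescape|String\.fromCharCode)", re.I),
    "script_injection": re.compile(r"document\.(write|createElement)\s*\(\s*['\"]?<?script", re.I),
}

def scan_creative(js_source: str) -> list[str]:
    """Return the names of suspicious patterns found in an ad's JavaScript."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items()
            if pattern.search(js_source)]

# Example: an ad snippet that silently redirects the page trips the scanner.
sample = "setTimeout(function(){ top.location = 'https://evil.example/landing'; }, 500);"
print(scan_creative(sample))  # ['forced_redirect']
```

In the real world, static signatures like these are table stakes; the machine learning earns its keep by surfacing the new variants that no hand-written pattern would catch.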
Happy Disruptions?
My friend’s anxiety reminded me of my aunt, a video editor who specialized in analog tape at a major news organization. During the pivot to digital, my aunt found herself pigeonholed as the “tape person,” and late in her career she fell into an archival role that wasn’t going to last long in an increasingly digital world.
Just as we’ve all found some kind of equilibrium with the Internet, the cloud, programmatic auctions, and other technologies in our work lives, AI is about to (sigh, the cliche) disrupt industries far and wide once again. But we need to recognize that AI is a tool, not a replacement for humans (at least not yet). We need to find ways to leverage its power and integrate it into our day-to-day operations to drive efficiency and speed in a way that safeguards the consumer experience.
We can’t fear AI, and we really shouldn’t loathe it, even if this feels like the worst possible moment simply because anxiety is running so high. Embrace the uncertainty. Embrace the AI. Just don’t forget to make sure your efforts support digital trust and safety for consumers.