AI for communicators: What’s new and what’s next
Including the latest news on risk, regulation and workforce changes.
It’s been one year since generative AI exploded onto the scene. And if you thought it would be a flash in the pan, think again: the technology is growing at a dizzying speed, gaining new uses and changing how we live and work.
Over just the past week, Elon Musk rolled out an interesting new chatbot he calls “Grok,” which is meant to be a more irreverent cousin to ChatGPT. How funny or “rebellious” it actually is, we leave to you.
Meanwhile, IBM is putting serious cash into finding the next big thing in AI, dedicating $500 million to AI startups.
And OpenAI is continuing to up its game and roll out new features, including the ability to create your own custom AI bots. Search Engine Journal reported that the newly released GPT-4 Turbo can process 300 pages of text at once and offers knowledge of the world up to April 2023. The custom bot option could be a great middle path for organizations too small to develop their own bot in-house but that want the robustness of a custom tool.
No, generative AI isn’t going anywhere anytime soon. Let’s find out what’s new this week that will impact your own communications practice.
The latest in AI regulation
We’ve reported in the past on the broad support for federal AI regulation, including from some of the tech industry’s biggest leaders.
But that doesn’t mean all litigation is moving in support of human authorship. On the contrary, California federal judge Vince Chhabria said last week that he would dismiss part of a copyright lawsuit filed by comedian Sarah Silverman and other authors against Meta over its Llama AI folding their work into its training models, because the authors had not explained how Llama misused their intellectual property.
“I understand your core theory,” Chhabria told attorneys for the authors, according to Reuters. “Your remaining theories of liability I don’t understand even a little bit.”
While the judge will give Silverman and the other authors the option to resubmit their claim, the ruling highlights the lack of transparency around how these tools scrape information – and the knowledge gap that leaves for those outside the field.
In Washington, however, regulatory discussions around AI are moving at a quicker pace. This week, the FTC submitted a comment to the U.S. Copyright Office that emphasizes the FTC’s concerns about how AI will affect competition and consumer protection.
“The manner in which companies are developing and releasing generative AI tools and other AI products . . . raises concerns about potential harm to consumers, workers, and small businesses,” the comment reads.
“The FTC has been exploring the risks associated with AI use, including violations of consumers’ privacy, automation of discrimination and bias, and turbocharging of deceptive practices, imposter schemes and other types of scams.”
Deepfakes, malware and racial bias
Here’s the part where we show you all the scary ways the Bad Guys are using AI – and how even the Good Guys can use it with unintended consequences.
Scammers are using the promise of AI technology to spread malware, in a new twist on a gambit that’s as old as the internet itself. Reuters reported that scammers are offering downloads of Google’s Bard AI. The problem, of course, is that Bard isn’t a download – it’s available right on the web. Those unlucky enough to download the file will find their social media accounts stolen and repurposed by spammers. Google is suing, but the defendants are currently anonymous, calling into question just how much the suit will help.
Meanwhile, AI experts are still incredibly worried about the use of AI to create undetectable fake content, ranging from videos to images. By one estimate, 90% of all content on the internet could be AI-generated by 2025, Axios reported.
That’s just over one year away.
Now, content generated by AI isn’t inherently a bad thing. The problem is when you can’t tell what’s real from what’s artificial. The technology can already mimic reality with such precision that even leading AI minds can’t tell the difference. We can certainly expect AI-driven manipulation to play a major role in the 2024 U.S. presidential election.
However, there are some tools that can help prevent the creation of deepfakes in the first place, particularly where audio is concerned. NPR reported on a tool that adds a digital distortion to human voice recordings. People can still hear the clips, but it renders AI systems unable to create a good copy. While the tech is new, it offers a ray of hope in a bleak landscape for the truth.
Finally, former President Barack Obama is raising questions about the misuse of AI against people of color, particularly in policing. At a recent AI summit, Obama expressed optimism about the new regulations implemented by his former running mate, Joe Biden, but also noted the “big risks,” as AI algorithms can perpetuate the racism, ableism, sexism and other biases of their human creators. It’s an important note for communicators to keep in mind: AI models are as flawed as the people who create them. We must act with empathy and a diversity mindset to reduce harm.
The “doing” phase
We aren’t here just to give you bad news. Smart people are also dreaming up plenty of genuinely positive uses for AI that could change the way we all live and work. Do they all carry potential downsides? Naturally. But they can also spark creativity and free up humans for higher-level work.
For instance, the New York Times reports that generative AI may soon be able to do more than just recommend an itinerary for your next trip – it could book airfare and make reservations for you. This “doing” phase of AI could change everything, turning AI tools into genuine personal assistants rather than just a smarter Google search.
“If OpenAI is right, we may be transitioning to a world in which A.I.s are less our creative partners than silicon-based extensions of us — artificial satellite brains that can move throughout the world, gathering information and taking actions on our behalf,” the Times’ Kevin Roose wrote.
A recent test pushed this idea to its current practical limit: an AI fully negotiated a contract with another AI – no humans involved, save for the signatures at the end. CNBC reported that the AIs worked through issues surrounding a standard NDA. Here’s how it worked:
Luminance’s software starts by highlighting contentious clauses in red. Those clauses are then changed to something more suitable, with the AI keeping a running log of changes on the side. The AI also takes into account each company’s preferences for how it normally negotiates contracts.
For example, the draft NDA suggests a six-year term for the contract – against Luminance’s policy. The AI acknowledges this, then automatically redrafts the clause to a three-year term instead.
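To make the pattern concrete, here’s a toy version of the review-redraft-log loop CNBC describes. This is a minimal, hypothetical Python sketch, not Luminance’s proprietary system; the policy limit, clause wording and function names are all invented for illustration:

```python
import re
from dataclasses import dataclass, field

# Illustrative policy only: the longest contract term this company accepts.
# (Luminance's real system is proprietary and far more sophisticated.)
MAX_TERM_YEARS = 3
MAX_TERM_WORD = "three"

@dataclass
class Review:
    redrafted: list[str] = field(default_factory=list)
    change_log: list[str] = field(default_factory=list)  # the running log kept "on the side"

# Matches wording like "a term of six (6) years"
TERM = re.compile(r"term of \w+ \((\d+)\) years?", re.IGNORECASE)

def review_nda(clauses: list[str]) -> Review:
    """Flag term clauses that exceed policy, redraft them and log each change."""
    review = Review()
    for clause in clauses:
        match = TERM.search(clause)
        if match and int(match.group(1)) > MAX_TERM_YEARS:
            # Redraft the offending clause to the policy-compliant term.
            redraft = TERM.sub(f"term of {MAX_TERM_WORD} ({MAX_TERM_YEARS}) years", clause)
            review.change_log.append(
                f"Flagged {match.group(1)}-year term (exceeds {MAX_TERM_YEARS}-year policy); redrafted.")
            review.redrafted.append(redraft)
        else:
            review.redrafted.append(clause)
    return review

demo = review_nda(["This Agreement shall remain in force for a term of six (6) years."])
print(demo.redrafted[0])   # ... a term of three (3) years.
print(demo.change_log[0])
```

Even this crude version shows why the running change log matters: it’s the audit trail a human reviews before anyone signs.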
That’s a lot of trust to place in AI, obviously. But it shows what could be possible in just a short time. Imagine having AI review your social media posts for legal compliance rather than waiting for counsel to get back to you.
In a move that’s both neat and potentially terrifying for communicators, AI is being used to analyze minute changes in an executive’s speech that could indicate nerves – or larger problems than they’re letting on. A tool called Speech Craft Analytics can analyze audio recordings for changes in pitch, volume, use of filler words and other clues humans may miss, the Financial Times reported.
So you may soon be adding voice coaching to your media relations training, lest you be caught by a too-smart AI.
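For the curious, the basic signal processing behind this kind of vocal analysis is within reach of open-source tools. Below is a minimal sketch using the librosa audio library – emphatically not Speech Craft Analytics’ method, with a hypothetical filename and crude variability measures standing in for the real thing:

```python
import librosa
import numpy as np

# Load a recording of an executive's remarks (hypothetical filename).
y, sr = librosa.load("executive_qa.wav", sr=None)

# Pitch contour (fundamental frequency) per frame; NaN where unvoiced.
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

# Loudness proxy: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]

# Crude instability signals: how much pitch and volume wobble overall.
pitch_variation = np.nanstd(f0) / np.nanmean(f0)   # relative pitch variation
volume_variation = np.std(rms) / np.mean(rms)      # relative loudness variation

print(f"Relative pitch variation:  {pitch_variation:.2f}")
print(f"Relative volume variation: {volume_variation:.2f}")
```

Spotting filler words would require an additional speech-to-text pass on top of this, which is why the sketch stops at pitch and volume.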
AI and the workforce
Meanwhile, it’s also worth considering the deal the SAG-AFTRA actors’ union struck to end its 118-day strike, which secured, among other things, clear protections against AI replacing actors and extras.
Going even further than the WGA protections that ended the writers’ strike in September, SAG’s agreement holds implications for workforces outside the entertainment sector, too.
Wired’s Alex Winter reports:
The SAG deal is similar to the DGA and WGA deals in that it demands protections for any instance where machine-learning tools are used to manipulate or exploit their work. All three unions have claimed their AI agreements are “historic” and “protective,” but whether one agrees with that or not, these deals function as important guideposts. AI doesn’t just pose a threat to writers and actors—it has ramifications for workers in all fields, creative or otherwise.
The absence of enforceable laws that would shackle Big Tech doesn’t make these deals a toothless compromise—far from it. There is great value in a labor force firmly demanding its terms be codified in a contract. The studios can find loopholes around some of that language if they choose, as they have in the past, but they will then be in breach of their agreed contract and will face publicly shaming lawsuits by influential and beloved artists and the potential of another lengthy and costly strike.
In the absence of federal regulations, who should oversee the creation of internal guidelines and practices that set expectations between businesses and their workforces?
That question may soon be answered, as the role of the Chief AI Officer (CAIO) is on the rise. According to new research from Foundry, 11% of midsize to large organizations already have a CAIO, while another 21% are actively searching for the right candidate.
At businesses that don’t have a dedicated CAIO on the horizon, meanwhile, communicators should embrace the opportunity to become early adopters not only of the tools, but of the internal guidelines and governance practices that will protect jobs and corporate reputation.
What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!
Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.
Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in Vulture, Newsweek, Vice, Relix, Flaunt and many other publications.