Here’s some old news: artificial intelligence for content creation. Remember when the Washington Post started using Heliograf to report on local high school football games? That was in 2017.
Three years before that, the Associated Press announced it was using Automated Insights to automate its quarterly earnings stories, touting that the system expanded coverage with fewer errors, all without eliminating a single job.
It’s such old news that the Brafton blog was already writing about AI back in 2017 as well, a whole five years before the ChatGPT boom of 2022.
But with AI being the in-thing all this time later, there are bound to be some bad-apple examples of how not to approach using it for content creation. Tea, anyone?
The Sports Illustrated Mishap
In 2023, one story stood out as a textbook example of the wrong way to use artificial intelligence, the complete opposite of the approach Brafton and many other organizations and experts recommend. If you didn’t hear about this one when the news broke, here’s a brief recap of what went down:
- Futurism published an article accusing Sports Illustrated of running AI-generated content that was attributed (here comes the kicker) to fake, AI-generated writer profiles, headshots and author bios included.
- Then, Sports Illustrated’s publisher, The Arena Group, responded by saying the content was provided by a third-party company that had assured them “all of the articles in question were written and edited by humans.”
- But they also said that some of the content was published under “a pen or pseudonym in certain articles to protect author privacy.” The spokesperson for The Arena Group noted that they “strongly condemned” the practice and ended the partnership with that provider.
- Finally, Sports Illustrated removed all the content that had been attributed to the fake profiles from its site.
This story is fascinating for lots of reasons, but a handful of key observations stand out:
- Someone, somewhere, wasn’t honest about where the content came from.
- The Arena Group may not have vetted their content vendor properly, or at all, and simply took the vendor’s word for it.
- In their rush to deny responsibility, The Arena Group wound up condemning a totally acceptable practice: using pseudonyms.
- No one really got away with it.
Lying to readers is never acceptable, and had The Arena Group approached the situation differently, they wouldn’t have needed to lie at all. But what does an optimal approach look like, especially now that generative AI is even better than it was when this kerfuffle played out?
How To Mitigate AI Backlash In Content Creation
Luckily, it’s actually pretty simple to leverage AI tools without being dishonest with your audience. Not only is it easy, it’s an increasingly accepted practice. Even just a year ago, when the above example happened, AI was still raising a ton of questions, and I’d bet more than a few folks were hesitant to adopt it.
I’ve shared this stat before, but by early 2024, generative AI adoption had shot up from 33% to 65%, right after this story made headlines. So it clearly didn’t do enough to scare businesses away from using the technology.
Vet Your Vendors (And Vendors, Speak Up)
Beyond marketing, vetting vendors is standard practice across every industry. At a certain point, pointing fingers just doesn’t cut it, especially if you’re a hugely successful, recognizable household name. If you’re working with a content agency, do your due diligence. Ask them about AI: whether they use it when writing content, how they use it and what to expect from their processes.
Ideally, agencies would be transparent about this upfront, but that allegedly didn’t happen here. Still, The Arena Group and Sports Illustrated should have raised questions about AI if they suspected it was in play. Surely there were signs before the content went live.
Make Human Intervention a Requirement
For every piece of AI content you produce, human intervention needs to happen, ideally at multiple steps of the process. Articles should always be fact-checked and proofread, AI-created or not, but AI-created content especially.
Blindly generating and publishing content is a recipe for disaster. Heed the warnings most reputable AI models provide: “ChatGPT can make mistakes.”
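If your content moves through any kind of scripted pipeline, you can even turn that requirement into a hard gate. Here’s a minimal sketch in Python; it’s purely illustrative, and the fields and checks are my own assumptions rather than any real CMS’s API:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    body: str
    ai_assisted: bool
    fact_checked: bool = False
    proofread_by: str | None = None  # name of the human editor who signed off

def publish(draft: Draft) -> None:
    """Refuse to publish until the required human intervention has happened."""
    if draft.ai_assisted and not draft.fact_checked:
        raise ValueError("AI-assisted drafts must be fact-checked before publishing.")
    if draft.proofread_by is None:
        raise ValueError("Every draft, AI-assisted or not, needs a human proofreader.")
    print(f"Published {len(draft.body)}-character draft, proofread by {draft.proofread_by}.")

# A draft that has cleared both human checkpoints goes through.
publish(Draft(body="...", ai_assisted=True, fact_checked=True, proofread_by="A. Editor"))
```

The point isn’t the code itself; it’s that “a human reviewed this” becomes something your process enforces rather than something you hope happened.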
Make It Clear That AI Contributed to Your Content
AI disclosures, AI disclosures, AI disclosures. There are lots of different ways to let readers know that what they’re digesting, whether in part or in full, was produced with the help of AI. While an overwhelming majority of business leaders agree that disclosing the use of AI is important and necessary, it’s not a unanimous opinion. And since there is currently no legislation that mandates such disclosures, not everyone will do it.
Still, the adage rings truer than ever in the wake of the Sports Illustrated scandal: Honesty is the best policy. You can disclose through a simple byline, an author’s note, a short appendage at the bottom of a blog or something similar. Even a single line like “This article was created with the assistance of AI and reviewed by a human editor” does the job.
Now that AI is more widely accepted, you probably won’t get many side eyes if you’re honest about how you’re using it — but you could get tons if you’re opaque or outright untruthful.
Periodically Review Your AI-Generated Content
Let me be clearer: Always review your AI-assisted content, but also compare it periodically to your human-written content to learn two things. What is the AI doing right, and what do your humans do better? Some elements to watch out for include:
- Unique angles and storytelling.
- Factual correctness of data, statistics and claims.
- Proper citations or references for sourced information (especially now that tools like ChatGPT search the web).
- Consistency with the brand’s established tone, language and style.
- Emotional resonance with the target audience.
- Repetitive phrases or clichés.
- Alignment with current events, social trends and cultural nuances.
- Unintended biases or stereotypes in the content.
Even now, Gen AI can’t do it all. Whatever your talented team of humans can do better — which is probably most if not all of the above — let them!
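If you want to put a little automation behind that comparison, here’s a minimal sketch of one checklist item: flagging repetitive phrases. It’s Python, purely illustrative, and the phrase length and threshold are arbitrary assumptions you’d tune for your own content:

```python
import re
from collections import Counter

def repeated_phrases(text: str, n: int = 4, min_count: int = 3) -> list[tuple[str, int]]:
    """Return n-word phrases that appear at least min_count times in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return [(phrase, count) for phrase, count in Counter(ngrams).most_common() if count >= min_count]

sample = (
    "In today's fast-paced world, content matters. "
    "In today's fast-paced world, brands compete for attention. "
    "In today's fast-paced world, readers skim."
)
# Flags the cliché opener that appears three times in the sample.
print(repeated_phrases(sample))
```

Run the same check over an AI draft and a comparable human-written piece, and the difference is usually instructive.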
Poll Your Readers
Once you’ve published some AI-assisted pieces and your audience has had a chance to take everything in, knowing full well the content was produced with the help of AI, ask them how they perceive it. It doesn’t get more transparent than that! This could be a poll on social media or a newsletter where you ask them to reply with their thoughts. Then, you can make adjustments to your strategy as you would with any other tactic: with data to back them up.
Here are some questions you could ask your audience about your AI content to get a pulse check, ranging from general awareness and perception to preferences and improvement:
- “How do you feel about brands using AI to create content?”
- “Does knowing that content is AI-assisted change your perception of a brand?”
- “Have you noticed a difference in quality between AI-assisted content and content created without AI assistance?”
- “Would you like to see a mix of AI-assisted and human-created content?”
- “What aspects of AI-assisted content do you enjoy most?”
Of course, it’s unlikely that all the replies will be “pretty,” even if you’re doing everything right when it comes to AI content. By and large, it’s still a polarizing topic with equal shares of defenders and doubters. But don’t let that get you down.
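However you run the poll, treat the replies as data. Here’s a minimal sketch for tallying responses, assuming (hypothetically) you’ve exported them to a CSV with “question” and “answer” columns; the file name and column names are placeholders, not a real export format:

```python
import csv
from collections import Counter, defaultdict

def tally_responses(path: str) -> dict[str, Counter]:
    """Count how often each answer was given, grouped by question."""
    tallies: dict[str, Counter] = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tallies[row["question"]][row["answer"].strip().lower()] += 1
    return tallies

for question, counts in tally_responses("poll_responses.csv").items():
    print(question)
    for answer, count in counts.most_common():
        print(f"  {answer}: {count}")
```

Even a rough count like this makes it easier to adjust your strategy based on what readers actually said rather than on the loudest reply.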
Never Make the Same Mistake Twice
As the saying goes, you should never make the same mistake twice. In general, I’d say that advice is unnecessary; it’s OK to make mistakes, and lots of them. That’s how humans learn. But when it comes to something like this (being secretive about what you’re feeding your audience, then pinning the blame on others when you get caught), the cliché definitely applies.
If The Arena Group were to let something like this slip again, it could mean irreparable reputation damage. That sounds scary, but at the end of the day, we have to take responsibility for our content, especially today. Readers deserve to know what they’re consuming, and the organizations delivering that content are more than equipped to provide the clarity.