Sports Illustrated has become entangled in controversy over the publication of AI-generated stories attributed to fictitious authors, with little transparency about how the content was produced. As media companies experiment with artificial intelligence in journalism, they face ethical questions about truth, disclosure, and responsibility.
Once a flagship of Time Inc., Sports Illustrated now operates as a web-centric platform under the Arena Group. The magazine faced accusations of publishing articles commissioned from a third-party company, AdVon Commerce, under invented human bylines and without disclosure. A Futurism investigation, which exposed AI-generated author portraits and unverifiable identities, prompted widespread scrutiny.
The Futurism report found product review stories attributed to authors whose identities could not be verified. One byline, Drew Ortiz, carried a headshot that appeared on a website specializing in AI-generated imagery. After Futurism raised questions, Sports Illustrated quietly removed these authors from its website without offering any explanation.
Sports Illustrated denied that AI tools directly wrote the stories, instead placing the blame on AdVon Commerce, which it said had assured the magazine that human writers produced the content. The magazine condemned the use of pseudonyms and terminated its partnership with AdVon Commerce.
The Sports Illustrated Union expressed shock at the revelations and demanded transparency and accountability from Arena Group management, insisting on adherence to journalistic standards and a commitment not to publish computer-generated stories under fake bylines.
The incident echoes earlier experiments at Gannett and CNET, where AI-generated content also stirred controversy. Gannett published AI-generated articles on high school sports events under the byline "LedeAI" and faced backlash after errors were exposed; the lack of upfront communication about AI's role amplified the negative publicity. CNET attributed AI-generated articles to its Money Staff and drew criticism for not explicitly disclosing AI's involvement until the experiment was uncovered.
By contrast, some outlets have been transparent about their AI experiments. BuzzFeed credited a travel article to both a human writer and "Buzzy the Robot," its creative AI assistant, openly acknowledging the collaboration between human creativity and AI.
As journalism confronts AI, the need for honesty, transparency, and responsible use of the technology is increasingly clear. The Sports Illustrated episode serves as a cautionary tale: media organizations experimenting with AI must pair innovation with clear disclosure, or risk undermining the journalistic integrity their credibility depends on.