Artificial intelligence is reshaping industries, and the Los Angeles Times is diving in headfirst. But as the publication appends AI-generated insights and alternative viewpoints to its articles, the move is sparking both curiosity and controversy. Is this the future of journalism, or a cautionary tale in the making?

AI Meets Journalism: The “Voices” Initiative

Patrick Soon-Shiong, billionaire owner of the LA Times, recently unveiled the outlet’s new “Voices” label and an AI-powered feature to go with it. Articles carrying the “Voices” tag are those that take a stance or are written from a personal perspective: opinion pieces, reviews, and commentary. But the real kicker? At the bottom of these articles, readers will now find AI-generated “Insights”: bullet points that summarize key takeaways and even offer “Different views on the topic.”

Soon-Shiong frames the feature as a service to readers. “I believe providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation,” he wrote in a letter to readers.

But not everyone is convinced.

Pushback from the Guild

The LA Times Guild, representing the paper’s unionized staff, has expressed skepticism about the initiative. Matt Hamilton, the Guild’s vice chair, acknowledged the need to distinguish news reporting from opinion pieces but questioned the wisdom of relying on unvetted AI analysis. “We don’t think this approach—AI-generated analysis unvetted by editorial staff—will do much to enhance trust in the media,” he said in a statement reported by The Hollywood Reporter.

And the concerns aren’t unfounded.

AI’s Growing Pains

Just one day into the rollout, the AI tool has already produced some eyebrow-raising results. Take, for example, a March 1st opinion piece about the dangers of using AI to create historical documentaries. The AI-generated insights labeled the article as “generally aligned with a Center Left point of view” and suggested that “AI democratizes historical storytelling.” Neither claim was flatly wrong, but the analysis was tone-deaf given the article’s critical stance.

Even more troubling was the AI’s take on a February 25th story about California cities that elected Ku Klux Klan members in the 1920s. One of the now-removed insights suggested that historical accounts sometimes portrayed the Klan as “a product of ‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement.” While technically true, the framing risked downplaying the Klan’s ideological threat—a misstep that left readers and journalists alike scratching their heads.

The Broader AI Landscape

The LA Times isn’t alone in experimenting with AI. Outlets such as Bloomberg, The Wall Street Journal, and The Washington Post are also leveraging the technology, though typically for tasks like data analysis or content summarization rather than editorial assessments. Still, the risks of AI missteps are well-documented. From MSN’s AI news aggregator recommending a food bank as a tourist lunch spot to Apple’s notification summaries mangling headlines, the potential for embarrassment, or worse, is real.

As the LA Times forges ahead with its AI experiment, the question remains: Can AI enhance journalism without compromising its integrity? Or will this bold move end up eroding trust in an already fragile media landscape?