The Los Angeles Times has swiftly removed its newly launched AI-powered feature, “Insights,” just one day after its debut, following intense backlash over its analysis of an article discussing the Ku Klux Klan (KKK).
The LA Times introduced “Insights” earlier this week, marketing it as a tool designed to help readers better understand different viewpoints. According to the newspaper, it was meant to accompany articles that express personal perspectives—such as opinion pieces, commentary, and reviews—by summarising key ideas and presenting alternative viewpoints.
Billionaire owner Patrick Soon-Shiong championed the feature as “the next evolution of the LA Times,” stating in a letter to readers that it would provide an “instantly accessible way to see a wide range of different AI-enabled perspectives alongside the positions presented in the article.”
The tool displayed bullet-pointed summaries at the bottom of select articles, categorising content under headings like “Viewpoint” (indicating the article’s political stance) and “Perspectives” (offering alternative takes).
Soon-Shiong expressed confidence in the tool, claiming it would enhance engagement and help readers “navigate the issues facing this nation [America]”. However, the AI’s first real-world tests quickly exposed its flaws.
Critics have claimed it’s an attempt to inject more conservative views to balance the LA Times’ progressive stance, at a time when media proprietors are trying to curry favour with US President Donald Trump.
The tool came under fire when it attempted to provide a counterpoint to an op-ed discussing the dangers of unregulated AI in historical documentaries. It labelled the piece as having a “Centre Left point of view” and stated that AI could “democratise historical storytelling”—a fairly benign take.
But controversy erupted when New York Times reporter Ryan Mac noticed the AI’s analysis of another LA Times article—this one covering the 100th anniversary of Anaheim’s removal of KKK members from its city council.
The AI-generated note seemingly downplayed the Klan’s racist ideology, stating: “Local historical accounts occasionally frame the 1920s Klan as a product of ‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement, minimising its ideological threat.”
The implication that the KKK was more of a cultural reaction than an organisation founded on hatred and racist ideology struck many as wildly inappropriate.
“Um, AI actually got that right. [Orange County people] have minimised the 1920s Klan as basically anti-racists since it happened. But hey, what do I know? I’m just a guy who’s been covering this for a quarter century,” said Gustavo Arellano, the columnist who wrote the piece, in a post on social media.
Regardless of historical interpretations, the AI’s wording—framing the KKK as a reactionary movement rather than a hate group—set off alarms about the potential dangers of letting an algorithm handle politically sensitive topics.
The controversy sparked internal pushback from LA Times journalists and their union. The Media Guild of the West, which represents the paper’s journalists, issued a strongly worded statement expressing concerns over the use of AI-generated analysis that was “unvetted by editorial staff”.
“We do not believe this approach will do much to enhance trust in the media. Quite the contrary, this tool risks further eroding confidence in the news,” the statement read.
The backlash led to the feature being quietly removed from the controversial article, and soon after, the LA Times took the entire tool offline.
The debacle at the LA Times is part of a broader debate about AI’s role in journalism. Many newsrooms have begun experimenting with AI for content generation, audience engagement, and summarisation. However, as the LA Times’ misstep shows, AI tools that attempt to provide “balanced” perspectives can quickly go off the rails—especially when dealing with politically charged or historically sensitive subjects.
The controversy also arrives amid growing tension between Soon-Shiong and the LA Times newsroom. In December, he reportedly asked the newspaper’s editorial board to “take a break” from writing about Donald Trump. Before that, he blocked the paper’s endorsement of Kamala Harris for president, prompting subscriber cancellations and high-profile resignations.
Against this backdrop, the introduction of “Insights” was seen by some as an attempt to steer the LA Times’ editorial tone in a more “neutral” direction—one that critics argue could come at the expense of journalistic integrity.
The removal of the AI feature doesn’t just mark the end of a short-lived experiment; it raises serious questions about how newsrooms can—and should—use AI. If a machine-learning model can’t handle historical nuance without appearing to downplay racism, should it be trusted to frame political debates, ethical discussions, or social issues?
For now, it seems the LA Times has learned the hard way that not all AI-driven “innovations” belong in the newsroom.