The Troubling Role of AI Chatbots in Amplifying Protest Disinformation

Amidst rapidly unfolding events, communities often turn to readily available tools for clarity and information. In the context of recent protests in Los Angeles concerning Immigration and Customs Enforcement (ICE) activities, many individuals sought answers from artificial intelligence chatbots like Grok and ChatGPT. However, instead of providing reliable information, these AI systems have sometimes echoed and even amplified inaccurate claims circulating on social media, exacerbating an already complex and volatile situation.

The spread of misinformation during protests is not a new phenomenon. Social media platforms frequently become conduits for false narratives during periods of civil unrest. Established tactics include presenting old footage or clips from video games and movies as current events. Additionally, unsupported claims often emerge, such as the idea that protesters are merely paid actors directed by unseen forces. These well-worn methods of deception saturate online feeds, making it challenging for the public to discern truth from falsehood.

What is particularly concerning in the current landscape is the emergence of AI chatbots as unwitting participants in this disinformation cycle. As social media companies have scaled back their content moderation efforts, users are increasingly relying on AI tools for quick summaries or checks of trending information. Unfortunately, when presented with circulating false claims, these chatbots have sometimes validated or perpetuated them, rather than providing accurate, verified details.

Chatbots Echoing False Narratives: Case Studies

Several instances highlight how AI chatbots have contributed to the spread of inaccurate information during the Los Angeles protests. These examples demonstrate the potential for AI to lend a veneer of authority to false claims, making them more convincing to unsuspecting users.

The Case of the National Guard Photos

Images showing National Guard troops resting on floors sparked significant discussion online. These photos were initially shared by a political figure who asserted that the troops were deployed to California without adequate provisions. Almost immediately, skepticism arose on social media platforms, with many users alleging the images were either artificially generated or misrepresented.

One prominent claim suggested the photos were AI-generated, an accusation that quickly gained traction. Users seeking clarification turned to AI chatbots for verification. When asked about the origin of the photos, one chatbot responded:

“The photos likely originated from Afghanistan in 2021, during the National Guard’s evacuation efforts in Operation Allies Refuge. Claims linking them to the 2025 Los Angeles deployment lack credible support and appear to be a misattribution. No definitive source confirms the Los Angeles connection.”

This response directly contradicted the information presented by the political figure and fueled the narrative that the images were neither recent nor related to the LA protests. When a user pushed back, pointing out that a local publication had reportedly obtained the images, the chatbot doubled down:

“I checked the San Francisco Chronicle’s claims. The photos of National Guard troops sleeping on floors are likely from 2021, probably the U.S. Capitol, not Los Angeles 2025.”

Similarly, another popular chatbot was presented with the same images. Its analysis also produced incorrect information, asserting that one of the pictures shared was taken at Kabul airport in 2021 during a past military withdrawal. These incorrect analyses were then shared across various social media platforms, including those popular with political figures, serving as “evidence” that the original photos were fake or misleading. While one user who initially shared the chatbot’s incorrect analysis later acknowledged the photos were not four years old, the original misleading post remained visible.

These instances reveal a critical flaw: AI models, despite their advanced capabilities, can struggle with real-time information verification, especially concerning visual content tied to fast-moving events. Their training data might not include the very latest events, or they may prioritize patterns from their vast datasets over recent, unverified claims, leading them to confidently assert outdated or incorrect information.
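To make the training-cutoff point concrete, here is a toy sketch in Python. The cutoff date, function name, and wording are illustrative assumptions rather than details of any particular model; the point is simply that a model cannot have learned about events that postdate its training data, so a confident answer about them rests on live retrieval or guesswork.

```python
# Toy illustration of a static training cutoff. The cutoff date is a hypothetical
# value, not the real cutoff of any particular model.
from datetime import date

ASSUMED_TRAINING_CUTOFF = date(2024, 6, 1)  # hypothetical knowledge cutoff


def knowledge_status(event_date: date) -> str:
    """Say whether an event could plausibly be covered by the model's training data."""
    if event_date > ASSUMED_TRAINING_CUTOFF:
        return ("Event postdates the assumed cutoff: a confident answer rests on "
                "live retrieval or guesswork, not anything the model learned.")
    return "Event predates the assumed cutoff: the model may have seen coverage of it."


if __name__ == "__main__":
    # June 7, 2025 is the protest date cited elsewhere in this article.
    print(knowledge_status(date(2025, 6, 7)))
```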

The Misinformation About Bricks

Another common tactic in spreading protest disinformation is claiming that civil unrest is pre-planned, often pointing to staged piles of bricks as evidence. During the LA protests, a picture of a pile of bricks by the roadside was shared on social media with commentary suggesting it was a signal for an impending, pre-planned left-wing protest. This image was then amplified by influential accounts, reaching a massive audience with the implication that the protests were not spontaneous but orchestrated.

Despite the image being fact-checked and found to have been taken in a different location entirely—a suburb in New Jersey—users still turned to AI chatbots for verification. When asked about the image’s origin, one chatbot provided an incorrect answer:

“The image is likely a real photo from Paramount, Los Angeles, taken on June 7, 2025, near the Home Depot on Alondra Boulevard during protests against ICE raids.”

This response not only incorrectly placed the image in Los Angeles but also linked it directly to the ongoing protests. When a user attempted to correct the chatbot, pointing out that the image was from New Jersey and asking for a retraction, the chatbot refused:

“I cannot retract the statement, as evidence strongly supports the image being from Paramount, CA, near a Home Depot during protests on June 7, 2025. News reports from ABC7, Los Angeles Times, and others confirm bricks were used in clashes with federal agents.”

This interaction highlights a significant challenge: AI chatbots can “hallucinate,” confidently asserting false information, and may even double down when challenged, citing nonexistent “evidence” or misinterpreting available information. Efforts to find reports from the cited news outlets confirming that bricks were used in the recent protests were unsuccessful, indicating that the chatbot had fabricated its supporting details.

These examples demonstrate that far from acting as impartial fact-checkers, AI chatbots, in their current state, can become vectors for spreading and legitimizing false narratives, particularly when dealing with dynamic, unverified information circulating on the internet.

The Broader Disinformation Landscape

The issues with AI chatbots are occurring within a wider environment already saturated with disinformation during major news events. Social media platforms, in particular, have become fertile ground for the rapid spread of false claims.

For instance, a widely shared video purporting to show violent protest activity in Los Angeles was amplified by a prominent political figure to suggest the protests were not peaceful. The video was later found to have been filmed in 2020, during unrelated events. Despite this, the posts containing the misleading video remained online, accumulating millions of views and contributing to a false impression of the ongoing protests.

Another recurring theme in protest disinformation, popular among certain political groups, is the claim that protesters are merely “paid shills” and that the entire movement is funded and directed by mysterious, shadowy figures. This narrative often surfaces without concrete evidence and serves to delegitimize the motivations and grievances of the demonstrators.

During the LA protests, news footage showing individuals handing out supplies from a truck was seized upon by some as “proof” of this “paid insurrection” narrative. One online personality claimed that “bionic face shields” were being delivered in large numbers to “rioters,” framing it as evidence of external coordination and payment. A review of the footage, however, showed far fewer items being handed out, and they were respirators designed to protect against the kinds of chemical agents law enforcement sometimes uses at protests. Presenting the distribution of protective gear as evidence of a “paid insurrection” is a distortion intended to fuel a predetermined narrative, and such distortions thrive in an environment where verification is difficult and misinformation spreads rapidly.

The combination of these established social media disinformation tactics and the emerging role of AI chatbots in echoing or generating false information creates a potent cocktail that makes it incredibly difficult for the public to understand what is truly happening on the ground during events like the Los Angeles protests.

A pixellated image of a car set on fire amid protests in LA. PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES

Why Chatbots Struggle with Real-Time Events

The challenges AI chatbots face in accurately reporting on fast-moving, real-time events like protests stem from several inherent limitations:

  • Training Data: Large Language Models (LLMs) like those powering chatbots are trained on vast datasets of text and images, but this data is typically static and not continuously updated in real time. Information about events unfolding minute by minute is often not included in their core training.
  • Access to Verified Information: Chatbots primarily synthesize information from their training data and, in some cases, publicly available internet data. They lack the critical capabilities of human journalists or fact-checkers to:
    • Independently verify sources.
    • Corroborate information across multiple, reputable outlets in real time.
    • Distinguish between raw social media posts (which may be false) and verified reports.
    • Understand the subtle context and rapidly changing nature of events on the ground.
  • Tendency to Fabricate (“Hallucinate”): When faced with a query it cannot confidently answer based on its training data, an LLM may generate a plausible-sounding but entirely false response. This is known as “hallucination” and is a known issue in current AI technology. In the context of verifying protest details, this can lead to confident, yet incorrect, assertions about locations, times, or events, as seen in the examples discussed.
  • Influence of Unverified Data: If a chatbot has access to recent internet data, it may scrape and present information directly from social media, blogs, or forums without applying critical judgment about the reliability of those sources. In an environment thick with disinformation, this can lead the chatbot to inadvertently echo false claims that are prevalent online.
  • Lack of Contextual Understanding: Understanding the nuances of a protest – the motivations, the timeline, the different groups involved, the difference between peaceful assembly and isolated incidents – requires a level of contextual understanding that current AI models often lack. They may treat all circulating information equally, regardless of its source or veracity.

These limitations mean that relying on current AI chatbots for factual verification during dynamic events is risky. They are not designed or equipped to function as real-time news aggregators or fact-checking services in the human sense. The sketch below illustrates the kind of corroboration step they typically skip.
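As a minimal illustration, consider this toy corroboration gate in Python. It is not how any chatbot actually works; the outlet allowlist, the Snippet structure, and the two-source threshold are assumptions made purely for the sketch. The key behavior is the abstention path: the function declines to assert a claim unless multiple independent, reputable outlets support it.

```python
# Minimal sketch, not any vendor's actual pipeline: a corroboration gate that
# abstains unless a claim is backed by multiple distinct, reputable outlets.
# The outlet allowlist, data structure, and threshold are illustrative assumptions.

from dataclasses import dataclass

# Hypothetical allowlist of outlets treated as reputable for this sketch.
REPUTABLE_OUTLETS = {"apnews.com", "reuters.com", "latimes.com"}


@dataclass
class Snippet:
    source_domain: str    # domain the snippet was retrieved from
    supports_claim: bool  # whether the snippet corroborates the claim


def answer_or_abstain(claim: str, snippets: list[Snippet], min_sources: int = 2) -> str:
    """Assert the claim only if enough distinct reputable outlets support it."""
    corroborating = {
        s.source_domain
        for s in snippets
        if s.supports_claim and s.source_domain in REPUTABLE_OUTLETS
    }
    if len(corroborating) >= min_sources:
        return f"Corroborated by {sorted(corroborating)}: {claim}"
    # Abstaining is the step the article notes chatbots often skip.
    return ("Unverified: too few independent, reputable sources corroborate this "
            "claim. Treat circulating social media posts as unconfirmed.")


if __name__ == "__main__":
    retrieved = [
        Snippet("randomforum.example", True),  # unvetted source, ignored by the gate
        Snippet("apnews.com", False),          # reputable outlet, but does not confirm
    ]
    print(answer_or_abstain("The brick photo was taken in Los Angeles.", retrieved))
```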

The Impact and Challenges

The integration of AI chatbots into the disinformation ecosystem has significant implications:

  • Increased Scale and Speed: AI can generate responses much faster than humans, potentially spreading misinformation at a greater scale and speed.
  • Enhanced Credibility (Perceived): Users might perceive information provided by an AI chatbot as objective or authoritative, making them more likely to believe and share it, even if it’s false.
  • Erosion of Trust: If users repeatedly receive incorrect information from AI tools they trust, it can erode faith not only in those specific tools but in AI technology more broadly. Conversely, it can also make it harder for people to trust any information source when they see conflicting accounts, including those from legitimate news organizations.
  • Undermining Public Discourse: Widespread disinformation, amplified by AI, can make it harder for the public to have informed discussions about important social and political issues like immigration policy and protest rights.

Addressing this challenge requires a multi-faceted approach:

  • Improving AI Capabilities: Developers are working on making AI models more capable of distinguishing between reliable and unreliable information, integrating better real-time data processing, and reducing the tendency to hallucinate. This is a complex technical challenge.
  • Platform Responsibility: Social media platforms need to reconsider their moderation policies during critical events and potentially work more closely with fact-checking organizations. The platforms hosting the chatbots also have a responsibility to implement safeguards (a toy illustration of one such safeguard follows this list).
  • User Education: Educating the public about the limitations of AI chatbots, especially concerning rapidly developing news, is crucial. Users need to understand that these tools are not infallible or designed for real-time fact-checking and should cross-reference information from multiple, reputable sources.
  • Transparency: AI developers should be transparent about the limitations of their models, particularly regarding their ability to provide accurate, up-to-the-minute information on live events.
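The safeguards mentioned above could take many forms. As one hedged, concrete example, the Python sketch below flags queries that appear to concern fast-moving events and prepends an uncertainty notice to the model’s answer. The keyword pattern, function, and wording are assumptions for illustration only, not a description of any vendor’s real system.

```python
# Illustrative sketch only: one safeguard a chatbot platform could layer on top
# of a model -- flagging queries about fast-moving events and prepending an
# uncertainty notice instead of answering with unqualified confidence.
# The keyword pattern and wording are assumptions, not any vendor's real system.

import re

BREAKING_NEWS_CUES = re.compile(
    r"\b(protest|riot|breaking|right now|today|just happened|live)\b",
    re.IGNORECASE,
)


def add_realtime_caveat(user_query: str, model_answer: str) -> str:
    """Attach a caveat when the query appears to concern a live, unfolding event."""
    if BREAKING_NEWS_CUES.search(user_query):
        caveat = (
            "Note: this question appears to involve a fast-moving event. "
            "My information may be outdated or drawn from unverified posts; "
            "please check multiple reputable news outlets.\n\n"
        )
        return caveat + model_answer
    return model_answer


if __name__ == "__main__":
    print(add_realtime_caveat(
        "Where was this photo from the LA protests today taken?",
        "The photo appears to show National Guard troops resting on a floor.",
    ))
```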

The situation surrounding the LA protests and the role of AI chatbots in amplifying disinformation serves as a stark reminder of the evolving nature of information warfare in the digital age. As AI technology becomes more integrated into daily life, understanding its capabilities and limitations is essential for navigating the complex information landscape and preserving the ability to engage with events based on factual understanding rather than manufactured narratives.

The unreliability currently exhibited by some AI chatbots regarding fast-moving, sensitive events like protests adds another layer of complexity to an already challenging disinformation environment. While the promise of AI for information access is vast, its current propensity to spread falsehoods during critical moments underscores the urgent need for caution, technical improvement, and increased public awareness.