Freshness note: This analysis was last updated 27 days ago. Fast-moving policy claims can change quickly, so check for newer official updates before relying on this verdict.

Mixed Evidence · Technology

Meta's own tests show it largely fails to protect kids from AI chatbots

Published February 18, 2026 · Updated February 18, 2026

Summary

New Mexico's Attorney General filed a lawsuit against Meta in February 2025 alleging the company's internal testing showed its AI chatbot failed to prevent harmful interactions with minors. The lawsuit cites internal Meta documents, though the specific test results and their interpretation are disputed by Meta. The claim accurately reflects allegations in legal filings but represents one party's characterization of internal tests rather than independently verified findings.

Primary Sources

  • Lawsuit filed by AG Raúl Torrez alleging Meta's AI chatbot fails to protect children based on internal company testing

  • News coverage of the New Mexico lawsuit citing internal Meta testing on child safety protections

  • Official announcement of legal action against Meta regarding child safety concerns with AI features

Evidence Supporting the Claim

  • New Mexico Attorney General filed a lawsuit in February 2025 specifically citing Meta's internal testing documents
  • The lawsuit alleges Meta's own tests demonstrated failures in child safety protections for AI chatbot features
  • Legal filings reference internal Meta documents showing inadequate safeguards against harmful content for minors

Evidence Against / Context

  • The claim presents allegations from one party in active litigation rather than independently verified test results
  • Meta has disputed the characterization of its internal testing and safety measures
  • The specific methodology, scope, and results of Meta's internal tests have not been publicly released or independently evaluated
  • Legal complaints represent the plaintiff's interpretation of evidence rather than established facts

Timeline

  • New Mexico Attorney General Raúl Torrez filed lawsuit against Meta alleging child safety failures with AI chatbot based on internal company testing

  • Axios and other outlets reported on lawsuit allegations regarding Meta's internal testing results

What This Means

Structured interpretation — not opinion

  • The lawsuit brings forward allegations based on internal Meta documents, but these represent claims in active litigation rather than findings by independent researchers or regulators.

  • Internal company testing results cited in legal complaints may not reflect current safety measures, as companies routinely update protections based on testing findings.

  • The characterization of the test results as showing Meta 'largely fails' represents the Attorney General's interpretation of internal documents rather than Meta's own assessment.

  • Resolution of factual disputes about testing methodology and results will likely occur through discovery and court proceedings.

