Texas Investigates AI Chatbot Companies for Deceptive Mental Health Services Aimed at Children


Image: Futuristic 3D render by Steve Johnson, via Unsplash

Texas is investigating AI chatbot platforms Meta AI Studio and Character AI for deceptively marketing themselves as mental health tools.

Attorney General Ken Paxton said that both platforms have gone beyond offering general therapeutic advice and instead present themselves as professional therapeutic tools, despite lacking the credentials and oversight required to give such guidance.

He also noted that these platforms are easily accessible to children and to individuals who may be struggling with mental disorders. And although the chatbots claim confidentiality, their terms of service show that all interactions are logged, tracked, and used for targeted advertising.

“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” said Paxton.

By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.

A Meta spokesperson told Texas Scorecard that there are clear disclaimers on the platform informing users that the answers given are generated by AI, not real people.

“These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”

A spokesperson for Character AI also said that every chat on the platform contains a disclaimer that the characters created by users are purely fictional.
