Texas Attorney General Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over child privacy and safety concerns. The investigation will assess whether Character.AI — and other platforms that are popular with young people, including Reddit, Instagram, and Discord — comply with Texas’ child privacy and safety laws.
The investigation by Paxton, who is often tough on technology companies, will look into whether these platforms complied with two Texas laws: the Securing Children Online through Parental Empowerment Act, or SCOPE Act, and the Texas Data Privacy and Security Act, or DPSA.
These laws require platforms to provide parents with tools to manage the privacy settings of their children’s accounts, and hold tech companies to strict consent requirements when collecting data on minors. Paxton claims both of these laws extend to how minors interact with AI chatbots.
“These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” Paxton said in a press release.
Character.AI, which lets you set up generative AI chatbot characters that you can text and chat with, recently became embroiled in a number of child safety lawsuits. The company’s AI chatbots quickly took off with younger users, but several parents have alleged in lawsuits that Character.AI’s chatbots made inappropriate and disturbing comments to their children.
One Florida case claims that a 14-year-old boy became romantically involved with a Character.AI chatbot and told it he was having suicidal thoughts in the days leading up to his suicide. In another case out of Texas, one of Character.AI’s chatbots allegedly suggested an autistic teenager should try to poison his family. Another parent in the Texas case alleges one of Character.AI’s chatbots subjected her 11-year-old daughter to sexualized content over the prior two years.
“We are currently reviewing the Attorney General’s announcement. As a company, we take the safety of our users very seriously,” a Character.AI spokesperson said in a statement to TechCrunch. “We welcome working with regulators, and have recently announced we are launching some of the features referenced in the release including parental controls.”
Character.AI on Thursday rolled out new safety features aimed at protecting teens, saying these updates will limit its chatbots from starting romantic conversations with minors. In the past month, the company has also started training a new model specifically for teen users — one day, it hopes to have adults using one model on its platform, while minors use another.
These are just the latest safety updates Character.AI has announced. The same week that the Florida lawsuit became public, the company said it was expanding its trust and safety team and had recently hired a new head for the unit.
Predictably, the issues with AI companionship platforms are arising just as they’re taking off in popularity. Last year, Andreessen Horowitz (a16z) said in a blog post that it saw AI companionship as an undervalued corner of the consumer internet that it would invest more in. A16z is an investor in Character.AI and continues to invest in other AI companionship startups, recently backing a company whose founder wants to recreate the technology from the movie “Her.”
Reddit, Meta and Discord did not immediately respond to requests for comment.