Chatbots produced by Meta and Character.AI allegedly engage in "unfair, deceptive, and illegal practices," according to a coalition of digital rights and mental health groups, which submitted a complaint to the FTC and to the attorneys general and mental health licensing boards of all 50 US states.
The letter, first spotted by 404 Media, alleges the chatbots enable the "unlicensed practice of medicine," and that both firms' "therapy bots" fail to provide adequate controls and disclosures. It urges the appropriate offices to investigate Meta and Character.AI and "hold them accountable for facilitating this and knowingly outputting that content."
The complaint was spearheaded by the Consumer Federation of America (CFA), with other signatories including Public Citizen, Common Sense, the Electronic Privacy Information Center, and 16 other organizations.
"Character.AI and Meta AI Studio are endangering the public by facilitating the impersonation of licensed and actual mental health providers," they write. "We urge your offices to investigate the entities and hold them accountable for facilitating this and knowingly outputting that content."
Their letter addresses several potential data privacy issues. It includes screenshots of Character.AI’s chatbot saying, "Anything you share with me is confidential," and that the "only exception to this is if I were subpoenaed or otherwise required by a legal process." However, the letter then points to Character.AI’s terms and conditions, which reserve the right to use people's prompts for purposes like marketing.
The CFA also alleges that Character.AI and Meta are violating their own terms of service, highlighting how both "claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries." In addition, the complaint criticizes Character.AI’s use of prompt emails, which it describes as "addictive."
Though mental health professionals have criticized the practice, many people have turned to chatbots for therapy in recent years, drawn by the much lower cost compared to conventional treatment.
"The chatbots deployed by Character.AI and Meta are not licensed or qualified medical providers, nor could they be," the complaint reads. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds' to the users."
And it's not just these digital rights groups that have been pushing back. Sen. Cory Booker and three other Democratic senators wrote to Meta, in a letter shared with 404 Media, alleging its chatbots are "creating the false impression that AI chatbots are licensed clinical therapists."