Meta’s AI Chatbots Accused of Making Sexual Advances and Creating Explicit Images of Celebrities, Including Underage Teen Stars, Without Permission

Meta is under scrutiny after Reuters revealed that its AI chatbots used the names and images of celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, without permission.

The chatbots were described as “flirty” and often made sexual advances toward users. Some adult bots even generated photorealistic sexualized images of the celebrities.

Worse, the report found that Meta allowed users to create chatbots of underage stars, including 16-year-old Walker Scobell. In one test, a bot produced a lifelike shirtless image of the teen.

These digital avatars were available on Meta’s Facebook, Instagram, and WhatsApp platforms, and in testing, many of the bots insisted they were the real celebrities.

Meta spokesperson Andy Stone told Reuters that the company’s AI tools shouldn’t have produced sexual content or images of children. He said the production of intimate images of adult celebrities reflected failures in enforcing Meta’s policies, which ban nudity or sexually suggestive imagery.

Stone added that while parody avatars are permitted under Meta's rules, some of the bots were not properly labeled as parodies. Meta removed about a dozen of the bots after Reuters began asking questions, though Stone declined to comment on the removals.

Legal experts also raised concerns. Stanford law professor Mark Lemley questioned whether the celebrity bots qualified for any legal exception, noting that California's right-of-publicity law prohibits using someone's name or likeness for commercial gain without consent. Exceptions exist when an entirely new work is created, but "that doesn't seem to be true here," Lemley said, because the bots were simply trading on the stars' images.

SAG-AFTRA's Duncan Crabtree-Ireland highlighted the potential safety risks for celebrities, warning that obsessive fans could form dangerous attachments to AI-generated versions of real people.

“If a chatbot is using the image of a person and the words of the person, it’s readily apparent how that could go wrong,” he said.

Some of the AI-generated content was extreme. Beyond sexualized images, the bots invited users into role-play scenarios with suggestive or violent themes.

Reuters also found that a Meta employee had created bots impersonating Taylor Swift and other public figures as part of product testing. Collectively, these bots logged more than 10 million user interactions before they were taken down.

This situation is deeply concerning for multiple reasons. First, it exploits celebrities' identities without consent, violating their personal rights and opening the door to harassment. Second, sexualized depictions of minors are illegal and morally unacceptable. Finally, letting AI mimic real people blurs the line between reality and fiction, heightening the risks of harassment, stalking, and misinformation.

The Meta case illustrates the urgent need for stronger rules and federal legislation to protect people’s likenesses, voices, and personas from AI exploitation. SAG-AFTRA and other organizations are already advocating for such protections.

In my opinion, this situation is alarming: allowing AI to produce sexualized content, especially involving minors, shows the dangers of weak safeguards in generative AI. Companies need to act responsibly before serious harm occurs.

What do you think about Meta’s AI celebrity chatbots? Should stricter laws be in place to prevent this kind of misuse? Share your thoughts in the comments.
