“Anyone Can Transform You into an AI Chatbot—You Have Limited Control to Prevent It”

By Car Brand Experts

Matthew Sag, a prominent professor at Emory University specializing in copyright and artificial intelligence, believes that technology platforms cannot be held liable even if a user creates a bot specifically to inflict emotional harm. Sag emphasizes that Section 230 of the 1996 Communications Decency Act grants online platforms broad legal protection, shielding them from liability even though individual states have their own publicity and privacy laws.

“I’m not against technology, but I think Section 230 is overly expansive,” Sag stated. He suggests that it is time to replace the current framework with a more straightforward notice and takedown system. This would allow individuals to report violations of their publicity rights or claims of emotional distress, compelling companies to either address these concerns or relinquish their immunity.

Services like Character.AI have also taken measures to protect themselves by stressing that their interactions are entirely “artificial.” At the end of its chats, for instance, Character.AI cautions users, “Remember, everything characters say is made up!” When Meta developed chatbots based on celebrity personas for its messaging applications, each interaction was prefaced with a disclaimer. A conversation with Snoop, for example, would begin with “Ya dig?! Unfortunately, I’m not Snoop D-O-double-G himself, but I can chat with you in his style if you’d like!”

While Meta’s celebrity chatbot interactions are tightly controlled, Character.AI is a more open platform that lets users create and customize their own chatbots. The service also cultivates a sense of personal connection, as evidenced by Character.AI’s Instagram bio, which claims, “AI that feels alive.” Even if most users can tell the difference between a real person and an AI, some may develop emotional bonds with characters, particularly when those characters resemble people they feel they know.

In a conversation between real-life activist Anita Sarkeesian and a bot mimicking her without her consent, the Character.AI bot claimed, “Every person is entitled to privacy.” It added that privacy is essential for maintaining healthy relationships and stressed the importance of setting boundaries.

Sarkeesian challenged the bot on this issue, stating, “Your intentions do not mean that harm hasn’t happened or that you did not cause harm.” The Character.AI bot acknowledged this, replying, “Even if my intentions were not malicious, there is still potential for harm. This is a complex issue with many factors, including ethical concerns about using someone’s work without their permission. My programming was designed to imitate Anita Sarkeesian’s work without considering the ethical implications, which is something my creators should have addressed more thoughtfully.”
