Annenberg Radio News

Students and experts react to Taylor Swift AI drama

The use of AI on social media platforms has sparked a call for policy to be put in place.

Taylor Swift attends an "In Conversation With..." event at the Toronto International Film Festival on Sept. 9. (Photo courtesy of AP)

Explicit deepfake AI images of Taylor Swift circulated on X this past weekend. The rise of artificial media has raised questions about its ethical implications and what social media platforms and the government can do. Jack Krueger, a freshman communications major at USC, shared his reaction to seeing the AI-generated images of Swift.

Jack Krueger: I knew immediately they were fake, because they were so obscene. And, you know, you wouldn’t imagine Taylor Swift doing what was portrayed in the images. But I also did see immediately that it had like over 40 million views, which I think was the main shocker, just like how widespread it became, how many people were sharing it to like maybe their friends or other people and commenting and reposting it like that was kind of scary.

Millions viewed the images in the 17 hours they were live on the platform formerly known as Twitter. X's response included temporarily blocking searches for Swift's name. Now, many people are wondering about the legal ramifications of AI and what could happen next.

Lily Li is a cybersecurity lawyer and the founder of Metaverse Law. The firm focuses on privacy, artificial intelligence, and cybersecurity law. Li gave her professional opinion on the incident.

Lily Li: Yes. So there are different ways that Taylor could proceed against the individuals that, distributed the deep fakes. There are several states that do allow individuals to pursue private right of actions, especially if it’s for pornographic depictions of an individual in a deepfake. The issue often, though, with these type of deepfakes in their distribution is that it’s unclear who created the deepfakes, and oftentimes the individuals responsible, they might even be overseas, or they might be using a VPN or other means to kind of hide who they are and whose accounts.

With limited courses of action available, the burden is placed on social media users to decipher what is real and what is fake. Gabriel Kahn, a USC Annenberg professor and AI expert, weighed in on the chaos.

Gabriel Kahn: It’s the fact that these tools have been unleashed onto the public with, sort of no restrictions, and platforms don’t have really the, the tools or much interest in trying to police these potential problems.

Kahn thinks that AI tools might be met with restrictive policy in the near future.

Gabriel Kahn: There’s easy ways to just completely humiliate somebody. And I don’t see any practical policy approach on the part of the platforms who end up distributing this kind of material to really combat that. That’s obviously going to come very soon.

With artificial intelligence seen as the future, we continue to see its drawbacks and social implications. Deepfakes are just one of the issues that social media platforms and the government will eventually have to address in their approach to artificial intelligence protections.

For Annenberg Media, I’m Robert Westermann.