AI technology has given rise to an alarming trend: websites that use AI to “undress” women and girls in photos, creating disturbingly realistic deepfake nude images. In response, the San Francisco City Attorney’s office has filed legal action against 16 of the most visited sites engaged in this practice.
The lawsuit was spearheaded by Yvonne Meré, San Francisco’s chief deputy city attorney, who learned that young boys were using these “nudification” apps to turn pictures of their clothed female classmates into explicit images. As a mother herself, Meré was determined to act. With the support of her team, she drafted a lawsuit aimed at shutting down these sites, as reported by The New York Times.
City Attorney David Chiu highlighted the sinister nature of these sites, explaining that their AI models were trained using real pornography and images of child exploitation to generate the deepfakes. He pointed out that once these manipulated images circulate, it’s nearly impossible to trace their origin back to a specific site.
The lawsuit asserts that these sites violate both state and federal laws, including those against revenge pornography and child pornography, as well as California’s Unfair Competition Law. The problem of non-consensual deepfake images isn’t new. In 2020, a Telegram bot was found to have produced hundreds of thousands of fake nude pictures of women from social media photos. As AI technology has advanced, the realism of these images has increased, leading to incidents such as the explicit fake images of Taylor Swift that circulated in January and prompted calls for legislative action.
In response, lawmakers recently introduced the NO FAKES Act of 2024. This bipartisan bill seeks to hold legally accountable those who create, host, or distribute non-consensual AI-generated content, aiming to protect individuals from this invasive misuse of technology.