astralikacastle:
inneskeeper:
el-shab-hussein:
vague-humanoid:
cyberglittter:
being a woman is fucking exhausting. everything is created to disgrace our lives. this is horrifying.
https://www.washingtonpost.com/technology/2020/10/20/deep-fake-nudes/
The website promises to make “men’s dreams come true.” Users upload a photo of a fully clothed woman of their choice, and in seconds, the site undresses them for free. With that one feature, it has exploded into one of the most popular “deepfake” tools ever created.
Far more advanced than the now-defunct “DeepNude” app that went viral in 2019, this new site has amassed more than 38 million hits since the start of this year, and has become an open secret in misogynist corners of the web. (HuffPost is not naming the site in order to avoid directing further traffic to it.) It went offline briefly Monday after HuffPost reached out to its original web host provider, IP Volume Inc., which quickly terminated its hosting services. But the site was back up less than a day later with a new host — as is often the case with abusive websites.
Hany Farid, a computer scientist at UC-Berkeley who specializes in digital-image forensics and was not involved in the original pix2pix research, said the fake-nude system also highlights how the male homogeneity of AI research has often left women to deal with its darker side.
AI researchers, he said, have long embraced a naive techno-utopian worldview that is hard to justify anymore, by openly publishing unregulated tools without considering how they could be misused in the real world.
“It’s just another way people have found to weaponize technology against women. Once this stuff gets online, that’s it. Every potential boyfriend or girlfriend, your employer, your family, may end up seeing it,” Farid said. “It’s awful, and women are getting the brunt of it.”
“Would a lab not dominated by men have been so cavalier and so careless about the risks?” he added. “Would [AI researchers] be so cavalier if that bad [stuff] was happening to them, as opposed to some woman down the street?”
“AI researchers, he said, have long embraced a naive techno-utopian worldview that is hard to justify anymore, by openly publishing unregulated tools without considering how they could be misused in the real world.”
I don’t wanna tell you this is Not deeply wrong. It’s deeply wrong.
I just, people have been manually photoshopping celebrities onto nudes and writing porn about random real-life people for a good long while now. This isn’t new; it just makes it near-effortless for any individual bad actor.
That’s an important part of the anti-AI movement, actually, so genuinely thanks for bringing it up!
Because you’re right! I’ve seen a lot of people defend AI against criticism of stuff like deepfakes because “we’ve been creating misinformation since the days of film!” They say that yes, misinfo is bad, but it’s important to remember that we’ve been doing what AI can currently do for decades and longer. It’s a new tool in the toolbox, as it were, but the base situation hasn’t changed: you need to fact-check, cite sources, and remember that people and governments lie on the internet.
The problem with using this argument to wave away anti-AI criticism is that it doesn’t take into account one very important factor:
Ease of use.
Sure, we’ve been doctoring photos since the days of film and darkrooms. We’ve been photoshopping celebrity faces onto naked bodies for as long as we’ve had Photoshop. Revenge porn exists, and so does fake revenge porn! There are altered photos of women hugging cops that never needed DALL-E to exist. All of this is true.
But before AI as it is now, you had to actually be really good at doctoring images to spread convincing misinformation. Bad and mediocre Photoshop jobs are extremely easy to notice, and the common ways of hiding a bad Photoshop job are also well known; when they’re used, the image is treated with suspicion. Artifacting, artificial blur, and low-pixel-count, low-quality images have been God’s gift to cryptids since we first saw Bigfoot walking that way. In recent years we’ve even gotten creative and combined all that with 3D animation to create videos of that sort of thing! Neat!
Now all I need to do to create and spread extremely convincing misinformation is type “Vladimir Putin and Bernie Sanders shaking hands” into Stable Diffusion. Now all I need to do to make porn of literally any woman ever to exist on the internet is use the new model discussed in the report above. I don’t need to spend hours working on a single image (nor the months and years it takes to learn to do that well and efficiently). I can generate dozens and dozens in a couple of hours, all just by clicking a button or two, writing an alt-text for an image that never existed, or uploading a photo or two.
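(To put “a button or two” in perspective, here’s a minimal sketch of what that entire workflow looks like, assuming the open-source Hugging Face diffusers library, a CUDA GPU, and the publicly released Stable Diffusion v1.5 weights. None of this comes from the article; it’s just one common local setup:

    # A minimal sketch of the whole text-to-image workflow, start to finish.
    # Assumptions (not from the article): pip install torch diffusers,
    # a CUDA GPU, and the public Stable Diffusion v1.5 checkpoint.
    import torch
    from diffusers import StableDiffusionPipeline

    # Download and load the publicly hosted model weights.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # One sentence of prompt text, a few seconds of compute per image.
    image = pipe("Vladimir Putin and Bernie Sanders shaking hands").images[0]
    image.save("never_happened.png")

That’s the whole thing. No darkroom, no years of Photoshop practice, no hours of manual work per image, which is exactly the point.)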
The problem right now with AI ethics isn’t that people are able to plagiarize and steal from artists, or that people are able to manufacture misinformation via doctored images.
The problem right now is “now everyone can do it extremely quickly, with zero effort and nearly no oversight, legal or otherwise, to stop them if they want.”