It seems crazy to me, but I've seen this concept floated on several different posts. There seem to be a number of users here who think there is some way AI-generated CSAM will reduce the number of real-life child victims.
Like the comments on this post here.
https://sh.itjust.works/post/6220815
I find this argument crazy. I don't even know where to begin with how many ways this could go wrong.
My views (which are apparently not based in fact) are that AI CSAM is not really that different from "actual" CSAM. It still causes harm when viewed, and it is still based on the further victimization of the children involved.
Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators by giving them an outlet that doesn't involve real, living victims completely ignores the reality of how AI content is created.
Some have compared pedophilia and child sexual assault to drug addiction, which is dubious at best, and pretty offensive imo.
Using drugs has no inherent victim. And it is not predatory.
I could go on, but I'm not an expert or a social worker of any kind.
Can anyone link me articles talking about this?
I’m just gonna put this out here and hope not to end up on a list:
Let’s do a thought experiment and be empathetic with the human that is behind the predators. Ultimately they are sick and they feel needs that cannot be met without doing something abhorrent. This is a pretty fucked up situation to be in. Which is no excuse to become a predator! But understanding why people act how they act is important to creating solutions.
Most theories about humans agree that sexual needs are pretty important for self-realization. For the pedophile this presents two choices: become a monster or never reach self-realization. We have got to accept that this dilemma is the root of the problem.
Before, there was only one option for a sort of middle-ground solution: video and image material which the consumer could rationalize as being not as bad. Note that this isn't my opinion; I agree with the popular view that it is still harming children and needs to be illegal.
Now, for the first time, there is a chance to cut through this dilemma by introducing a third option: generated content. This still uses existing CSAM as a basis, but so does every database that is used to find CSAM for prevention and policing. The actual pictures and videos aren't stored in the AI model and don't need to be stored after the model has been created. With that model, more or less infinite new content can be created, which imo harms the children significantly less directly. This is, imo, different from actual CSAM because no one can tell who is and isn't in the base data.
Another benefit of this approach has to do with the reason why CSAM exists in the first place. AFAIK most of this material comes from situations where the child is already being abused. At some point the abuser recognises that CSAM can get them monetary benefits and/or access to CSAM of other children. This is where I will draw a comparison to addiction, because it's kind of similar: people doing illegal stuff because they have needs they can't fulfill otherwise. If there were a place to get the "clean" stuff, far fewer people would go to the shady corner dealer.
In the end I think there is a utilitarian argument to be made here. Weighed against the far-removed damage that generating CSAM via AI still deals to the actual victims, we could help people not become predators, help predators not reoffend, and most importantly prevent, or at least lessen, the amount of further real CSAM being created.
Except there is a good bit of evidence showing that consuming porn actively changes how we behave around sex. By creating CSAM with AI, you create the depiction of a child that is a mere object for sexual gratification. That fosters a lack of empathy and an egocentric, self-gratifying viewpoint. I think that can be said of all porn, honestly. The more I learn about what porn does to our brains, the more problematic I find it.
I agree with this.
And I agree with this especially. Turns out a brain that was/is at least in part there to get us to procreate isn’t meant to get this itch scratched 24/7.
But to answer your concern, I will draw another comparison with addiction: giving addictive drugs out like candy isn't wise, just as it wouldn't be wise to give everyone access to generated CSAM. You'd need a control mechanism so that only the people who need access get it. Admittedly this will deter a few people from getting their fix from the controlled instances compared to completely free access. With drugs this seems to lead to a decrease in the amount of street-sold drugs, though, so I see no reason this wouldn't be true, at least to some extent, for CSAM.
I'm an advocate of safe injection sites, so I will agree somewhat here. Safe injection sites work because they identify addicts and aggressively supply them with resources to counteract the need for the addiction in the first place, all while encouraging less and less use. This is an approach that could have merit for pedophiles, but some unique issues pop up with it as well: to consume a drug, the drug must enter the body somehow, where it is metabolized.
CSAM, on the other hand, is taken in simply by looking at it. There is no "gloves on" approach to generating or handling the content without absorbing it; the best that can be hoped for is to have it generated by someone completely "immune" to it, which raises questions about how "sexy" they could make the content. If it doesn't "scratch the itch," the addicts will simply turn back to the real stuff.
There is a slim argument to be made that you could actually create MORE pedophiles through classical conditioning by exposing non-pedophilic people to erotic content paired with what looks like children. You could of course have it produced and handled by recovering or in-treatment pedophiles, but that sounds like it defeats the point of limited access entirely and is therefore still bad, at least for the ones in charge of distribution.
Additionally, digital content isn't destroyed upon consumption like a drug, so you also have the smaller but still real problem of content diversion, where content made for the program is spread to people not getting the help it was meant to be paired with. This is an issue, of course, but could be rationalized as worth it so long as at least some pedophiles were being treated.
Yes, there are a lot of open questions around this, especially about the who and how of generation, and tbh it makes me a bit uncomfortable to think about a system like this in detail, because it would have to include rating these materials on a "sexiness" scale, which feels revolting.
[This comment has been deleted by an automated system]
You make a very similar argument to @Surdon's, and my answer is the same (in short; my answer to the other comment is longer):
Yes, giving everyone access would be a bad idea. I parallel it to controlled substance access, which reduces black-market drug sales.
You do have some interesting details though:
This has been mentioned a few times, mostly with the idea of mixing "normal" photos of children with adult porn to generate CSAM. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.
This is more or less my expectation too, but I wouldn't count on the research coming out in a few years. There isn't much incentive to do actual research on the topic afaik: there is little to be gained because of the probable reaction of the regulators, and much to lose with such a hot topic.
[This comment has been deleted by an automated system]
I didn't know this was a thing, tbh. I knew that you could get them to generate adult porn or combine faces with adult porn; I didn't know they could already create realistic CSAM. I assumed they used the original material to train one of the open models. Well, that's even more horrifying.
Didn't even think about that. Exchanging these models will be significantly less risky than exchanging the actual material. Images are being scanned by cloud storage providers, and apparently archives with weak passwords are too. But no one is going to run an AI model just to see whether it can produce CSAM.
[This comment has been deleted by an automated system]