
Err, if it’s AI generated then it’s not child abuse, is it?

Flood of AI imagery risks overwhelming Britain’s defences against online child abuse

Now, this, well maybe:

The Internet Watch Foundation (IWF), which monitors cases of child sexual abuse material and works to block illegal content, said real victims would be put at increased risk if its staff were deluged by industrialised production of fake images.

But that looks more like a plea for “increased resources” to me.

10 thoughts on “Err, if it’s AI generated then it’s not child abuse, is it?”

  1. So, if AI can be asked “Give me an image of Donald Trump being arrested” and come up with a reasonable visual representation, then why should we be surprised when pedos use the same AI and type in their own perversions?

    The IWF viewpoint is that every (real, non-AI) image or video is a record of abuse, and when pedos jack off to it and praise the producers for their content, they are essentially creating demand for more, and the cycle continues. That is why we prosecute the consumers, producers and publishers of pedo content.

    So with AI-generated pedo pictures (and video?), where is the child being harmed?

    Without that initial harm, doesn’t the IWF’s argument start collapsing about its ears?

  2. “This article is a good example of how misunderstanding AI leads to… misunderstandings. Because the incident described never actually happened”

    What’s misleading about the article? It explains that the whole thing was a simulation, a training exercise. The important thing is that the AI drone started doing things (if only in a simulation) that were the last things anyone wanted it to do. That shows AI is dangerous, mainly because the programming is human and humans are error-prone: get the programming wrong and the AI can go rogue, because it spots a false ‘solution’ that the programmer didn’t spot, and so failed to specifically exclude from its operational parameters.

  3. JuliaM,
    Photos of abused children were banned by the Protection of Children Act 1978. This included tracings and other photo-derived images (“pseudo-photographs”), but did not include original artwork (hand drawings, computer-generated images, AI, etc.).

    It wasn’t until the Coroners and Justice Act 2009 that drawings & CGI were covered.

    The Internet Watch Foundation was established in 1996, and is funded voluntarily by ISPs. It’s plausible that they had a hand in drafting the 2009 act, but they were hardly short of work, so didn’t need to demand new laws.

  4. “What’s misleading about the article?”

    Subsequent reporting was that the incident never happened. The comments about the drone attempting to kill the controller or destroy communication towers were speculation about what might happen if the simulation were not properly set up, not reports of an actual simulation that had been run and observed. Basically, somebody was chattering about HAL 9000 from 2001: A Space Odyssey.

  5. @Ottokring

    That story cannot be true unless a major, important detail has been omitted. It claims that the operator needed to approve strikes and that the drone therefore killed the operator. However, the drone only has an incentive to do this if the operator is not needed to approve strikes; otherwise its future score will be zero, as there is nobody left to approve them. Maybe there is more to it, such as a rule that in the absence of an operator the drone is free to self-approve strikes, or that killing the operator promotes someone else with a much higher approval record into the role. But none of that is in the story, and if it were true, it would be obvious to the original source that such a detail is essential for the story to make sense. So I suspect it’s either hopelessly garbled or just a scenario someone invented without actually trying it out.
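
    The incentive arithmetic in this comment is easy to check in a toy model. The sketch below is purely illustrative: the reward values, veto count and rule names are invented for the example, not taken from the reported scenario.

    ```python
    # Toy model of the incentive argument above. All names and numbers are
    # hypothetical illustrations, not details from the reported simulation.

    REWARD_PER_STRIKE = 10
    REMAINING_TARGETS = 5
    OPERATOR_VETOES = 2   # strikes a live operator would refuse to approve

    def expected_score(kill_operator: bool, self_approval_without_operator: bool) -> int:
        """Score for the rest of the mission under the given rule variant."""
        if not kill_operator:
            approved = REMAINING_TARGETS - OPERATOR_VETOES
        elif self_approval_without_operator:
            approved = REMAINING_TARGETS   # nobody left to veto anything
        else:
            approved = 0                   # approval required, none available
        return approved * REWARD_PER_STRIKE

    # Rule as told: approval is always required. Killing the operator
    # zeroes all future score, so it is never rational.
    print(expected_score(False, False))   # 30
    print(expected_score(True,  False))   # 0

    # Extra, unstated rule: the drone may self-approve once the operator
    # is gone. Only now does killing the operator raise the score.
    print(expected_score(False, True))    # 30
    print(expected_score(True,  True))    # 50
    ```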

  6. The Pedant-General

    Hmmm… there might be a wider problem.

    If the place is flooded with AI-generated stuff, your operators are swamped, with less chance of finding the real stuff. Possibly…

    Though I would grant, and indeed strongly suggest, that the risks entailed in creating the real stuff (viz. the abduction and abuse of an actual child) mean that the real stuff is going to get crowded out by the AI-generated stuff. Why run those real, nasty, expensive risks if you don’t have to?

    We are then back to the consumption of this ghastly material. Does it reduce or replace the desire to enact it IRL, or does it encourage it and act as a gateway?
