I know it when I see the URL

The phrase in today’s screed is adapted from an infamous 1964 Supreme Court decision, in which Justice Potter Stewart declined to define obscenity beyond saying he knew it when he saw it. In a report issued today by Stanford researchers, the new version of the phrase, applied to materials depicting abused children, comes down to recognizing a URL. And if you thought Stewart’s phrase made it hard to create appropriate legal tests, we are in for an even harder time figuring out how to prevent this material in the age of GenAI and machine learning. Let me explain.

If you are trying to do research into what is called child abuse materials (abbreviated CSAM, and you can figure out the missing word on your own), you have a couple of problems. First, you can’t actually download the materials to your own computer, not unless you work for law enforcement and do the research under conditions akin to those intelligence operatives use in a secured facility (the now-infamous SCIF).

This brings me to my second point. Since you can’t examine the actual images, what you are looking at are collections of URLs that point to them. URLs are also used instead of the images themselves because of copyright restrictions. And that means looking at metadata, which can be in a variety of languages, because let’s face it, CSAM knows no geographic boundaries. The images are found by sending out what are called “crawlers,” programs that examine every web page they can find at a point in time.
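
To make that concrete, here is a minimal sketch of the kind of metadata-only scan I’m describing. This is my own illustration, not the Stanford team’s actual pipeline, and every filename and field name in it is hypothetical: each record carries a URL, a caption, and a precomputed perceptual hash, and that hash is checked against a list of known-bad hashes of the sort a clearinghouse would supply.

```python
import csv

# Hypothetical known-bad perceptual hashes, e.g. exported from a clearinghouse
# hash list. In practice, access to these lists is tightly controlled.
with open("known_bad_hashes.txt") as f:
    known_bad = {line.strip() for line in f if line.strip()}

suspect_urls = []
# Hypothetical metadata dump: one row per image with url, caption, and phash.
with open("dataset_metadata.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Only metadata is touched -- the image itself is never downloaded.
        if row["phash"] in known_bad:
            suspect_urls.append(row["url"])

# Matches get reported to the appropriate authority, not inspected locally.
print(f"{len(suspect_urls)} URLs flagged for reporting")
```

The point is that the researcher never touches an image, only the pointers and fingerprints that describe it.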

Next, and this comes as no surprise to anyone who has spent at least one minute browsing the web, there is a lot of imagery out there to sift through: billions of files, as it turns out. Among them, the Stanford report found more than three thousand suspected CSAM images. That doesn’t seem like a lot, but when you consider that they probably didn’t catch most of it (the authors acknowledge a severe undercount), it is still depressing.

Also, we (and by that I mean most everyone in the world) are too late to prevent this stuff from being disseminated. The explanation is more complicated and has to do with the way GenAI models and their training data have been constructed. The optimum time to have intervened would have been, oh, two or more years ago, back before generative AI became popular.

The main villain in this drama is a dataset called LAION-5B, from the Large-scale Artificial Intelligence Open Network, which contains 5.85 billion image-text pairs (stored as URLs plus captions and other metadata), about half of which are in English. This is the data that has been used to train popular AI image tools such as Stable Diffusion, Google’s Imagen, and others.

The Stanford paper lays out the problems in examining LAION, along with the methodology and tools the researchers used to find the CSAM images. They found that anyone holding a full copy of this dataset has thousands of illegal images in their possession. “It is present in several prominent ML training data sets,” they state. While this is a semi-academic research paper, it is notable in that it offers some mitigation measures for removing the material. The various steps are mostly ineffective, and some are difficult to pull off, especially if your goal is to remove the material entirely from a trained model itself. I won’t get into the details here, but one conclusion is most troubling:

GenAI models are good at creating content, right? That means someone can take a prototype CSAM image and have the model riff on it, generating variation after variation, say the face of the same child in a variety of poses. I am sure that is being done in one forum or another, which makes me sick.

Finally, there is the problem of timing. As the report puts it, “if CSAM content was uploaded to a platform in 2018 and then added to a hash database in 2020, that content could potentially stay on the platform undetected for an extended period of time.” You have to keep your hash data up to date, which is certainly being done, but there is no guarantee that your AI is trained on the most recent data, a problem that has been well documented. The researchers recommend that Hugging Face and other dataset hosts provide better reporting and feedback mechanisms for users who find these materials. UC Berkeley professor Hany Farid wrote about this issue a few years ago, saying these materials “should not be so easily found online. Neither human moderation nor AI is currently able to contend with the spread of harmful content.”
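
To see why that timing gap matters, here is a small sketch, again my own and entirely hypothetical rather than anything from the report: every item remembers when it was last scanned, and anything scanned before the hash database’s most recent update gets checked again.

```python
from datetime import date

# Hypothetical hash database and the date it was last updated.
known_bad = {"hash-aaa", "hash-bbb"}      # made-up stand-ins for real hashes
hash_db_updated = date(2020, 6, 1)        # e.g. a hash added in 2020

# Hypothetical platform content: perceptual hash plus the date of its last scan.
items = [
    {"phash": "hash-aaa", "last_scanned": date(2018, 3, 14)},  # uploaded and scanned in 2018
    {"phash": "hash-zzz", "last_scanned": date(2021, 1, 2)},
]

# Anything last scanned before the database update must be re-checked;
# otherwise a 2018 upload matching a 2020 hash sits there undetected.
for item in items:
    if item["last_scanned"] < hash_db_updated and item["phash"] in known_bad:
        print("Stale match, flag for review:", item["phash"])
```

Skip that re-scan and the 2018 upload in the example sits there indefinitely, which is exactly the scenario the report describes.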
