I came across Nepenthes today in the comments under a post about AI mazes. It has an option to purposefully generate not just an endless pit of links and pages, but also to deterministically generate random-looking, human-like text for those pages, poisoning the LLM scrapers as they sink into the tarpit.
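To give a sense of the trick (this is NOT Nepenthes' actual code, just a toy Python sketch of the idea: seed a word-level Markov chain from the URL so the same page always serves the same babble without storing anything; the corpus filename is made up):

```python
# Toy illustration, not Nepenthes' real implementation.
import hashlib
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def page_text(chain: dict, url: str, n_words: int = 120) -> str:
    """Deterministic 'random' text: the RNG is seeded from the URL."""
    rng = random.Random(hashlib.sha256(url.encode()).digest())
    word = rng.choice(list(chain))
    out = [word]
    for _ in range(n_words - 1):
        followers = chain.get(word) or list(chain)  # dead end: restart anywhere
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# "some_public_domain_text.txt" is a placeholder for whatever corpus you train on.
chain = build_chain(open("some_public_domain_text.txt").read())
print(page_text(chain, "/maze/a1b2c3"))
```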
After reading that, I thought, could you do something similar to poison image scrapers too?
Like if you have an art hosting site, as long as you can get an AI to fall into the tarpit, you could replace all the art it thinks should be there with distorted images from a dataset.
Or just send it to a kind of “parallel” version of the site that replaces (or heavily distorts) all the images but leaves the text descriptions and tags the same.
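Something like this hand-wavy Flask sketch, maybe: same URLs, same text and tags, different pixels for suspected scrapers. The bot check here is deliberately naive (a few real bot user-agent strings), and the folder names are invented; real scraper detection is a whole project on its own.

```python
from flask import Flask, request, send_from_directory

app = Flask(__name__)

SCRAPER_HINTS = ("GPTBot", "CCBot", "Bytespider")  # partial, illustrative list

def looks_like_scraper(req) -> bool:
    ua = req.headers.get("User-Agent", "")
    return any(hint in ua for hint in SCRAPER_HINTS)

@app.route("/art/<path:name>")
def art(name):
    # send_from_directory guards against path traversal for us.
    folder = "poisoned" if looks_like_scraper(request) else "originals"
    return send_from_directory(folder, name)
```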
I realize any automated image scraper probably has some sort of filter that attempts to sort out low-quality images, but if one used images similar to the expected content, that might be enough to get through the filter.
I guess if someone really wanted to poison a model, generating AI replacement images would probably be the most effective way to speed up model decay, but that has much higher energy and processing power overhead.
Anyway, I’m definitely not skilled/knowledgeable enough to make this a thing myself even just as an experiment. But I thought you all might know if someone’s already done it, or you might find the idea fascinating.
What do you think? Any better ideas / suggestions for poisoning art scraping AI?
There’s Nightshade to generate poisoned versions of images, but I don’t know if anyone has made a tarpit yet.
I’m planning on making my own tarpit software just so we can have multiple implementations and be harder to detect! Seems like a good project to learn some Rust!
Probably going to name it Asbestos.
I guess diversity of tactics is probably a good way to keep scrapers from avoiding the traps we set. Good on you for helping out. Also, I like the name lol
On a slightly unrelated note, is Rust a web dev language? I’ve been meaning to learn it myself since I’ve heard it’s basically a better, modern alternative to C++.
Upload 5 million pictures of people with 4 fingers instead of 5.
You should make one using AI: on every page load it generates a random image and gives it a caption of something completely different.
AI wars
It might be easier to make a few images with some anti-AI patterns and then give them randomly generated file names and paths. If needed, you could do some subtle transformations each time, but generating a brand-new image every time might be more effort than it’s worth.
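A sketch of that "few images, many names" idea, assuming Pillow: a small pool of pre-made poison images, each response given a throwaway hex name and a tiny brightness/rotation jiggle so no two responses are byte-identical. The pool directory and jiggle amounts are invented for illustration.

```python
import pathlib
import random
from PIL import Image, ImageEnhance

POOL = list(pathlib.Path("poison_pool").glob("*.png"))  # assumed directory

def decoy_image():
    """Return (random_name, slightly_transformed_image)."""
    img = Image.open(random.choice(POOL)).convert("RGB")
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.95, 1.05))
    img = img.rotate(random.uniform(-1.0, 1.0))  # barely perceptible tilt
    name = "".join(random.choices("0123456789abcdef", k=16)) + ".png"
    return name, img  # caller serves the image under this throwaway name
```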
Cloudflare does use AI to generate tarpit content, but it’s too expensive to run for every request. IIRC, they periodically change and cache the output to throw off the tarpit detectors.
One thing most of them DON’T do is change up the format of the page so that the placement and number of links are randomized.
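A toy version of what that could look like: link count and placement are drawn from an RNG seeded by the URL, so each page is stable (and therefore cacheable) but differs from page to page. All slugs and the word pool are invented for illustration.

```python
import hashlib
import random

def tarpit_page(url: str, word_pool: list[str], max_links: int = 30) -> str:
    rng = random.Random(hashlib.sha256(url.encode()).digest())
    paras = [" ".join(rng.choices(word_pool, k=40)) for _ in range(6)]
    # Scatter a random number of links into random paragraphs.
    for _ in range(rng.randint(5, max_links)):
        i = rng.randrange(len(paras))
        slug = "".join(rng.choices("abcdefghijklmnopqrstuvwxyz", k=10))
        paras[i] += f' <a href="/maze/{slug}">{rng.choice(word_pool)}</a>'
    return "<html><body>" + "".join(f"<p>{p}</p>" for p in paras) + "</body></html>"

print(tarpit_page("/maze/entry", ["lorem", "ipsum", "dolor", "sit", "amet"]))
```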
This thought did cross my mind, but I bet the quality filters check for the relevance of the words. And if they don’t already, it wouldn’t take long for them to implement a simple fix.
Generating an AI image based on the randomly generated text would satisfy this and still cause model decay, but in both cases, generating AI images is pretty costly, which means it’s not a very viable attack option for most people.
- Have some reasonably complex JS that conditionally shows an image. (This may not need to be complex; I’m not knowledgeable about AI scraping algos)
- Route all images through this js function
- Generate 100s of garbage images with captions.
- Add those to the HTML source (but don’t show them to users, per step 1)
Pseudo code:
<img src="/decoys/1f3a9c.png" alt="oil painting of a sunset" class="decoy">
<script>document.querySelectorAll('.decoy').forEach(el => el.remove())</script>
I’m not a web-dev (my liver can’t take the alcohol necessary to learn it), but you should get the idea.
Or just throw the garbage images into the existing AI maze.
Oh wow, I’m dense. I didn’t even think about the fact that scrapers probably don’t render the full webpage and instead just seek out images in the HTML lol
This seems like a much easier trap to set up than creating a tarpit and then serving bullshit images.
Would it negatively impact the loading times for regular users? Like would it take significant amounts of time for the webpage to load if you added hundreds of these hidden images?
I’m by no means knowledgeable on webdev stuff so I don’t know the performance implications.
I’d imagine auto-generating images that look meaningful but aren’t is a lot more involved than generating text. For images we have Glaze and Nightshade, which you can apply to your own pictures to protect (Glaze) and/or poison (Nightshade) them.
The images probably don’t have to look meaningful as long as it’s difficult to distinguish them from real images using a fast, statistical test. Nepenthes uses Markov chains to generate nonsense text that statistically resembles real content, which is a lot cheaper than LLM generation. Maybe Markov chains would also work to generate images? A chain could generate each pixel based on the previous pixel, or on its neighbors, or some such thing.
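A rough sketch of that idea: a first-order Markov chain over quantized grayscale pixel values, learned from one real image and sampled left-to-right. Plain Python loops, so it’s slow, and the filenames are placeholders; illustration only.

```python
import numpy as np
from PIL import Image

LEVELS = 32  # quantize 256 gray values down to 32 states

def train_chain(img_path: str) -> np.ndarray:
    """Count transitions between horizontally adjacent pixels."""
    px = np.asarray(Image.open(img_path).convert("L")) // (256 // LEVELS)
    counts = np.ones((LEVELS, LEVELS))  # +1 smoothing so no zero rows
    for row in px:
        for a, b in zip(row[:-1], row[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sample_image(probs: np.ndarray, w: int = 256, h: int = 256) -> Image.Image:
    """Generate each pixel conditioned on the pixel to its left."""
    rng = np.random.default_rng()
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        out[y, 0] = rng.integers(LEVELS)
        for x in range(1, w):
            out[y, x] = rng.choice(LEVELS, p=probs[out[y, x - 1]])
    return Image.fromarray(out * (256 // LEVELS))

sample_image(train_chain("real_artwork.png")).save("markov_noise.png")
```

A left-neighbor-only chain produces horizontal streaks rather than anything scene-like, but that might be exactly the point: it matches local pixel statistics cheaply, the same way Nepenthes’ text matches word statistics.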
I’d heard of Glaze before, and Nightshade seems useful, but only Glaze protects against mimicry, and the Nightshade page makes it sound like the researchers aren’t sure how well the two would work together.
It looks like Nightshade is doing what I described (though on a single-image basis): tricking the AI into believing the characteristics of one thing apply to another. But I’d imagine that poisoning could be much more potent if the constraint of “still looks the same to a human” were dropped.
If you know you’re feeding an AI, you can go all out on the poisoning. No one cares what it looks like as long as the AI thinks it’s valid.
As for the difficulties in generating meaningful images, it would certainly be more intense than Markov chain text generation, but I think it might not be that hard if you just modify the real art from the site.
Say you slapped a ton of Snapchat filters on an artwork, used a blur tool in random places, drew random line segments that are roughly the same color as their nearby pixels, and maybe shifted the hue and saturation. I bet small modifications like that could slip through quality filters but still do damage to the model.
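A quick Pillow sketch of that kind of cheap mangling (skipping the Snapchat-filters part): random blurred patches, a saturation nudge, and short line segments matched to nearby colors. Every count and threshold here is a guess, untested against any actual quality filter.

```python
import random
from PIL import Image, ImageDraw, ImageEnhance, ImageFilter

def mangle(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size

    # Blur a handful of random patches.
    for _ in range(8):
        x, y = random.randrange(w // 2), random.randrange(h // 2)
        box = (x, y, x + w // 4, y + h // 4)
        img.paste(img.crop(box).filter(ImageFilter.GaussianBlur(6)), box)

    # Nudge the saturation.
    img = ImageEnhance.Color(img).enhance(random.uniform(0.6, 1.4))

    # Short line segments roughly the color of their neighborhood.
    draw = ImageDraw.Draw(img)
    for _ in range(40):
        x, y = random.randrange(w), random.randrange(h)
        dx, dy = random.randint(-20, 20), random.randint(-20, 20)
        draw.line([(x, y), (x + dx, y + dy)], fill=img.getpixel((x, y)), width=2)

    img.save(dst_path)

mangle("original.png", "for_the_tarpit.png")
```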
Edit: Just realized this might sound like I’m suggesting that messing up the art shown on the site through more destructive means would be better than Glaze or Nightshade. That’s not what I meant.
Those edit suggestions were only for the art shown in the tarpit, so you’d only make those destructive modifications to the art you’re showing the AI scrapers. The source images shown to human patrons can remain unedited.
It’s probably not perfect, but it’s what we have right now. I only nightshade my stuff because I don’t think anyone will want to imitate my style anyway (I’m years and years away from that kind of recognition), and I think it’s more important to go on the offensive. Ideally, imho, everybody should at least nightshade every image file they upload (drawings, paintings, photographs), not just “art” but anything visual.
You’re looking for something like Nightshade. The chuds hell-bent on accelerating the destruction of all our energy resources are aware of it and capable of defending against it, so it’s not airtight. I don’t know whether image datasets are scraped or manually compiled, so YMMV; I do know the more poison you throw out, the better.
Looking at that thread, it seems to be largely small-scale fine-tuning where the data is cherry-picked… they’re not considering the corporations that create the models they’re fine-tuning in the first place.
That thread was hard to read. I do sometimes feel bad for people who don’t understand artists because they don’t realize they have the capability to be artists themselves.
I do realize that any poisoning of models won’t stop backups of the pre-decay models from being utilized, but if we make the web unscrapable, it will slow or even prevent art from being stolen in the future.
I highly doubt the big AI companies get people to screen the scraped images (at least not all of them) because the whole point in their mind is to remove the need to pay people for work lol
Benn’s video may give you ideas: https://youtu.be/xMYm2d9bmEA
Lol, LMAO even
Have an infographic
Nice straw man infographic, but I’m not sure how it’s relevant.
My post was about methods to poison art scraping models. I said nothing about my reasons for doing so; maybe I just like fucking up corpos. Maybe I just like thinking about interesting topics and hearing other people’s ideas.
Kind of sad that you’re worked up enough about this both to miss the point and to keep an infographic on hand just in case you get offended by anyone not praising generative AI.
If you do have any knowledge of how AI functions, I’d be happy to hear your thoughts on the topic which, again, is on how to poison models that use image scrapers, not the ethics of AI or lack thereof.
Fuck with their noise models.
Create a system that generates pseudorandom hostile noise (noise that triggers neural feature detection) and layers it on top of the image. This will create false neural circuits.
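For illustration, a dirt-simple version of that could look like the sketch below: pile up random high-frequency sine gratings (the kind of pattern early conv layers tend to respond to) and blend them over the image. This is a sketch of the idea, not a tested attack; the strength and grating count are pure guesses.

```python
import numpy as np
from PIL import Image

def hostile_noise(h: int, w: int, n_gratings: int = 12) -> np.ndarray:
    rng = np.random.default_rng()
    ys, xs = np.mgrid[0:h, 0:w]
    noise = np.zeros((h, w))
    for _ in range(n_gratings):
        fx, fy = rng.uniform(0.1, 0.5, size=2)  # cycles per pixel: high freq
        phase = rng.uniform(0, 2 * np.pi)
        noise += np.sin(2 * np.pi * (fx * xs + fy * ys) + phase)
    return noise / n_gratings  # roughly within [-1, 1]

def poison(src: str, dst: str, strength: float = 10.0) -> None:
    img = np.array(Image.open(src).convert("RGB"), dtype=np.float32)
    n = hostile_noise(*img.shape[:2])[..., None]  # same noise on all channels
    Image.fromarray((img + strength * n).clip(0, 255).astype(np.uint8)).save(dst)

poison("artwork.png", "noised.png")
```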
That’s the intent behind Nightshade, right?
Would overlaying the image with a different, slightly transparent image be enough to shift the weights? Or is there a specific method of pseudorandom hostile noise generation that you’d suggest?
I’d imagine the former is likely more computationally efficient, but if the latter is more effective at poisoning and your goal is to maximize damage regardless of cost, then that would be the better option.
Every image has a few color channels/layers. In a natural photograph, the noise patterns in these layers differ. If it’s AI diffusion, however, those layers will be uniform.
One thing you can do is overlay noise that resembles features that don’t exist (generated with e.g. Stable Diffusion) inside the color channels of a picture. This will make the AI see features that don’t exist.
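A loose sketch of that per-channel overlay: blend a “feature” image into each color channel at a different strength, so the channels stop sharing the same noise statistics. Here “feature.png” is a stand-in for whatever feature-like noise you generate elsewhere, and the alpha values are guesses, not tuned numbers.

```python
import numpy as np
from PIL import Image

def channel_overlay(base_path: str, feat_path: str, out_path: str,
                    alphas=(0.08, 0.12, 0.05)) -> None:
    base_img = Image.open(base_path).convert("RGB")
    feat = np.array(Image.open(feat_path).convert("L").resize(base_img.size),
                    dtype=np.float32)
    base = np.array(base_img, dtype=np.float32)
    for c, a in enumerate(alphas):  # a different blend per R, G, B channel
        base[..., c] = (1 - a) * base[..., c] + a * feat
    Image.fromarray(base.clip(0, 255).astype(np.uint8)).save(out_path)

channel_overlay("artwork.png", "feature.png", "poisoned.png")
```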
Nightshade layers some form of feature noise on top of an image as an alpha-inlaid pattern, which makes the image quality look ASS, and it’s also defeated if a model is specifically trained to remove Nightshade.
Ultimately this kind of stupid arms-race shit is futile. We need to adopt completely new paradigms for completely new situations.
Isn’t that what the arms race is? Adapting to new situations?
It’s not adapting to change, it is fighting change
With aggressive scrapers, the “change” is having sites slowed or taken offline, basically being DDoSed by scrapers that ignore robots.txt.
What is your proposed adaptation that’s better than countermeasures? Be rich enough to afford more powerful hardware? Simply stop hosting anything on the internet?