I came across Nepenthes today in the comments under a post about AI mazes. It has an option to generate not just an endless pit of links and pages, but also deterministic, random-looking, human-like text for those pages, poisoning the LLM scrapers as they sink into the tarpit.

After reading that, I thought, could you do something similar to poison image scrapers too?

Like if you have an art hosting site, as long as you can get an AI to fall into the tarpit, you could replace all the art it thinks should be there with distorted images from a dataset.

Or just send it to a kind of “parallel” version of the site that replaces (or heavily distorts) all the images but leaves the text descriptions and tags the same.
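
Even something as crude as a user-agent switch might capture the idea. A toy sketch (the scraper names and folder layout here are made-up assumptions, not anything Nepenthes actually does, and real scrapers can spoof their user agent anyway):

```python
# Toy sketch: humans get the real file, suspected scrapers get a pre-distorted copy.
from flask import Flask, request, send_file

app = Flask(__name__)
SCRAPER_HINTS = ("GPTBot", "CCBot", "Bytespider", "python-requests")

@app.route("/art/<name>")
def art(name):
    ua = request.headers.get("User-Agent", "")
    if any(hint in ua for hint in SCRAPER_HINTS):
        # Same URL, same surrounding text and tags; only the image differs.
        return send_file(f"poisoned/{name}")
    return send_file(f"originals/{name}")
```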

I realize any automated image scraper probably has some sort of filter that tries to sort out low-quality images, but if you used images similar to the expected content, that might be enough to get through the filter.

I guess if someone really wanted to poison a model, generating AI replacement images would probably be the most effective way to speed up model decay, but that comes with much higher energy and processing overhead.

Anyway, I’m definitely not skilled/knowledgeable enough to make this a thing myself even just as an experiment. But I thought you all might know if someone’s already done it, or you might find the idea fascinating.

What do you think? Any better ideas or suggestions for poisoning art-scraping AI?

  • Lumidaub@feddit.org · 5 days ago

    I’d imagine auto-generating images that look meaningful but aren’t is a lot more involved than generating text. For images we have Glaze and Nightshade, which you can apply to your own pictures to protect them (Glaze) and/or poison them (Nightshade).

    • hallettj@leminal.space · 4 days ago

      The images probably don’t have to look meaningful as long as it’s difficult to distinguish them from real images with a fast statistical test. Nepenthes uses Markov chains to generate nonsense text that statistically resembles real content, which is a lot cheaper than LLM generation. Maybe Markov chains would also work to generate images? A chain could generate each pixel based on the previous pixel, or on its neighbors, or some such thing.
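
      For example, a toy version of the idea (my own sketch, not anything Nepenthes actually ships; the input filename is a placeholder): train a transition table on the horizontally adjacent pixels of one real image, then sample a new image left to right. A fixed seed makes the output deterministic, like the post describes for Nepenthes’ text.

      ```python
      import numpy as np
      from PIL import Image

      LEVELS = 64           # quantize 0-255 grayscale down to 64 states
      STEP = 256 // LEVELS  # width of each quantization bucket

      def train_chain(path):
          """Count transitions between horizontally adjacent pixels of one real image."""
          px = np.asarray(Image.open(path).convert("L")) // STEP
          counts = np.ones((LEVELS, LEVELS))  # start at 1 so no transition has zero probability
          for row in px:
              for a, b in zip(row[:-1], row[1:]):
                  counts[a, b] += 1
          return counts / counts.sum(axis=1, keepdims=True)  # each row becomes a probability vector

      def generate(trans, w=256, h=256, seed=0):
          """Sample a new image; each pixel depends only on its left neighbor."""
          rng = np.random.default_rng(seed)  # fixed seed = deterministic output
          out = np.zeros((h, w), dtype=np.uint8)
          for y in range(h):
              state = rng.integers(LEVELS)
              for x in range(w):
                  state = rng.choice(LEVELS, p=trans[state])
                  out[y, x] = state * STEP
          return Image.fromarray(out, mode="L")

      trans = train_chain("some_real_artwork.png")  # placeholder input
      generate(trans, seed=42).save("babble.png")
      ```

      The output would roughly match the training image’s first-order pixel statistics while carrying no meaningful content, though a scraper that checks higher-order structure would presumably catch it.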

    • hihi24522@lemm.ee (OP) · edited · 5 days ago

      I’d heard of Glaze before, and Nightshade seems useful, but only Glaze protects against style mimicry, and the Nightshade page makes it sound like the researchers aren’t sure how well the two would work together.

      It looks like Nightshade is doing what I described, though on a single-image basis: tricking the AI into believing the characteristics of one thing apply to another. But I’d imagine the poisoning could be much more potent if the constraint of “still looks the same to a human” were dropped.

      If you know you’re feeding an AI, you can go all out on the poisoning. No one cares what it looks like as long as the AI thinks it’s valid.

      As for the difficulty of generating meaningful images, it would certainly be more computationally intensive than Markov-chain text generation, but I think it might not be that hard if you just modify the real art from the site.

      Say you slapped a ton of Snapchat filters on an artwork, used a blur tool in random places, drew random line segments that are roughly the same color as their nearby pixels, and maybe shifted the hue and saturation. I bet small modifications like that could slip through quality filters but still damage the model (rough sketch at the end of this comment).


      Edit: Just realized this might sound like I’m suggesting that messing up the art shown on the site through more destructive means would be better than Glaze or Nightshade. That’s not what I meant.

      Those edit suggestions were only for the art shown in the tarpit, so you’d only make those destructive modifications to the art you’re showing the AI scrapers. The source images shown to human patrons can remain unedited.
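
      Roughly what I mean, as a quick toy script (the filenames are placeholders, and the specific edits and their strengths are guesses at what might slip past a quality filter):

      ```python
      import random
      from PIL import Image, ImageDraw, ImageFilter

      def poison(src, dst, seed=None):
          """Cheap destructive edits for the tarpit-only copy of an image:
          blur random patches, draw near-matching line segments, shift the hue."""
          rng = random.Random(seed)
          img = Image.open(src).convert("RGB")
          w, h = img.size

          # Blur a handful of random rectangles in place.
          for _ in range(8):
              x0, y0 = rng.randrange(w // 2), rng.randrange(h // 2)
              box = (x0, y0, x0 + w // 4, y0 + h // 4)
              img.paste(img.crop(box).filter(ImageFilter.GaussianBlur(6)), box)

          # Draw short line segments roughly matching the local color.
          draw = ImageDraw.Draw(img)
          for _ in range(40):
              x, y = rng.randrange(w), rng.randrange(h)
              r, g, b = img.getpixel((x, y))
              jitter = lambda c: max(0, min(255, c + rng.randint(-20, 20)))
              draw.line((x, y, x + rng.randint(-30, 30), y + rng.randint(-30, 30)),
                        fill=(jitter(r), jitter(g), jitter(b)), width=2)

          # Shift the hue globally by rotating the H channel in HSV space.
          hch, s, v = img.convert("HSV").split()
          hch = hch.point(lambda p: (p + rng.randint(16, 64)) % 256)
          Image.merge("HSV", (hch, s, v)).convert("RGB").save(dst)

      poison("originals/artwork.png", "tarpit/artwork.png", seed=1)  # placeholder paths
      ```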

      • Lumidaub@feddit.org · 5 days ago

        It’s probably not perfect, but this is what we have right now. I only nightshade my stuff because I don’t think anyone will want to imitate my style anyway (I’m years and years away from that kind of recognition), and I think it’s more important to go on the offensive. Ideally, imho, everybody should at least nightshade every image file they upload (drawing, painting, photograph), not just “art” but anything visual.