I came across Nepenthes today in the comments under a post about AI mazes. It has an option not just to generate an endless pit of links and pages, but also to deterministically generate random, human-like text for those pages, poisoning the LLM scrapers as they sink into the tarpit.
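
To illustrate the deterministic part, here's a minimal sketch of my own (not actual Nepenthes code, which I believe uses Markov-chain babble): seed a PRNG from the URL path, so every page always comes back with the same text and the same onward links, and the maze looks stable to a crawler.

```python
# Minimal sketch of a Nepenthes-style tarpit page (illustrative, not Nepenthes).
# The URL path seeds a PRNG, so the same path always yields the same page.
import hashlib
import random

WORDS = ("the quick brown fox jumps over a lazy dog while rain "
         "falls softly on distant hills and rivers wind slowly home").split()

def page_for(path: str, n_words: int = 80, n_links: int = 5) -> str:
    # Stable seed derived from the path: same URL -> same page, forever.
    seed = int.from_bytes(hashlib.sha256(path.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    text = " ".join(rng.choice(WORDS) for _ in range(n_words))
    # Every page links deeper into the maze; the links are deterministic too.
    links = "".join(
        f'<a href="{path.rstrip("/")}/{rng.randrange(10**6)}">more</a>'
        for _ in range(n_links)
    )
    return f"<html><body><p>{text}</p>{links}</body></html>"

if __name__ == "__main__":
    assert page_for("/maze/1") == page_for("/maze/1")  # deterministic
    print(page_for("/maze/1")[:120])
```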

After reading that, I thought, could you do something similar to poison image scrapers too?

Like if you have an art hosting site: as long as you can get a scraper to fall into the tarpit, you could replace all the art it expects to find with distorted images from a dataset.

Or just send it to a kind of “parallel” version of the site that replaces (or heavily distorts) all the images but leaves the text descriptions and tags the same.
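
A rough sketch of what that parallel version could look like (assuming a Flask app; is_scraper() and the images/ layout are hypothetical placeholders, and real scraper detection is the hard part):

```python
# Sketch of the "parallel site" idea: same page, tags, and alt text,
# but suspected scrapers get a distorted copy of each image.
from io import BytesIO

from flask import Flask, request, send_file
from PIL import Image, ImageFilter

app = Flask(__name__)

def is_scraper(req) -> bool:
    # Hypothetical placeholder: real detection (rate limits, headers,
    # JS challenges) is the hard part and out of scope here.
    return "bot" in req.headers.get("User-Agent", "").lower()

def distort(img: Image.Image) -> Image.Image:
    # Swap channels and blur heavily: still "an image", but useless to train on.
    r, g, b = img.convert("RGB").split()
    return Image.merge("RGB", (b, r, g)).filter(ImageFilter.GaussianBlur(8))

@app.route("/art/<name>")
def art(name: str):
    img = Image.open(f"images/{name}.png")  # hypothetical storage layout
    if is_scraper(request):
        img = distort(img)
    buf = BytesIO()
    img.save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")
```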

I realize any automated image scraper probably has some sort of filter to sort out low-quality images, but if the replacements were similar to the expected content, that might be enough to get through.

I guess if someone really wanted to poison a model, generating AI replacement images would probably be the most effective way to speed up model decay, but that comes with much higher energy and processing overhead.

Anyway, I’m definitely not skilled/knowledgeable enough to make this a thing myself even just as an experiment. But I thought you all might know if someone’s already done it, or you might find the idea fascinating.

What do you think? Any better ideas / suggestions for poisoning art scraping AI?

  • hihi24522@lemm.eeOP · 4 days ago

    Nice straw man infographic, but I’m not sure how it’s relevant.

    My post was about methods to poison art-scraping models. I said nothing about my reasons for doing so; maybe I just like fucking up corpos. Maybe I just like thinking about interesting topics and hearing other people’s ideas.

    Kind of sad that you’re worked up enough about this to both miss the point and have an infographic on hand in case anyone fails to praise generative AI.

    If you do have any knowledge of how AI functions, I’d be happy to hear your thoughts on the topic, which, again, is how to poison models that use image scrapers, not the ethics of AI or lack thereof.

    • vivendi@programming.dev · 4 days ago

      Fuck with their noise models.

      Create a system that generates pseudorandom hostile noise (noise that triggers a network’s feature detectors) and layers it on top of the image. This will create false neural circuits in anything trained on it.
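
      Rough sketch of the overlay mechanics (seeded numpy noise; an actual feature-triggering perturbation would need gradients from a target model, so treat this as plumbing only):

      ```python
      # Sketch: layer seeded pseudorandom noise on top of an image.
      # Real "hostile" noise (an adversarial perturbation) needs a target
      # model's gradients; this only shows the overlay step.
      import numpy as np
      from PIL import Image

      def add_noise(img, seed=0, strength=12.0):
          rng = np.random.default_rng(seed)  # seeded -> reproducible noise
          arr = np.asarray(img.convert("RGB"), dtype=np.float32)
          out = np.clip(arr + rng.normal(0.0, strength, arr.shape), 0, 255)
          return Image.fromarray(out.astype(np.uint8))

      add_noise(Image.open("artwork.png")).save("noised.png")  # hypothetical file
      ```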

      • hihi24522@lemm.eeOP · 4 days ago

        That’s the intent behind Nightshade, right?

        Would overlaying the image with a different, slightly transparent image be enough to shift the weights? Or is there a specific method of pseudorandom hostile noise generation that you’d suggest?

        I’d imagine the former is likely more computationally efficient, but if the latter is more effective at poisoning and your goal is to maximize damage regardless of cost, then that would be the better option.
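
        (The former is certainly cheap to implement; e.g., a PIL sketch with hypothetical filenames:)

        ```python
        # Sketch of the cheap option: alpha-blend a second image over the original.
        # alpha=0.15 keeps the overlay faint to a human but present in the pixels.
        from PIL import Image

        base = Image.open("artwork.png").convert("RGB")
        overlay = Image.open("decoy.png").convert("RGB").resize(base.size)
        Image.blend(base, overlay, alpha=0.15).save("blended.png")
        ```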

        • vivendi@programming.dev · 4 days ago

          Every image has a few color channels/layers. In a natural photograph, the noise patterns in those channels differ; in AI diffusion output, they tend to be uniform.

          One thing you can do is overlay noise that resembles nonexistent features (generated with e.g. Stable Diffusion) inside the color channels of a picture. This makes the model see features that aren’t there.
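
          A sketch of the channel idea (numpy/PIL assumed; the sine grid is just a stand-in for a model-derived pattern):

          ```python
          # Sketch of per-channel injection: add a structured pattern to one
          # color channel only, so it's subtle to humans but skews the
          # channel's noise statistics.
          import numpy as np
          from PIL import Image

          def inject_channel(img, channel=2, amp=10.0):
              arr = np.asarray(img.convert("RGB"), dtype=np.float32)
              h, w = arr.shape[:2]
              yy, xx = np.mgrid[0:h, 0:w]
              # Stand-in "feature": a high-frequency grid. A real attack would
              # embed model-derived structure (e.g. diffusion features) instead.
              pattern = amp * np.sin(xx / 3.0) * np.sin(yy / 3.0)
              arr[:, :, channel] = np.clip(arr[:, :, channel] + pattern, 0, 255)
              return Image.fromarray(arr.astype(np.uint8))

          inject_channel(Image.open("artwork.png")).save("poisoned.png")  # hypothetical
          ```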

          Nightshade layers some form of feature noise on top of an image as an alpha-inlaid pattern, which makes the image look ASS, and it’s also defeated if a model is specifically trained to remove Nightshade.

          Ultimately this kind of stupid arms-race shit is futile. We need to adopt completely new paradigms for completely new situations.

              • hihi24522@lemm.eeOP · 3 days ago

                With aggressive scrapers, the “change” is having sites slowed or taken offline, basically being DDoSed by scrapers that ignore robots.txt.

                What is your proposed adaptation that’s better than countermeasures? Be rich enough to afford more powerful hardware? Simply stop hosting anything on the internet?

                • vivendi@programming.dev · 3 days ago

                  If you just want to stop scrapers, use Anubis. Why tf are you moving your own goalpost?

                  Also, if your website is trash enough that it gets downed by 15,000 requests, you should either hire a proper network engineer or fire yours. Like wtf man, I made trash-tier WordPress websites that handled orders of magnitude more in 2010.

                  EDIT: And in the case of ScummVM, stop using PHP. Jesus Christ, this isn’t 2005.

                    • hihi24522@lemm.eeOP · 3 days ago

                      Oh, when you said arms race I thought you were referring to all anti-AI countermeasures, including Anubis and tarpits.

                    Were you only saying you think AI poisoning methods like Glaze and Nightshade are futile? Or do you also think AI mazes/tarpits are futile?

                      Both kind of seem like a more selfless version of protections like Anubis.

                      Instead of just protecting your own site from scrapers, a tarpit traps the scraper, stopping it from harming other people’s services whether or not they have protections of their own in place.

                      In the case of poisoning, you also protect others by making it riskier for AI to train on automatically scraped data, which would disincentivize the use of automated scrapers on the net in the first place.