We have all seen AI-based search tools on the web, like Copilot, Perplexity, DuckAssist etc., which scour the web for information, present it in summarized form, and cite sources in support of the summary.

But how do they know which sources are legitimate and which are simple BS? Do they exercise judgement while crawling, or do they have some kind of filter list around the “trustworthiness” of various web sources?
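One way the “filter list” idea could work in practice is a curated per-domain quality score used to re-rank retrieved pages before summarization. This is only a sketch of that hypothesis, not how any particular product works; all domains, scores, and function names here are invented:

```python
from urllib.parse import urlparse

# Hypothetical curated trust scores per domain (values invented for illustration).
DOMAIN_TRUST = {
    "nature.com": 0.95,
    "wikipedia.org": 0.85,
    "randomblog.example": 0.30,
}

def trust_score(url, default=0.5):
    """Look up a trust score for a URL's domain; unknown domains get a neutral default."""
    host = urlparse(url).netloc
    for domain, score in DOMAIN_TRUST.items():
        # Match the registered domain or any subdomain, e.g. en.wikipedia.org.
        if host == domain or host.endswith("." + domain):
            return score
    return default

def rank_sources(urls, min_trust=0.4):
    """Drop low-trust sources and order the rest from most to least trusted."""
    scored = [(trust_score(u), u) for u in urls]
    return [u for s, u in sorted(scored, reverse=True) if s >= min_trust]
```

With the invented scores above, a low-trust blog would be filtered out entirely while a Wikipedia page survives and ranks first.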

  • scott@lemmy.org · 3 days ago

    AI does not exist. What we have are language prediction models. Trying to use them as an AI is foolish.

      • Apepollo11@lemmy.world · 2 days ago

        At the end of the day, isn’t that just how we work, though? We tokenise information, make connections between these tokens and regurgitate them in ways that we’ve been trained to do.

        Even our “novel” ideas are always derivative of something we’ve encountered. They have to be, otherwise they wouldn’t make any sense to us.

        Describing current AI models as “fancy auto-complete” feels like describing electric cars as “fancy Scalextric”. Neither is completely wrong, but both are massively over-reductive.

        • Swordgeek@lemmy.ca · 2 days ago

          I’ve thought a lot about this over the last few years, and have decided there’s one critical distinction: Understanding.

          When we combine knowledge to come to a conclusion, we understand (or even misunderstand) that knowledge we’re using. We understand the meaning of our conclusion.

          LLMs don’t understand. They programmatically and statistically combine data - not knowledge - to come up with a likely outcome. They are non-deterministic auto-complete bots, and that is ALL they are. There is no intelligence, and the current LLM framework will never lead to actual intelligence.

          They’re parlour tricks at this point, nothing more.
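The “statistically combine data to come up with a likely outcome” point above can be sketched as next-token sampling. The probability distribution here is a toy, invented for illustration; real models compute it over tens of thousands of tokens, but the sampling step itself works like this:

```python
import random

# Toy distribution a model might assign to the token after "The sky is"
# (probabilities invented for illustration).
next_token_probs = {"blue": 0.70, "clear": 0.15, "falling": 0.10, "lava": 0.05}

def sample_next_token(probs, temperature=1.0):
    """Sample a token from the distribution; any temperature > 0 is non-deterministic."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Greedy decoding (the limit as temperature -> 0) is deterministic:
# it always picks the single most probable token.
greedy = max(next_token_probs, key=next_token_probs.get)
```

This is exactly the non-determinism the comment describes: run the sampler twice with the same input and you can get different continuations, because the model outputs likelihoods, not answers.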