We have all seen the AI-based search tools available on the web, like Copilot, Perplexity, DuckAssist, etc., which scour the web for information, present it in summarized form, and cite sources in support of the summary.
But how do they know which sources are legitimate and which are simply BS? Do they exercise judgement while crawling, or do they have some kind of filter list around the "trustworthiness" of various web sources? A rough sketch of what that might look like is below.
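Nobody outside those companies knows for sure, but the speculated "filter list" approach would be fairly mundane in practice: rank retrieved pages against a hand-maintained domain trust table before summarizing. Here is a minimal Python sketch of that idea; the domains, scores, threshold, and `Source` type are all made-up assumptions, not anything Copilot, Perplexity, or DuckAssist is known to actually do.

```python
# Minimal sketch of a "trust list" filter over retrieved sources.
# Everything here (domains, scores, threshold) is illustrative only.
from dataclasses import dataclass
from urllib.parse import urlparse

# Hypothetical allow-list: domain -> trust score in [0, 1].
DOMAIN_TRUST = {
    "en.wikipedia.org": 0.9,
    "nature.com": 0.95,
    "randomblog.example": 0.2,
}

@dataclass
class Source:
    url: str
    snippet: str

def trust_score(url: str) -> float:
    """Look up the source's domain in the trust table (unknown domains get low trust)."""
    domain = urlparse(url).netloc.lower()
    return DOMAIN_TRUST.get(domain, 0.1)

def filter_sources(sources: list[Source], threshold: float = 0.5) -> list[Source]:
    """Keep only sources whose domain clears the threshold, most trusted first."""
    kept = [s for s in sources if trust_score(s.url) >= threshold]
    return sorted(kept, key=lambda s: trust_score(s.url), reverse=True)

if __name__ == "__main__":
    results = [
        Source("https://en.wikipedia.org/wiki/Large_language_model", "LLMs are..."),
        Source("https://randomblog.example/llms-are-magic", "Trust me..."),
    ]
    for s in filter_sources(results):
        print(s.url)  # only the Wikipedia result survives the filter
```

The more interesting question is who maintains that list and how, since a static table like this bakes in someone's editorial judgement about what counts as "trustworthy."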
AI does not exist. What we have are language prediction models. Trying to use them as an AI is foolish.
In other words, “fancy auto-complete.”
At the end of the day, isn’t that just how we work, though? We tokenise information, make connections between these tokens and regurgitate them in ways that we’ve been trained to do.
Even our “novel” ideas are always derivative of something we’ve encountered. They have to be, otherwise they wouldn’t make any sense to us.
Describing current AI models as "fancy auto-complete" feels like describing electric cars as "fancy Scalextric". Neither description is completely wrong, but both are massively over-reductive.
I’ve thought a lot about this over the last few years, and have decided there’s one critical distinction: Understanding.
When we combine knowledge to come to a conclusion, we understand (or even misunderstand) the knowledge we're using. We understand the meaning of our conclusion.
LLMs don’t understand. They programmatically and statistically combine data - not knowledge - to come up with a likely outcome. They are non-deterministic auto-complete bots, and that is ALL they are. There is no intelligence, and the current LLM framework will never lead to actual intelligence.
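For what it's worth, the "statistically combine data to come up with a likely outcome" part is easy to demo at toy scale. Below is a crude bigram auto-complete in Python; real LLMs are neural networks over subword tokens, not a frequency table, so treat this purely as an illustration of sampling-based prediction. The corpus and words are made up for the example.

```python
# Toy "non-deterministic auto-complete": pick the next word by sampling
# from observed follow-word frequencies. Illustration only.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: counts[w][next_w] = frequency.
counts: dict[str, Counter] = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    counts[w][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    words, weights = zip(*counts[word].items())
    return random.choices(words, weights=weights, k=1)[0]

word = "the"
completion = [word]
for _ in range(5):
    if not counts[word]:   # dead end: no observed continuation
        break
    word = next_word(word)
    completion.append(word)

print(" ".join(completion))
```

Run it a few times and the same prompt gives different continuations: that is the non-deterministic, statistical part, and nothing in it requires the model to understand what a cat or a mat is.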
They’re parlour tricks at this point, nothing more.
We kinda are just temporal auto-complete ourselves, though.