Coding isn’t special, you’re right, but it’s a thinking task, and LLMs (including the reasoning models) don’t know how to think. LLMs are knowledgeable because they memorized a huge amount of data and patterns from their training data, but they never learned to think from that. That’s why LLMs can’t replace humans.
That certainly doesn’t mean software can’t be smarter than humans. It will be, it’s just a matter of time, but to get there we’ll likely need AGI first.
To see for yourself that LLMs can’t think, try playing ASCII tic tac toe (XXO) against any of those models. They play terribly, even though they “saw” the entire Wikipedia article on how xxo works during training: that it’s a solved game, the different strategies, how to consistently force a draw. Still they can’t do it. They lose most games against my four-year-old niece, and she doesn’t even play good, let alone perfect, xxo.
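Just to underline how low the bar is: perfect xxo fits in a few dozen lines of brute-force minimax. Here’s a minimal sketch in Python (my own illustration; names like `best_move` are mine, nothing taken from any model or library) that never loses and, playing against itself, always draws:

```python
def winner(b):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
             (0, 4, 8), (2, 4, 6)]             # diagonals
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Score a finished-or-ongoing position from X's perspective:
    +1 X wins, 0 draw, -1 O wins. `player` is the side to move."""
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in b:
        return 0  # board full, draw
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(b[:i] + player + b[i + 1:], nxt)
              for i, c in enumerate(b) if c == ' ']
    return max(scores) if player == 'X' else min(scores)

def best_move(b, player):
    """Pick the empty square whose resulting position minimax rates
    best for `player` (X maximizes, O minimizes)."""
    nxt = 'O' if player == 'X' else 'X'
    moves = [i for i, c in enumerate(b) if c == ' ']
    score = lambda i: minimax(b[:i] + player + b[i + 1:], nxt)
    return max(moves, key=score) if player == 'X' else min(moves, key=score)

if __name__ == '__main__':
    # Two perfect players face off; the result is always a draw.
    board, player = ' ' * 9, 'X'
    while winner(board) is None and ' ' in board:
        m = best_move(board, player)
        board = board[:m] + player + board[m + 1:]
        player = 'O' if player == 'X' else 'X'
        print(board[0:3] + '\n' + board[3:6] + '\n' + board[6:9] + '\n')
    print('result:', winner(board) or 'draw')
```

That’s the whole game, solved by exhaustive search on a board of nine squares. A program this trivial plays perfectly; the models that read about the strategy can’t.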
I wouldn’t trust anything that’s claimed to handle thinking tasks, yet can’t even beat my niece at xxo, to write firmware for cars or airplanes.
LLMs are great when used like search engines or as interactive versions of Wikipedia/Stack Overflow. But they certainly can’t think. For now, at least; real thinking models will likely need different architectures than LLMs have.
That probably won’t happen under capitalism, though. It’s way too expensive (time-consuming) to write good software and make good products.