• Pennomi@lemmy.world · 2 days ago

    Why would I trust a drill press when it can’t even cut a board in half?

    • wischi@programming.dev · 2 days ago (edited)

      A drill press (or its inventors) doesn’t claim it can do that, but LLM makers claim their models can replace humans on a lot of thinking tasks. They even brag about benchmark scores, claim Bachelor’s-, Master’s-, and PhD-level intelligence, and call them “reasoning” models, yet those models still fail to beat my niece at tic-tac-toe, who, by the way, doesn’t have a PhD in anything 🤣

      LLMs are typically good at things that appeared a lot in their training data. If you’re writing software, there are certainly patterns the LLM saw many times during training. But that’s actually the biggest problem: it will happily generate code that looks fine, even during PR review, but blows up in your face a few weeks later.
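
      A hypothetical example of what I mean (the function is made up, but the bug pattern is a classic): code like this sails through a quick review and every smoke test, then misbehaves weeks later.

      ```python
      # Looks clean and passes every one-off test...
      def add_tag(tag, tags=[]):   # BUG: the default list is created once and shared
          tags.append(tag)
          return tags

      print(add_tag("a"))  # ['a']
      print(add_tag("b"))  # ['a', 'b']  <- later calls leak state across requests
      ```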

      If they can’t reliably handle things they did see during training, just sparsely, like tic-tac-toe, I wouldn’t expect them to produce code you should use in production. I wouldn’t trust any junior dev who doesn’t put their O right next to the opponent’s two Xs.
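
      And for scale, a minimal sketch (the board encoding and function name are my own assumptions) of how little “reasoning” that blocking move actually takes:

      ```python
      # The eight winning lines of a 3x3 board, indexed 0..8.
      LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
               (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
               (0, 4, 8), (2, 4, 6)]              # diagonals

      def forced_move(board, me="O", them="X"):
          """Return the cell that wins for `me`, or blocks `them`, if any."""
          for want in (me, them):                  # prefer winning over blocking
              for a, b, c in LINES:
                  cells = [board[a], board[b], board[c]]
                  if cells.count(want) == 2 and cells.count(" ") == 1:
                      return (a, b, c)[cells.index(" ")]
          return None

      # X threatens the top row; the only sane move is to block at index 2.
      print(forced_move(list("XX  O    ")))  # -> 2
      ```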

      • Pennomi@lemmy.world · 2 days ago

        Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.

        I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)

        • wischi@programming.dev · 2 days ago

          Totally agree, and I don’t think anybody would find that controversial. LLMs are actually good at a lot of things, but thinking isn’t one of them, and they’re typically not much help once you’re an expert yourself. That’s why an LLM knows more about human anatomy than I do, but probably not more than most people with a medical degree.

      • wischi@programming.dev · 2 days ago

        I can’t speak for Lemmy, but I’m personally not against LLMs and use them regularly myself. As Pennomi said (and I totally agree), LLMs are a tool, and we should use that tool for the things it’s good at. But “thinking” is not one of those things, and software engineering requires a ton of thinking. Of course there are tasks (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/IntelliSense, macros, and code snippets/templates already cover those, and never was I bottlenecked by my typing speed when writing software.

        The bottleneck was always the time I needed to plan the structure of the software and to design good, correct abstractions and the overall architecture, which are exactly the things LLMs can’t do.

        Copilot even fails to stick to the coding style used in the same file, simply because it saw a different style more often during training.
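
        A made-up illustration (not a real Copilot transcript) of the drift I mean: the file is consistently snake_case, but the suggestion follows the style the model saw more often in training.

        ```python
        # Existing code in the file, consistently snake_case:
        def load_config(path):
            ...

        def parse_headers(raw):
            ...

        # Hypothetical next suggestion, drifting to camelCase:
        def validateInput(userData):
            ...
        ```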

        • Zexks@lemmy.world · 2 days ago

          “I’m not against LLMs, I just never say anything useful about them and constantly point out how I can’t use them.” The other guy is right, and you just proved his point.

          • wischi@programming.dev · 2 days ago (edited)

            I don’t see how that follows, because I did point out in another comment that they are very useful if you use them like search engines or an interactive Stack Overflow or Wikipedia.

            LLMs are extremely knowledgeable (as in they “know” a lot) but are completely dumb.

            If you want to anthropomorphise it: a current LLM is like a person who has read the entire internet and remembered a lot of it, but is still too stupid to win or draw at tic-tac-toe.

            So there is value in LLMs if you use them for their knowledge.