Bonus issue:

This one is a little bit less obvious

    • coherent_domain@infosec.pub · 28 days ago

      My conspiracy theory is that early LLMs have a hard time figuring out the logical relation between sentences, and hence don't generate good transitions between them.

      I think the bullet points might be manually tuned up by the developers rather than inherently present in the model, because we don't tend to see bullet points that much in normal human communication.

    • Tolookah@discuss.tchncs.de · 29 days ago

      Oh, I can help! 🎉

      1. computers like lists, they organize things.
      2. itemized things are better when linked! 🔗
      3. I hate myself a little for writing this out 😐
  • FQQD! @lemmy.ohaa.xyz · 29 days ago

    Wow, this just hurts. The “twice, I might add!” is sooooo fucking bad. I don’t have any words for this.

  • Korne127@lemmy.world · 29 days ago

    I mean, even if it's annoying that someone obviously used AI, they probably still have that problem and just suck at communicating it themselves.

    • qaz@lemmy.world (OP) · 29 days ago

      They don’t, because it’s not an actual issue for any human reading it. The README contains the data and the repo is just for coordination, but the LLM doesn’t understand that.