They’re two different tools with different purposes, so why treat one like it can replace the other?

  • mapumbaa@lemmy.zip · 2 days ago

    Because it’s useful. Have you tried it? The LLM has to be able to use conventional search engines as well, though. I tell my LLM agent to prioritize certain kinds of websites and present a compressed answer with references. It usually works way better than a standard Google search (which only produces AI-generated junk results anyway).

    You can get very good answers or search results by utilizing RAG.
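    Roughly, the retrieve-then-prompt pattern looks like this. A minimal Python sketch, not my actual setup: `ask_llm()` is a hypothetical stand-in for whatever model you call, and the keyword-overlap retrieval is just for illustration (real RAG setups use embedding similarity):

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# ask_llm() is a hypothetical placeholder for a real model call;
# retrieval here is plain keyword overlap, purely for illustration.

def retrieve(query, documents, k=2):
    """Rank documents by shared words with the query, return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Assemble a prompt that grounds the answer in the retrieved sources."""
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in docs)
    return (
        "Answer using ONLY the sources below and cite them by name.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

docs = [
    {"source": "wiki/ssd", "text": "An SSD stores data in flash memory cells."},
    {"source": "wiki/hdd", "text": "A hard disk drive stores data on spinning platters."},
    {"source": "blog/cats", "text": "Cats sleep most of the day."},
]

top = retrieve("how does an SSD store data", docs)
prompt = build_prompt("How does an SSD store data?", top)
# prompt now contains only the two storage documents as context;
# the final step would be something like: answer = ask_llm(prompt)
```

    The point is that the model is asked to answer from retrieved sources with references, rather than from whatever it memorized in training.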

    • ALostInquirer@lemm.ee (OP) · 1 day ago

      I’ve not used retrieval augmented generation as far as I’m aware, so my reference point is what’s been pushed to the masses so far (dunno if any of it incorporates RAG, correct me if I’m mistaken).

      Looking it up, I can see how it may mitigate some issues, but I still don’t have much confidence that this is a wise application, since at its base it’s still generative text. What I’ve tried so far has reinforced this view, as it hasn’t served as a good research aid.

      Anything it has generated for me has typically been superficial, i.e. info I can easily find on my own because it’s right there on the first page of search results. In other cases, the source articles it cites seem not to exist, as attempts to verify them turn up nothing.

      • mapumbaa@lemmy.zip · 1 day ago

        My only advice is to stick with one model and give it some time; you need to “learn” your model. I will probably never go back. I had huge issues getting good results from Google, and my subjective experience is that this is far better. That said, I still use conventional search engines as a complement. It’s not all or nothing.

    • TryingSomethingNew@lemmy.world · 2 days ago

      Can you please share a simple prompt? I’ve heard of RAG but wasn’t aware you could use it in this case. Is this something you can only do with a local LLM, or can you plug it into GitHub Copilot or the like?