ChatGPT is dismissing it, but I’m not so sure.

  • traches@sh.itjust.works
    6 hours ago

    Because it’s like a search box you can explain a problem to and get a bunch of words related to it without having to wade through blogspam, 10 year old Reddit posts, and snippy stackoverflow replies. You don’t have to post on discord and wait a day or two hoping someone will maybe come and help. Sure it is frequently wrong, but it’s often a good first step.

    And no I’m not an AI bro at all, I frequently have coworkers dump AI slop in my inbox and ask me to take it seriously and I fucking hate it.

    • richmondez@lemdro.id
      5 hours ago

      But once you have its output, unless you already know enough to judge whether it’s correct, you have to fall back on doing all the things you used the AI to avoid in order to verify what it told you.

      • traches@sh.itjust.works
        4 hours ago

        Sure, but at least you have something to work with, rather than only whatever you know off the top of your head.

    • non_burglar@lemmy.world
      3 hours ago

      It is not a search box. It generates words that are, quite often, confidently wrong.

      “Asking” gpt is like asking a magic 8 ball; it’s fun, but it has zero meaning.

      • traches@sh.itjust.works
        2 hours ago

        Well, that’s just blatantly false. They’re extremely useful for the initial stage of research, when you’re not really sure where to begin or what to even look for — when you don’t know what you should read, or even what the correct terminology is for your problem. They’re “language models,” which means they’re halfway decent at working with language.

        They’re noisy, lying plagiarism machines that have opened a whole Pandora’s box of problems and are being shoved into many places where they don’t belong. That doesn’t make them useless in all circumstances.

        • non_burglar@lemmy.world
          16 minutes ago

          Not false, and shame on you for suggesting it.

          I not only disagree, but sincerely hope you aren’t encouraging anyone to look up information using an LLM.

          LLMs are toys right now.