• VintageGenious@sh.itjust.works · 8 days ago

    Because you’re using it wrong. It’s good for generative text and chains of thought, not symbolic calculation such as math or linguistics.

    • Grandwolf319@sh.itjust.works · 8 days ago

      Because you’re using it wrong.

      No, I think you mean to say it’s because you’re using it for the wrong use case.

      Well, this tool has been marketed as if it could handle such use cases.

      I don’t think I’ve actually seen any AI marketing that was honest about what it can do.

      I personally think image recognition is the best use case as it pretty much does what it promises.

    • Prandom_returns@lemm.ee · 7 days ago

      So for something you can’t objectively evaluate? Looking at Apple’s garbage generator, LLMs aren’t even good at summarising.

      • Balder@lemmy.world · 17 hours ago (edited)

        For reference:

        AI chatbots unable to accurately summarise news, BBC finds

        the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer. […] It found 51% of all AI answers to questions about the news were judged to have significant issues of some form. […] 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

        It reminds me that I basically stopped using LLMs for any summarization after this exact thing happened to me. I realized that without reading the original text, I couldn’t tell whether the output contained all the relevant info or whether some of it was made up.

      • slaacaa@lemmy.world · 6 days ago

        I have it write emails for me in German. I moved there not too long ago, and it works wonders for getting doctor’s appointments, car service, etc. I also have it explain the text, so I’m learning the language.

        I also use it as an alternative to internet search, which is now terrible. It’s not going to help you find something super location-specific, but I can ask it to tell me something about a game or movie without spoilers, or to list Metacritic scores in a table, etc.

        It also works great in summarizing long texts.

        An LLM is a tool; what matters is how you use it. It is stupid, it doesn’t think, and it’s mostly hype to call it AI. But it definitely has its benefits.

      • scarabic@lemmy.world · 6 days ago

        We have one that indexes all the wikis and GDocs and such at my work and it’s incredibly useful for answering questions like “who’s in charge of project 123?” or “what’s the latest update from team XYZ?”

        I even asked it to write my weekly update for MY team once and it did a fairly good job. The one thing I thought it had hallucinated turned out to be something I just hadn’t heard yet. So it was literally ahead of me at my own job.

        I get really tired of all the automatic hate over stupid bullshit like this OP. These tools have their uses. It’s very popular to shit on them. So congratulations for whatever agreeable comments your post gets. Anyway.

      • chiisana@lemmy.chiisana.net · 8 days ago

        Ask it for a second opinion on medical conditions.

        Sounds insane, but it’s leaps and bounds better than blindly Googling and self-diagnosing every condition under the sun when the symptoms only vaguely match.

        Once the LLM helps you narrow in on a couple of possible conditions based on the symptoms, then you can dig deeper into those specific ones, learn more about them, and have a slightly more informed conversation with your medical practitioner.

        They’re not a replacement for your actual doctor, but they can help you learn and have better discussions with your actual doctor.

        • Wogi@lemmy.world · 8 days ago

          So can web MD. We didn’t need AI for that. Googling symptoms is a great way to just be dehydrated and suddenly think you’re in kidney failure.

          • chiisana@lemmy.chiisana.net · 8 days ago

            We didn’t stop trying to make faster, safer and more fuel efficient cars after Model T, even though it can get us from place A to place B just fine. We didn’t stop pushing for digital access to published content, even though we have physical libraries. Just because something satisfies a use case doesn’t mean we should stop advancing technology.

            • Wogi@lemmy.world · 8 days ago

              We also didn’t make the Model T suggest replacing the engine when the oil light comes on. Cars, as it happens, aren’t that great at self-diagnosis, despite that technology being far simpler and further along than generative models are. I don’t trust the model to tell me what temperature to bake a cake at; I’m sure as hell not going to trust it with medical information. Googling symptoms was risky at best before. It’s a horror show now.

            • snooggums@lemmy.world · 8 days ago

              AI is slower and less efficient than the older search algorithms and is less accurate.

      • L3s@lemmy.worldM · 8 days ago (edited)

        Writing customer/company-wide emails is a good example. “Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online”

        Dumbing down technical information “word this so a non-technical person can understand: our DHCP scope filled up and there were no more addresses available for Site A, which caused the temporary outage for some users”

        Another is feeding it an article and asking for a summary, https://hackingne.ws/ does that for its Bsky posts.

        Coding is another good example, “write me a Python script that moves all files in /mydir to /newdir”

        Asking for it to summarize a theory or protocol, “explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2”

        • Corngood@lemmy.ml · 8 days ago

          Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online

          How does this work in practice? I suspect you’re just going to get an email that takes longer for everyone to read, and doesn’t give any more information (or worse, gives incorrect information). Your prompt seems like what you should be sending in the email.

          If the model (or context?) was good enough to actually add useful, accurate information, then maybe that would be different.

          I think we’ll get to the point really quickly where a nice concise message like in your prompt will be appreciated more than the bloated, normalised version, which people will find insulting.

          • locuester@lemmy.zip · 5 hours ago

            Yes, people are using it as the least efficient communication protocol ever.

            One side asks an LLM to expand a summary into a fluff filled email, and the other side asks an LLM to reduce the long email to a summary.

        • snooggums@lemmy.world · 8 days ago (edited)

          The dumbed down text is basically as long as the prompt. Plus you have to double check it to make sure it didn’t have outrage instead of outage just like if you wrote it yourself.

          How do you know the answer on why RIP was replaced with RIPv2 is accurate and not just a load of bullshit like putting glue on pizza?

          Are you really saving time?

            • snooggums@lemmy.world · 8 days ago (edited)

              If the amount of time it takes to create the prompt is the same as it would have taken to write the dumbed down text, then the only time you saved was not learning how to write dumbed down text. Plus you need to know what dumbed down text should look like to know if the output is dumbed down but still accurate.

        • lurch (he/him)@sh.itjust.works · 7 days ago

          It’s not good for summaries. It often gets important bits wrong, like embedded instructions that can’t be summarized.

          • L3s@lemmy.worldM · 7 days ago (edited)

            My experience has been very different, though I do sometimes have to add to what it summarized. The Bsky account mentioned is a good example: most of the posts are summarized very well, but every now and then there’s one that isn’t as accurate.

      • chaosCruiser@futurology.today · 8 days ago (edited)

        Here’s a bit of code that’s supposed to do stuff. I got this error message. Any ideas what could cause this error and how to fix it? Also, add this new feature to the code.

        Works reasonably well as long as you have some idea how to write the code yourself. GPT can do it in a few seconds, debugging it would take like 5-10 minutes, but that’s still faster than my best. Besides, GPT is also fairly fluent in many functions I have never used before. My approach would be clunky and convoluted, while the code generated by GPT is a lot shorter.

        If you’re well familiar with the code you’re working on, GPT’s version will look convoluted by comparison. In that case, you can ask GPT for a rough alpha version and do the debugging and refining yourself in a few minutes.

        • Windex007@lemmy.world · 8 days ago

          That makes sense as long as you’re not writing code that needs to know how to do something as complex as …checks original post… count.

          • TimeSquirrel@kbin.melroy.org · 8 days ago

            It can do that just fine, because it has seen enough examples of working code. It can’t directly count correctly, sure, but it can write “i++;”, incrementing a variable by one in a loop and returning the result. The computer running the generated program is going to be doing the counting.
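            For instance, the model only has to emit counting code; the program does the counting when it runs. A trivial Python sketch (count_items is a made-up name for illustration):

```python
def count_items(items):
    # The generated code counts by incrementing a variable in a loop;
    # the interpreter, not the language model, performs the arithmetic.
    count = 0
    for _ in items:
        count += 1
    return count
```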