• Teknikal@eviltoast.org · ↑9 ↓1 · 8 hours ago

    I just tried it on DeepSeek; it did it fine and gave the source for everything it mentioned as well.

    • heavydust@sh.itjust.works · ↑6 · 6 hours ago

      Not only techbros though. Most of my friends are not into computers, but they all think AI is magical and will change the whole world for the better. I always ask, “how could a black box that throws up random crap and runs on the computers of big companies out of the country change anything?” They don’t know what to say, but they still believe something will happen and a program can magically become sentient. Sometimes they can be fucking dumb but I still love them.

      • shrugs@lemmy.world · ↑1 ↓1 · 1 hour ago

        The more you know what you are doing, the less impressed you are by AI. Calling people who trust AI idiots is not a good start to a conversation, though.

  • Turbonics@lemmy.sdf.org · ↑18 ↓3 · 10 hours ago

    BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline

  • Phoenicianpirate@lemm.ee · ↑7 · 10 hours ago

    I learned that AI chat bots aren’t necessarily trustworthy in everything. In fact, if you aren’t taking their shit with a grain of salt, you’re doing something very wrong.

    • Redex@lemmy.world · ↑5 ↓1 · 8 hours ago

      This is my personal take. As long as you’re careful and thoughtful whenever using them, they can be extremely useful.

  • underwire212@lemm.ee · ↑7 ↓2 · 9 hours ago

    News station finds that AI is unable to perform the job of a news station

    🤔

  • NutWrench@lemmy.world · ↑3 ↓1 · 8 hours ago

    But AI is the wave of the future! The hot, NEW thing that everyone wants! ** furious jerking off motion **

  • TroublesomeTalker@feddit.uk · ↑11 ↓6 · 12 hours ago

    But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.

  • Optional@lemmy.world · ↑37 ↓1 · 24 hours ago

    Turns out, spitting out words when you don’t know what anything means or what “means” means is bad, mmmmkay.

    It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

    It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

    Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

    Introduced factual errors

    Yeah, that’s . . . that’s bad. As in, not good. As in: it will never be good. With a lot of work and grinding it might be “okay enough” for some tasks some day. That’ll be another 200 billion, please.

    • chud37@lemm.ee · ↑2 · 2 hours ago

      That’s the core problem, though, isn’t it? They are just predictive-text machines that don’t understand what they are saying, yet we are treating them as if they were some amazing solution to all our problems.

      • Optional@lemmy.world · ↑1 · 2 hours ago

        Well, “we” aren’t, but there’s a hype machine in operation bigger than anything in history, because a few tech bros think they’re going to rule the world.

    • devfuuu@lemmy.world · ↑7 · edited · 13 hours ago

      I’ll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.

    • Rivalarrival@lemmy.today · ↑2 ↓7 · 14 hours ago

      It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

      How good are the human answers? I mean, I expect that an AI’s error rate is currently higher than that of an “expert” in their field.

      But I’d guess the AI is quite a bit better than, say, the average Republican.

      • Balder@lemmy.world · ↑1 · edited · 7 hours ago

        I guess you don’t get the issue. You give the AI some text and ask it to summarize the key points. The AI gives you wrong info in a percentage of those summaries.

        There’s no point in comparing this to a human, since this is usually done for automation, that is, to serve a lot of people or process a large quantity of articles. At best you can compare it to the automated summaries that existed before LLMs, which might not have all the info, but won’t make up random facts that aren’t in the article.

        • Rivalarrival@lemmy.today · ↑2 · 6 hours ago

          I’m more interested in the technology itself, rather than its current application.

          I feel like I am watching a toddler taking her first steps; wondering what she will eventually accomplish in her lifetime. But the loudest voices aren’t cheering her on: they’re sitting in their recliners, smugly claiming she’s useless. She can’t even participate in a marathon, let alone compete with actual athletes!

          Basically, the best AIs currently have college-level mastery of language, and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.

          • Balder@lemmy.world · ↑1 · edited · 6 hours ago

            It’s not that people simply decided to hate AI; it was the sensationalist media hyping it up so much that it scared people (“it’ll take all your jobs”), and companies shoving it down our throats by putting it in every product, even when it gets in the way of the functionality people actually want. Even my company “forces” us all to use X prompts every week as a sign of being “productive”. Literally every IT consultancy in my country has a ChatGPT wrapper they’re trying to sell, and they all think it makes them different. The result couldn’t have been any different: when something gets too much exposure it also gets a lot of hate, especially when it is forced on people.

    • desktop_user@lemmy.blahaj.zone · ↑2 ↓7 · 17 hours ago

      Alternatively: 49% had no significant issues and 81% had no factual errors. It’s not perfect, but it’s cheap, quick, and easy.

      • fine_sandy_bottom@discuss.tchncs.de · ↑2 · 12 hours ago

        If it doesn’t work, then quick, cheap, and easy is pointless.

        I’ll make you dinner every night for free, but one night a week it will make you ill. Maybe a little, maybe a lot.

      • Nalivai@lemmy.world · ↑4 · 16 hours ago

        It’s easy, it’s quick, and it’s free: pouring river water into your socks.
        Fortunately, there are other possible criteria.

      • fine_sandy_bottom@discuss.tchncs.de · ↑8 ↓1 · 12 hours ago

        I don’t necessarily dislike “AI” but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.

        Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.

      • WagyuSneakers@lemmy.world · ↑3 ↓3 · 8 hours ago

        I work in tech and can confirm that the vast majority of engineers “dislike AI” and are disillusioned with AI tools, even ones who work on AI/ML tools. It’s fewer and fewer people the higher up the pay scale you go.

        There isn’t a single complex coding problem an AI can solve. If you don’t understand something and it helps you write it, I’ll close the MR and delete your code, since it’s worthless. You have to understand what you write. I do not care if it works. You have to understand every line.

        “But I use it just fine and I’m an…”

        Then you’re not an engineer and you shouldn’t have a job. You lack the intelligence, dedication, and knowledge needed to be one. You are a detriment to your team and company.

        • Eheran@lemmy.world · ↑1 ↓2 · 6 hours ago

          “I can calculate powers with decimal values in the exponent and if you can not do that on paper but instead use these machines, your calculations are worthless and you are not an engineer”

          You seem to fail to see that this new tool has unique strengths. As the other guy said, it is just like people ranting about Wikipedia. Absurd.

          • WagyuSneakers@lemmy.world · ↑2 · 6 hours ago

            You can also just use an application designed to do that; it will do it more accurately.

            If you can’t do that, you’re not an engineer. If you don’t recommend that, you’re not an engineer.

    • MDCCCLV@lemmy.ca · ↑2 ↓5 · 18 hours ago

      Is it worse than the current system of editors writing shitty clickbait titles?

  • mentalNothing@lemmy.world · ↑57 · 1 day ago

    Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.

  • db0@lemmy.dbzer0.com · ↑78 ↓1 · 1 day ago

    As always, never rely on LLMs for anything factual. They’re only good for things with a high tolerance for error, such as entertainment (e.g., RPGs).

    • kboy101222@sh.itjust.works · ↑21 · 1 day ago

      I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn’t need that thing included.

      Sorry for being vague, I just didn’t want to post my home town on here

    • 1rre@discuss.tchncs.de · ↑10 · 1 day ago

      The issue for RPGs is that LLMs have such “small” context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later.

      Although, similar to how DeepSeek uses two stages (“how would you solve this problem”, then “solve this problem following this train of thought”), you could have an input of recent conversations plus a private/unseen “notebook” which is modified or appended to based on recent events (a rough sketch below). Doing that properly would need a whole new model, which likely wouldn’t be profitable short term, although I imagine the same infrastructure could be used for any LLM task where fine details over a long period matter more than specific wording, including factual things.
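
      A rough sketch of that notebook idea, assuming any OpenAI-compatible chat endpoint; the model name, prompts, and update strategy are placeholders, not a tested recipe:

      ```python
      # Hypothetical two-stage loop: answer the player, then fold events
      # into private notes that get re-read on the next turn.
      from openai import OpenAI

      client = OpenAI()          # point base_url at a local server if self-hosting
      MODEL = "some-chat-model"  # placeholder
      notebook = ""              # persistent GM notes, never shown to the player

      def gm_turn(player_input: str) -> str:
          global notebook
          # Stage 1: answer using the latest input plus the hidden notebook.
          reply = client.chat.completions.create(
              model=MODEL,
              messages=[
                  {"role": "system", "content": f"You are the GM. Private notes:\n{notebook}"},
                  {"role": "user", "content": player_input},
              ],
          ).choices[0].message.content
          # Stage 2: fold anything that might matter later back into the notes.
          notebook = client.chat.completions.create(
              model=MODEL,
              messages=[
                  {"role": "system", "content": "Update these campaign notes with any new facts. Return the full revised notes."},
                  {"role": "user", "content": f"Notes:\n{notebook}\n\nLatest exchange:\n{player_input}\n{reply}"},
              ],
          ).choices[0].message.content
          return reply
      ```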

      • db0@lemmy.dbzer0.com · ↑12 · 1 day ago

        The problem is that the “train of thought” is also hallucinations. It might make the model better with more compute, but it’s diminishing returns.

        RPGs can use LLMs because they’re not critical. If the LLM spews out nonsense you don’t like, you just ask it to redo, because it’s all subjective.

    • kat@orbi.camp · ↑5 · 1 day ago

      Or at least as an assistant in a field you’re an expert in. I love using it for boilerplate at work (tech).

    • Eheran@lemmy.world · ↑4 ↓4 · 1 day ago

      Nonsense, I use it a ton for science and engineering, it saves me SO much time!

      • Atherel@lemmy.dbzer0.com · ↑3 ↓1 · 1 day ago

        Do you blindly trust the output or is it just a convenience and you can spot when there’s something wrong? Because I really hope you don’t rely on it.

          • otp@sh.itjust.works · ↑6 ↓1 · 10 hours ago

            Y’know, a lot of the hate against AI seems to mirror the hate against Wikipedia, search engines, the internet, and even computers in the past.

            Do you just blindly believe whatever it tells you?

            It’s not absolutely perfect, so it’s useless.

            It’s all just garbage information!

            This is terrible for jobs, society, and the environment!

            • Eheran@lemmy.world · ↑4 · 6 hours ago

              You know what… now that you say it, it really is just like the anti-Wikipedia stuff.

          • Nalivai@lemmy.world · ↑2 ↓1 · 9 hours ago

            In which case you probably aren’t saving time. Checking bullshit usually takes longer and is harder than just researching the shit yourself. Or it should be, if you do due diligence.

            • Womble@lemmy.world · ↑2 ↓1 · 9 hours ago

              It’s nice that you inform people that they can’t tell whether something is saving them time without knowing what their job is or how they are using the tool.

              • WagyuSneakers@lemmy.world · ↑1 ↓1 · 7 hours ago

                If they think AI is working for them, then he can. If you think AI is an effective tool for any profession, you are a clown. If my son’s preschool teacher used it to make a lesson plan, she would be incompetent. If a plumber asked it what kind of wrench he needed, he would be kicked out of my house. If an engineer on one of my teams uses it to write code, he gets fired.

                AI “works” because you’re asking questions you don’t know the answer to, and it’s just putting words together so they make sense, without regard to accuracy. It’s a hard limit of “AI” that we’ve hit. It won’t get better in our lifetimes.

  • brucethemoose@lemmy.world · ↑28 ↓2 · edited · 1 day ago

    What temperature and sampling settings? Which models?

    I’ve noticed that the AI giants seem to be encouraging “AI ignorance”: they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

    I find my local thinking models (FuseAI, Arcee, or Deepseek 32B 5bpw at the moment) are quite good at summarization at a low temperature, which is not what these UIs default to, and I get to use better sampling algorithms than any of the corporate APIs. Same with “affordable” flagship API models (like base Deepseek, not R1). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

    My point is that LLMs as locally hosted tools you understand the mechanics/limitations of are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification and crypto-bro type hype in one package.
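
    As a rough illustration of the low-temperature point, here’s a sketch of a summarization call against a locally hosted, OpenAI-compatible server (llama.cpp, vLLM, etc.); the model name, port, and sampling values are illustrative assumptions:

    ```python
    # Hypothetical low-temperature summarization against a local
    # OpenAI-compatible server (e.g. llama.cpp or vLLM).
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
    article_text = open("article.txt").read()

    resp = client.chat.completions.create(
        model="local-thinking-model",   # placeholder
        temperature=0.2,                # low temp: favor faithful extraction
        extra_body={"min_p": 0.05},     # many local servers accept extra samplers
        messages=[
            {"role": "system", "content": "Summarize the article faithfully. Do not add facts."},
            {"role": "user", "content": article_text},
        ],
    )
    print(resp.choices[0].message.content)
    ```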

    • jrs100000@lemmy.world · ↑6 · edited · 19 hours ago

      They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn’t even note which versions of the other models were used.

    • 1rre@discuss.tchncs.de · ↑9 · 1 day ago

      I’ve found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7B model running on a home PC, or a model from two years ago, than a medium commercial model, in how it completely ignores what you ask it and just latches on to keywords. It’s almost like they’ve played with their tokenisation, or trained it exclusively for providing tech support where it links you to an irrelevant article or something.

      • brucethemoose@lemmy.world · ↑3 · edited · 1 day ago

        Gemini 1.5 used to be the best long context model around, by far.

        Gemini Flash Thinking from earlier this year was very good for its speed/price, but it regressed a ton.

        Gemini 1.5 Pro is literally better than the new 2.0 Pro in some of my tests, especially long-context ones. I dunno what happened there, but yes, they probably overtuned it or something.

    • paraphrand@lemmy.world · ↑7 · edited · 1 day ago

      I don’t think giving the temperature knob to end users is the answer.

      Turning it down for max correctness and low creativity won’t work in an intuitive way.

      Sure, turning it up from the balanced middle value will make it more “creative” and unexpected, and this is useful for idea generation, etc. But a knob that goes from “good” to “sort of off the rails, but in a good way” isn’t a great user experience for most people.

      Most people understand this stuff as intended to be intelligent. Correct. Etc. Or they at least understand that’s the goal. Once you give them a knob to adjust the “intelligence level,” you’ll have more pushback on these things not meeting their goals. “I clearly had it in factual/correct/intelligent mode, not creativity mode. I don’t understand why it left out these facts and invented a backstory to this small thing mentioned…”

      Not everyone is an engineer. Temp is an obtuse thing.

      But you do have a point about presenting these as cloud genies that will do spectacular things for you. This is not a great way to be executing this as a product.

      I loathe how these things are advertised by Apple, Google and Microsoft.

      • brucethemoose@lemmy.world · ↑5 · edited · 1 day ago

        • Temperature isn’t even “creativity” per se; it’s more a band-aid to patch looping and dryness in long responses.

        • Lower temperature is much better with modern sampling algorithms, e.g., MinP, DRY, maybe dynamic temperature like Mirostat and such. Ideally, structured output, too. Unfortunately, corporate APIs usually don’t offer this.

        • It can be mitigated with finetuning against looping/repetition/slop, but most models do the opposite, massively overtuning on their own output, which “inbreeds” the model.

        • And yes, domain-specific queries are best. Basically, the user needs separate prompt boxes for coding, summaries, creative suggestions, and such, each with its own tuned settings (and ideally tuned models). You are right, this is a much better idea than offering a temperature knob to the user, but… most UIs don’t even do this for some reason?

        What I am getting at is that this is not a problem companies seem interested in solving. They want to treat users as idiots without the attention span to even categorize their own question.
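
        For the curious, MinP itself is tiny. A sketch in plain NumPy (the threshold and temperature are just example values):

        ```python
        import numpy as np

        def min_p_sample(logits: np.ndarray, min_p: float = 0.05,
                         temperature: float = 0.7) -> int:
            """MinP: keep tokens whose probability is at least min_p times
            the top token's probability, renormalize, then sample."""
            scaled = logits / temperature
            probs = np.exp(scaled - scaled.max())  # numerically stable softmax
            probs /= probs.sum()
            keep = probs >= min_p * probs.max()    # cutoff scales with model confidence
            probs = np.where(keep, probs, 0.0)
            probs /= probs.sum()
            return int(np.random.choice(len(probs), p=probs))
        ```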

      • Eheran@lemmy.world · ↑1 · 1 day ago

        This is really a non-issue, as the LLM itself should have no problem setting a reasonable value itself. The user wants a summary? Obviously maximally factual. He wants gaming ideas? Etc. (A sketch below.)
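
        Something like this, assuming a cheap classification call picks the value before the real request; the model name, categories, and temperatures are all placeholders:

        ```python
        # Hypothetical self-routing: classify the request, then map to a temperature.
        from openai import OpenAI

        client = OpenAI()
        TEMPS = {"summary": 0.2, "factual": 0.3, "brainstorm": 1.0}

        def answer(prompt: str) -> str:
            category = client.chat.completions.create(
                model="router-model",  # placeholder
                temperature=0.0,
                messages=[{"role": "user", "content":
                    f"Classify as one word (summary/factual/brainstorm): {prompt}"}],
            ).choices[0].message.content.strip().lower()
            return client.chat.completions.create(
                model="router-model",  # placeholder
                temperature=TEMPS.get(category, 0.7),  # middle value as fallback
                messages=[{"role": "user", "content": prompt}],
            ).choices[0].message.content
        ```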

        • brucethemoose@lemmy.world · ↑2 · edited · 1 day ago

          For local LLMs, this is an issue because it breaks your prompt cache and slows things down, without a specific tiny model to “categorize” text… which few have really worked on.

          I don’t think the corporate APIs or UIs even do this. You are not wrong, but it’s just not done for some reason.

          It could be that the trainers don’t realize it’s an issue. For instance, “0.5–0.7” is the recommended range for Deepseek R1, but I find much lower or slightly higher is far better, depending on the category and other sampling parameters.

    • Eheran@lemmy.world · ↑3 ↓6 · 1 day ago

      It’s rare that people argue for LLMs like that here; usually it’s the same kind of “uga suga, AI bad, did not already solve world hunger”.

      • Nalivai@lemmy.world · ↑1 · 9 hours ago

        What a nuanced representation of the position; I can just feel the trustworthiness oozing out of the screen.
        In case you’re using a random-word-generation machine to summarise this comment for you: that was sarcasm, and I meant the opposite.

      • brucethemoose@lemmy.world · ↑4 ↓1 · edited · 1 day ago

        Lemmy is understandably sympathetic to self-hosted AI, but I get chewed out or even banned literally anywhere else.

        In one fandom (the Avatar fandom), there used to be enthusiasm for a “community enhancement” of the original show, since the official DVD/Blu-ray looks awful. Years later, in a new thread, I didn’t even mention the word “AI”, just the idea of restoration, and I got bombed and threadlocked for the mere tangential implication.