US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) with those of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). Perhaps most striking, 76 percent of experts believe these technologies will benefit them personally, while only 15 percent expect to be harmed.

The public does not share this confidence. Only about 11 percent of the public say they are “more excited than concerned about the increased use of AI in daily life.” They are much more likely (51 percent) to say they are more concerned than excited, a pessimism only 15 percent of experts share. And unlike the majority of experts, just 24 percent of the public think AI will be good for them, while nearly half anticipate being personally harmed by it.

  • pjwestin@lemmy.world · +17 · 2 hours ago

    Maybe that’s because every time a new AI feature rolls out, the product it’s improving gets substantially worse.

    • MangoCats@feddit.it · +7 · 2 hours ago

      Maybe that’s because they’re using AI to replace people, and the AI does a worse job.

      Meanwhile, the people are also out of work.

      Lose - Lose.

      • null_dot@lemmy.dbzer0.com · +2 · 9 minutes ago

        Even if you’re not “out of work”, your work becomes more chaotic and less fulfilling in the name of productivity.

        When I started 20 years ago, you could round out a long day with a few hours of mindless data entry or whatever. Not anymore.

        A few years ago I could talk to people or maybe even write a nice email communicating a complex topic. Now ChatGPT writes the email and I check it.

        It’s just shit, honestly. I’d rather weave baskets and die at 40 of a tooth infection than spend an additional 30 years wallowing in self-loathing and despair.

      • pjwestin@lemmy.world · +1 · 10 minutes ago

        It didn’t even need to take someone’s job. A summary of an article or paper with hallucinated information isn’t replacing anyone, but it’s definitely making search results worse.

    • AvailableFill74@lemmy.ml · +1/−8 · 1 hour ago

      Maybe it’s because the American public are shortsighted idiots who don’t understand that future outcomes are based on present decisions.

  • IndiBrony@lemmy.world · +24 · 5 hours ago

    The first thing seen at the top of WhatsApp now is an AI query bar. Who the fuck needs anything related to AI on WhatsApp?

    • alphabethunter@lemmy.world · +4 · 5 hours ago

      Right?! It’s literally just a messenger; honestly, all I expect from it is an easy and reliable way to send messages to my contacts. Anything else is questionable.

  • dylanmorgan@slrpnk.net · +36 · 8 hours ago

    It’s not really a matter of opinion at this point. What is available has little if any benefit to anyone who isn’t trying to justify rock bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.

    • doodledup@lemmy.world · +5/−21 · 6 hours ago

      Everyone gains from progress. We’ve had the same discussion over and over again: when the first sewing machines came along, when the steam engine was invented, when the internet became a thing. Some people will lose their jobs every time progress is made. But being against progress for that reason is just stupid.

      • function IsOdd():@lemmy.world · +1 · 2 minutes ago

        Everyone gains from progress.

        It’s only true in the long-term. In the short-term (at least some) people do lose jobs, money, and stability unfortunately

      • MangoCats@feddit.it · +6/−1 · 2 hours ago

        being against progress for that reason is just stupid.

        Under the current economic model, being against progress is just self-preservation.

        Yes, we could all benefit from AI in some glorious future that doesn’t see the AI displaced workers turned into toys for the rich, or forgotten refuse in slums.

      • Melobol@lemmy.ml · +7 · 3 hours ago

        I’m not sure at this point. The sewing machine just automated stitching. This is more like photography versus landscape painters, only worse.
        With generative AI, most visual-art work went from “I’ll pay $20K and wait 30 days for this project” to “I’ll pay $100 for AI to do it.” Soon doctors, therapists and teachers will be looking down the barrel: “Why pay $150 for one therapy session when I can have an AI friend for $20 a month?”
        In the past you could retrain yourself to use a sewing machine, or learn to operate cameras and develop photos. Now I don’t even have any idea where this goes.

        • MangoCats@feddit.it · +2 · 2 hours ago

          Machine stitching is objectively worse than hand stitching, but… it’s good enough and so much more efficient, so that’s how things are done now; it has become the norm.

        • doodledup@lemmy.world · +2/−7 · 3 hours ago

          AI is changing the landscape of our society. It’s only “destroying” society if that’s your definition of change.

          But fact is, AI makes every aspect where it’s being used a lot more productive and easier. And that has to be a good thing in the long run. It always has.

          Instead of holding against progress (which is impossible to do for long) you should embrace it and go from there.

          • GenosseFlosse@feddit.org · +2 · 1 hour ago

            I use AI for programming questions, because it’s easier than digging for an hour through the official docs (if they exist) and through frustrating trial and error.

            However, quite often the AI answers are wrong: inserting nonsense code, using for where foreach is needed, or trying to access variables that are not always set.

            Yes, it helps, but it’s usually only about 60% right.
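The “variables that are not always set” failure mode described above is easy to reproduce. A minimal, purely hypothetical Python sketch of the pattern an assistant might emit, next to a guarded version:

```python
# Hypothetical illustration: AI-suggested code that assumes a field is always
# present, versus a version that guards against the missing case.

def total_cost_buggy(items):
    """Assumes every item dict has a 'price' key; raises KeyError otherwise."""
    total = 0
    for item in items:
        total += item["price"]  # blows up when 'price' is missing
    return total

def total_cost_safe(items):
    """Treats a missing 'price' as zero instead of crashing."""
    return sum(item.get("price", 0) for item in items)

items = [{"name": "widget", "price": 3}, {"name": "gadget"}]  # no price on the second
print(total_cost_safe(items))  # 3
```

The two functions differ only in how they handle the absent key, which is exactly the kind of edge case the generated code tends to skip.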

          • MangoCats@feddit.it · +2 · 2 hours ago

            AI makes every aspect where it’s being used a lot more productive and easier.

            AI makes every aspect where it’s being used well a lot more productive and easier.

            AI used poorly makes it a lot easier to produce near worthless garbage, which effectively wastes the consumers’ time much more than any “productivity gained” on the producer side.

      • 7toed@midwest.social · +2 · 3 hours ago

        And as someone who has extensively set up such systems on their home server… yeah, it’s a great Google Home replacement, nothing more. It’s beyond useless in Power Automate, which I use (unwillingly) at my job. Copilot can’t even parse and match items from two lists. Despite my company trying its damn best to encourage “our own” AI (ChatGPT Enterprise), nobody I have talked with has found a use.

        • MangoCats@feddit.it · +1 · 1 hour ago

          AI search is occasionally faster and easier than slogging through the source material that the AI was trained on. The source material for programming is pretty weak itself, so there’s an issue.

          I think AI has a lot of untapped potential, but it’s going to be a VERY long time before people who don’t know how to ask for what they want will be able to communicate that to an AI.

          A lot of programming today gets value from the programmers guessing (correctly) what their employers really want, while ignoring the asks that are impractical / counterproductive.

        • doodledup@lemmy.world · +1/−4 · 3 hours ago

          You’re using it wrong then. These tools are so incredibly useful in software development and scientific work. Chatgpt has saved me countless hours. I’m using it every day. And every colleague I talk to agrees 100%.

          • MangoCats@feddit.it · +2 · 1 hour ago

            If you were too lazy to read three Google search results before, yes… AI is amazing in that it shows you something you ask for without making you dig as deep as you used to have to.

            I rarely get a result from ChatGPT that I couldn’t have skimmed for myself in about twice to five times the time.

            I frequently get results from ChatGPT that are just as useless as what I find reading through my first three Google results.

          • sugar_in_your_tea@sh.itjust.works · +2 · 2 hours ago

            Then you must know something the rest of us don’t. I’ve found it marginally useful, but it leads me down useless rabbit holes more than it helps.

            • MangoCats@feddit.it · +1 · 1 hour ago

              I’m about 50/50 between helpful results and “nope, that’s not it, either” out of the various AI tools I have used.

              I think it very much depends on what you’re trying to do with it. As a student, or a fresh-grad employee in a typical field, it’s probably much more helpful because you are working well-trodden ground.

              As a PhD or other leading-edge researcher, possibly in a field without a lot of publications, you’re screwed as far as the really inventive stuff goes. But if you’ve read “Surely You’re Joking, Mr. Feynman!”, there’s a bit where the Manhattan Project researchers (definitely breaking new ground at the time) needed basic stuff, like gears, for what they were doing, and the gear catalogs of the day told them a lot of what they needed to know. Per the text: if you’re making something that needs gears, pick them from the catalog, but avoid the largest and smallest of each family/table; those sit at the edges because the next size up or down runs into some kind of engineering problem, so stay away from the edges and you should get much more reliable results. That’s an engineer’s shortcut for tapping thousands, maybe millions, of man-years of prior gear research, development and engineering just by referencing a catalog.

          • 7toed@midwest.social · +1 · 2 hours ago

            I’ll admit my local model has given me some insight, but when researching further, I often find the source it likely spat it out from. That’s helpful, but I feel as though, if my normal search experience weren’t so polluted with AI-written regurgitation of the next result down, I would’ve found that nice primary source anyway. One example was a code block that computes the inertial moment of each rotational axis of a body. You can try searching for sources and compare what it puts out.
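For context on what that kind of snippet involves (this is a generic sketch, not the poster's actual code): the moments of inertia about the three coordinate axes of a body modeled as point masses are I_x = Σ m(y²+z²), I_y = Σ m(x²+z²), I_z = Σ m(x²+y²).

```python
# Generic sketch: moments of inertia about each rotational axis for a rigid
# body approximated as point masses, each given as (mass, x, y, z).

def inertia_moments(masses):
    """Return (Ix, Iy, Iz) for a list of (m, x, y, z) point masses."""
    ix = sum(m * (y * y + z * z) for m, x, y, z in masses)  # about the x-axis
    iy = sum(m * (x * x + z * z) for m, x, y, z in masses)  # about the y-axis
    iz = sum(m * (x * x + y * y) for m, x, y, z in masses)  # about the z-axis
    return ix, iy, iz

# A 2 kg point mass 1 m up the y-axis contributes only to Ix and Iz.
print(inertia_moments([(2.0, 0.0, 1.0, 0.0)]))  # (2.0, 0.0, 2.0)
```

Comparing what a model emits against a textbook formulation like this makes the hallucinated terms easy to spot.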

            If you have more insight into which tools (especially ones I can run locally) would improve my impression, I would love to hear it. However, my opinion remains that AI has been a net negative on the internet as a whole (spam, bots, scams, etc.) so far, and it certainly has not, and probably will not, live up to the hype forecast by the CEOs.

            Also, if you can get access to Power Automate, or at least generally know how it works: Copilot can only add nodes, seemingly in the general order you specify, but does not connect the dataflow between the nodes (the hardest part) whatsoever. Sometimes it will parse the dataflow connections and return what you were searching for (i.e., a specific formula used in a large dataflow), but little of that really needs AI.

            • MangoCats@feddit.it · +1 · 1 hour ago

              I think a lot depends on where “on the curve” you are working, too. If you’re out past the bleeding edge doing new stuff, ChatGPT is (obviously) going to be pretty useless. But, if you just want a particular method or tool that has been done (and published) many times before, yeah, it can help you find that pretty quickly.

              I remember doing my Master’s thesis in 1989; it took me months of research and journals delivered via inter-library loan before I found mention of other projects doing essentially what I was doing. With today’s research landscape, that multi-month delay should be compressed to a couple of hours, frequently less.

              If you haven’t read Melancholy Elephants, it’s a great reference point for what we’re getting into with modern access to everything:

              https://www.spiderrobinson.com/melancholyelephants.html

  • TommySoda@lemmy.world · +95/−2 · 12 hours ago

    If it was marketed and used for what it’s actually good at this wouldn’t be an issue. We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people’s jobs easier and achieve better results. I understand its uses and that it’s not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.

    • MangoCats@feddit.it · +1 · 1 hour ago

      We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors.

      That’s an opinion, and one I share in the vast majority of cases, but there’s a lot of art work that AI really can do “good enough” for the purpose, and we should be freeing up the human artists to do the more creative work. Writers: if AI is turning out acceptable copy (which in my experience it almost never does so far, but hypothetically, eventually), why use human writers for that? And so on down the line.

      The problem is that capitalism and greedy CEOs are hyping the technology as the next big thing, looking for a big boost in their share price this quarter, not being realistic about how long it’s really going to take to achieve the things they’re hyping.

      “Artificial Intelligence” has been 5-10 years off for 40 years. We have seen amazing progress in the past 5 years as compared to the previous 35, but it’s likely to be 35 more before half the things being touted as “here today” are actually working at a positive ROI. There are going to be more than a few more examples like the “smart grocery store” where you just put things in your basket and walk out and get charged “appropriately”, supposedly based on AI surveillance, but really mostly powered by low-cost labor somewhere else on the planet.

    • faltryka@lemmy.world · +20 · 11 hours ago

      The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.

      • ferb@sh.itjust.works · +14/−1 · 11 hours ago

        This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour workweek utopia. These AI Silicon Valley fanatics are the same ones saying that basic social welfare programs are naive and un-implementable - so why would they suddenly change their entire perspective on life?

        • AceofSpades@lemmy.ca · +1 · 3 hours ago

          This vision of the AI making everything easier always leaves out the part where nobody has a job as a result.

          Sure, you can relax on a beach; you have all the time in the world now that you are unemployed. The disconnect is mind-boggling.

          • MangoCats@feddit.it · +1 · 1 hour ago

            Universal Basic Income: it’s either that or just kill all the unnecessary poor people.

      • Pennomi@lemmy.world · +5 · 11 hours ago

        Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.

        For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.

        And if you run the AI locally, you can also be free of paying a subscription to a big AI company.

        • einkorn@feddit.org · +2 · 11 hours ago

          Except no employer will allow you to use your own AI model. Just as you can’t bring your own work equipment (which in many regards is even a good thing), companies will force you to use their specific type of AI for your work.

          • MangoCats@feddit.it · +1 · 59 minutes ago

            No big employer… there are plenty of smaller companies who are open to do whatever works.

          • Pennomi@lemmy.world · +3 · 10 hours ago

            Presumably “small business” means self-employed or other employee-owned company. Not the bureaucratic nightmare that most companies are.

    • count_dongulus@lemmy.world · +11/−1 · 11 hours ago

      Maybe pedantic, but:

      Everyone seems to think CEOs are the problem. They are not. They report to and get broad instruction from the board. The board can fire the CEO. If you got rid of a CEO, the board will just hire a replacement.

      • Zorque@lemmy.world · +13 · 8 hours ago

        And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.

        The problems are systemic, not individual.

        • MangoCats@feddit.it · +1 · 55 minutes ago

          Shareholders only care about the value of their shares increasing. It’s a productive arrangement, up to a point, but we’ve gotten too good at ignoring and externalizing the human, environmental, and long term costs in pursuit of ever increasing shareholder value.

    • MangoCats@feddit.it · +1 · 55 minutes ago

      Al Gore’s family thought that the political tide was turning against it, so they gave up tobacco farming in the late 1980s - and focused on politics.

    • WhatAmLemmy@lemmy.world · +15 · edited · 12 hours ago

      More like asking the slaves about productivity advances in slavery. “Nothing good will come of this”.

        • MangoCats@feddit.it · +1 · 53 minutes ago

          The cotton gin has been used as an argument for why slavery finally became unacceptable. Until then society “needed” slaves to do the work, but with the cotton gin and other automation the costs of slavery started becoming higher than the value.

          • CarnivorousCouch@lemmy.world · +1 · 43 minutes ago

            My understanding is that the cotton gin led to more slavery as cotton production became more profitable. The machine could process cotton but not pick it, so more hands were needed for field work.

            Wiki:

            The invention of the cotton gin caused massive growth in the production of cotton in the United States, concentrated mostly in the South. Cotton production expanded from 750,000 bales in 1830 to 2.85 million bales in 1850. As a result, the region became even more dependent on plantations that used black slave labor, with plantation agriculture becoming the largest sector of its economy.[35] While it took a single laborer about ten hours to separate a single pound of fiber from the seeds, a team of two or three slaves using a cotton gin could produce around fifty pounds of cotton in just one day.[36] The number of slaves rose in concert with the increase in cotton production, increasing from around 700,000 in 1790 to around 3.2 million in 1850.

  • moonlight@fedia.io · +16/−1 · 12 hours ago

    Depends on what we mean by “AI”.

    Machine learning? It’s already had a huge effect, drug discovery alone is transformative.

    LLMs and the like? Yeah I’m not sure how positive these are. I don’t think they’ve actually been all that impactful so far.

    Once we have true machine intelligence, then we have the potential for great improvements in daily life and society, but that entirely depends on how it will be used.

    It could be a bridge to post-scarcity, but under capitalism it’s much more likely it will erode the working class further and exacerbate inequality.

    • MangoCats@feddit.it · +2 · 51 minutes ago

      Machine learning? It’s already had a huge effect, drug discovery alone is transformative.

      Machine learning is just large-scale automated optimization, something that was done for many decades before; the hardware finally reached a point where automated searches started out-performing more informed selective searches.

      The same way AlphaZero got better at chess than Deep Blue: it just steamrollered the problem with raw power.

    • Pennomi@lemmy.world · +2 · 11 hours ago

      As long as open source AI keeps up (it has so far) it’ll enable technocommunism as much as it enables rampant capitalism.

      • moonlight@fedia.io · +5 · 10 hours ago

        I considered this, and I think it depends mostly on ownership and means of production.

        Even in the scenario where everyone has access to superhuman models, that would still lead to labor being devalued. When combined with robotics and other forms of automation, the capitalist class will no longer need workers, and large parts of the economy would disappear. That would create a two tiered society, where those with resources become incredibly wealthy and powerful, and those without have no ability to do much of anything, and would likely revert to an agricultural society (assuming access to land), or just propped up with something like UBI.

        Basically, I don’t see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don’t think it will do much to get us there.

        • MangoCats@feddit.it · +2 · 46 minutes ago

          It would still require a revolution.

          I would like to believe that we could have a gradual transition without the revolution being needed, but… present political developments make revolution seem more likely.

        • MangoCats@feddit.it · +2 · 47 minutes ago

          or just propped up with something like UBI.

          That depends entirely on how much UBI is provided.

          I envision a “simple” taxation system with UBI + flat tax. You adjust the flat tax high enough to get the government services you need (infrastructure like roads, education, police/military, and UBI), and you adjust the UBI up enough to keep the wealthy from running away with the show.
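The arithmetic behind that UBI-plus-flat-tax scheme is simple enough to sketch (the rate and payment figures here are purely illustrative assumptions, not values from the comment):

```python
# Illustrative numbers only: net income under a flat tax plus UBI.
# Everyone receives the same UBI and all earned income is taxed at one flat
# rate, so the effective rate is progressive: negative for low earners,
# approaching the flat rate for high earners.

UBI = 12_000.0   # assumed annual payment to every adult
FLAT = 0.30      # assumed flat tax rate on earned income

def net_income(gross):
    """After-tax income plus the universal payment."""
    return gross * (1 - FLAT) + UBI

# Break-even where tax paid equals UBI received: gross = UBI / FLAT.
print(net_income(0))        # 12000.0  (floor set by UBI)
print(net_income(40_000))   # 40000.0  (break-even: tax paid equals UBI)
print(net_income(100_000))  # 82000.0  (net contributor)
```

Tuning the two knobs moves the floor and the break-even point, which is the "adjust the UBI up enough" lever the comment describes.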

          Marshall Brain envisioned an “open source” based property system that’s not far off from UBI: https://marshallbrain.com/manna

        • FourWaveforms@lemm.ee · +2 · 1 hour ago

          It will only help us get there in the hands of individuals and collectives. It will not get us there, and will be used to the opposite effect, in the hands of the 1%.

  • Sibshops@lemm.ee · +10 · 11 hours ago

    No surprise there. We just went through how blockchain is going to drastically help our lives in some unspecified future.

  • snooggums@lemmy.world · +9 · 12 hours ago

    Experts are working from their perspective, which involves being employed to know the details of how the AI works and its potential benefits. They are also invested in it being successful, since they spent the time gaining that expertise. I would guess a number of them work in fields that are not easily visible to the public, and use AI systems in ways the public never will, because they are focused on things like pattern recognition on viruses, or identifying locations to excavate for archaeology, that always end with a human verifying the results. They use AI as a tool and see the indirect benefits.

    The general public’s experience is being told AI is a magic box that will be smarter than the average person, has made some flashy images, and sounds more like a person than previous automated voice systems. They see it spit out a bunch of incorrect or incoherent answers because they are using it the way it was promoted: as actually intelligent. They also see this unreliable tech being jammed into things that worked fine previously, and the negative outcome of the hype not meeting its promises. They reject it because the way it is being pushed onto the public does not meet the expectations set by the advertising.

    That is before the public is being told that AI will drive people out of their jobs, which is doubly insulting when it does a shitty job of replacing people. It is a tool, not a replacement.

  • carrion0409@lemm.ee · +7 · edited · 12 hours ago

    Because it won’t. So far it’s only been used to replace people and cut costs. If it were used for what it was actually intended for then it’d be a different story.

    • doodledup@lemmy.world · +2/−6 · edited · 6 hours ago

      Replacing people is a good thing. It means fewer people doing more work. It means progress. It means products and services will get cheaper and more available. The fact that people are being replaced means that AI actually has tremendous value for our society.

      • stardust@lemmy.ca · +4 · 4 hours ago

        Great for the people getting fired, or finding that the middle-class jobs they used to have now pay lower-class wages or are obsolete. They will be so delighted at the progress despite their salaries, benefits, and opportunities falling.

        And it’s so nice that AI is most concentrated in the hands of billionaires who are oh so generous with improving living standards of the commoners. Wonderful.

        • doodledup@lemmy.world · +1/−4 · 3 hours ago

          This is collateral damage of societal progress. This is a phenomenon as old as humanity. You can’t fight it. And it has brought us to where we are now. From cavemen to space explorers.

          • mriormro@lemm.ee · +1 · 29 minutes ago

            Oh hey, it’s the Nazi apologist. Big shock you don’t give a fuck about other people’s lives.

          • stardust@lemmy.ca · +3 · edited · 2 hours ago

            Which are separate things from people’s ability to financially support themselves.

            People can have smartphones and tech the past didn’t have, but be increasingly worse off financially and unable to afford housing.

            And you aren’t a space explorer.

            I’m not arguing about whether innovation is cool. It is.

            I however strongly disagree with your claim that people being replaced is good. That assumes society is being guided with altruism as a cornerstone of motivation to create some Star Trek future to free up people to pursue their interests, but that’s a fantasy. Innovation is simply innovation. It’s not about whether people’s lives will be improved. It doesn’t care.

            The world can be the most technologically advanced it’s ever been, with space travel for the masses, and still be a totalitarian dystopia. People could be poorer than ever, reduced to corpo slaves, and it would still fit under the definition of societal progress because of innovation.

  • CosmoNova@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    4
    ·
    12 hours ago

    AI is mainly a tool for the powerful to oppress the lesser blessed. I mean, cutting actual professionals out of the process to let CEOs' wildest dreams go unchecked already has devastating consequences, if rumors are to be believed that some kids using ChatGPT cooked up those massive tariffs that have already erased trillions.

  • artificialfish@programming.dev
    link
    fedilink
    English
    arrow-up
    4
    ·
    11 hours ago

    Lol they get a capable chatbot that blows everything out of the water and suddenly they are like “yeah, this will be the last big thing”

    • pinball_wizard@lemmy.zip
      link
      fedilink
      English
      arrow-up
      7
      ·
      11 hours ago

      Every technology shift creates winners and losers.

      There’s already documented harm from algorithms making callous biased decisions that ruin people’s lives - an example is automated insurance claim rejections.

      We know that AI is going to bring algorithmic decisions into many new places where it can do harm. AI adoption is currently on track to get to those places well before the most important harm reduction solutions are mature.

      We should take care that we do not gaslight people who will be harmed by this trend, by telling them they are better off.

      • Womble@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        10 hours ago

        Translations apps would be the main one for LLM tech, LLMs largely came out of google’s research into machine translation.

  • PunkRockSportsFan@fanaticus.social
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    10
    ·
    12 hours ago

    The amount of failed efforts the ruling class has made to corner ai shows me that it is a democratizing force.

    I reap benefits from it already.

    I can create local models with zero involvement from billionaires.

    It scares them more than us.

    And it should. It shows how evil they are. It's objectively true. AI knows it.

    • nadram@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      arrow-down
      1
      ·
      12 hours ago

      But you're using these billionaires' AI models, are you not? Even if you use the free models, they still benefit from your profile and query data.

        • mesa@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          1
          ·
          12 hours ago

          Yep, you can run models without giving $$ to tech billionaires!

          Now we are giving it to the power billionaires! Unless you own your own power sources.

            • mesa@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              11 hours ago

              Meh, I like some of the others on Hugging Face a bit more for coding and such. But it's all the same at the end of the day. I do like what you are saying though!

              Models + moderate power should be what we strive for. I'm hoping for a Star Trek ending where we live in a post-scarcity world. I'm planning on a post-apocalypse haha.

              Once ASIC chips come out (essentially a specific model on a chip), the amount of power we use will be dramatically less.

                • mesa@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  edit-2
                  6 hours ago

                  It's an interesting field! I think the reason we have not gone there is that the LLM-specific models all have very different models/languages/etc… right now. So the algorithms that create and use them need flexibility. GPUs are very flexible with what they can do with multiprocessing.

                  But in 5 years' time (or less), I can see a black-box kind of system that runs 1000x+ faster and makes GPU LLMs obsolete. All the new GPU farm places that are popping up will have a rude awakening lol.

        • einkorn@feddit.org
          link
          fedilink
          English
          arrow-up
          3
          ·
          12 hours ago

          Uhm, I guess you missed the news when it was revealed that Deepseek had a little more backing than they claimed.

    • SeeMarkFly@lemmy.ml
      link
      fedilink
      English
      arrow-up
      4
      ·
      12 hours ago

      There is a BIG difference between what you can do and what you should do.

      We have ZERO understanding on the long term effects this new technology will have on our civilization.

      Why is everybody so eager to go “all in”?