In fact, Mexico could rename Texas to Nuevo Mexico just to show up in the parentheses on the maps.
It’s not that people simply decided to hate on AI; it was the sensationalist media hyping it up to the point of scaring people (“it’ll take all your jobs”), and companies shoving it down our throats by putting it in every product, even when it gets in the way of the functionality people actually want to use. Even my company “forces” us all to use X prompts every week as a sign of being “productive”. Literally every IT consultancy in my country has a ChatGPT wrapper they’re trying to sell, and they all think that makes them different. The result couldn’t have been any different: when something gets too much exposure, it also gets a lot of hate, especially when it’s forced on people.
That’s just a quirk of the English language. In Brazil we also call them “North Americans” instead of Americans, because “American” refers to all the countries and peoples of the Americas.
I guess you don’t get the issue. You give the AI some text to summarize the key points. The AI gives you wrong info in a percentage of those summaries.
There’s no point comparing this to a human, since summarization like this is usually done for automation, that is, to serve a lot of people or process a large quantity of articles. At best you can compare it to the automated summarizers that existed before LLMs, which might not capture all the info, but won’t make up random facts that aren’t in the article.
For reference:
AI chatbots unable to accurately summarise news, BBC finds
> the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer. […] It found 51% of all AI answers to questions about the news were judged to have significant issues of some form. […] 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
It reminds me that I basically stopped using LLMs for any summarization after this exact thing happened to me. I realized that without reading the text myself, I couldn’t know whether the output had all the relevant info or whether it included something made up.
Just today I was watching this video:
What I mean is that making an app usable on a mobile phone with a portrait screen is a whole different world from a website designed for a big screen. Many of the remaining forums I’ve seen were built for a different time, with outdated designs and poor usability on a vertical screen.
Now, I’ve seen some, like the Swift and Rust forums, that do look good on mobile: simple and aesthetically pleasing.
As for apps, they’re indeed not necessary, but for many services they’re an assurance that the usability was designed for that environment. For example, the only reason I enjoy browsing Lemmy is the Voyager app, which resembles the defunct Apollo for Reddit and copied all of its good iOS usability. If it weren’t for the apps people built for Lemmy, I probably wouldn’t have much drive to come back to it often.
They were good, but are there good forum platforms nowadays that are mobile friendly, have apps, etc.?
I mean, this post makes no valid argument against JavaScript; there are no benchmarks or anything beyond an opinion.
I don’t personally like webdev and don’t like to code in JavaScript, but there are good and bad web applications out there, just like with any software.
> A single page can send out hundreds or even thousands of API requests just to load, eating up CPU and RAM.
The author seems to know the real problem, so I don’t know why they’re blaming it on JavaScript.
Don’t let Trump find out this State exists.