(They/Them) I like TTRPGs, history, (audio and written) horror and the history of occultism.

  • 0 Posts
  • 6 Comments
Joined 6 months ago
Cake day: January 24th, 2025


  • I understand. I grew up a fundamentalist Pentecostal. It’s taken a lot of time and growth to move past that, and I’ve been an ass and had to make up for it.

    My problem is mostly that celebrities have a lot of influence and power that they don’t treat with the proper level of respect. If you have an audience of millions, you should consider the example you set. It’s part of the price of choosing to be a celebrity as your job.

    This guy is responsible for contributing to a lot of cultural miasma, and making up for that takes more effort than apologizing. It requires actual growth and an effort to make amends. You have to not just change, but try to fix the things you broke and help the people you hurt.

    A lot of celebrities will apologize performatively but then do nothing, and that’s really annoying.

  • I’m not sure why so many people begin this argument on solid ground and then hurl themselves off into a void of semantics and assertions that can’t be verified.

    Saying, “Oh, it’s not intelligent because it doesn’t have senses,” shifts your argument to proving that senses are a prerequisite for intelligence.

    The problem is that an LLM isn’t made to do cognition. It isn’t made for analysis. It’s made to generate coherent human speech. It’s an incredible tool for doing that! Simply astounding, and an excellent example of how well a trained model can adapt to a task.

    It’s ridiculous that we managed to get a probabilistic software tool which generates natural language responses so well that we find it difficult to distinguish them from real human ones.

    …but it’s also an illusion with regard to consciousness and comprehension. An LLM can’t understand things for the same reason your toaster can’t heat up a can of soup: it isn’t built for that, but it presents an excellent illusion of doing so. The companies making these tools benefit from the fact that we anthropomorphize things, which lets them straight up lie about what their programs can do, because it takes real work to prove they can’t.

    Average customers will engage with an LLM as if it were doing a Google search, reading the various articles, and then summarizing them, even though it’s actually just completing the prompt you provided (there’s a minimal sketch of what that means at the end of this comment). The proper way to respond to a question is with an answer, so it always will, unless a hard-coded limit overrides that. There will never be a way to make an LLM that won’t create fictitious answers to questions, because it can’t tell the difference between truth and fantasy. That’s all just part of its training data on how to respond to people.

    I’ve gotten LLMs to invent books, authors, and citations when asking them to discuss historical topics with me. That’s not a sign of awareness; it’s proof that the model is doing exactly what it’s intended to do, which is the problem, because it’s being marketed as something that could replace search engines and online research.
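
    For what it’s worth, here’s a minimal sketch of what “completing the prompt” means in practice. It assumes the Hugging Face transformers library and the small gpt2 model, which is an arbitrary choice for illustration rather than any particular product:

        # Toy demonstration: a causal language model only continues text.
        # It has no notion of whether a confident-sounding answer is true.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")

        # A question whose premise is fictitious.
        prompt = "Q: Which 17th-century grimoire discusses toasters?\nA:"
        result = generator(prompt, max_new_tokens=40, do_sample=True)

        # The model samples likely tokens after "A:", so it produces a fluent,
        # confident continuation whether or not the premise is real.
        print(result[0]["generated_text"])

    The specific library doesn’t matter; any text-generation model behaves the same way, because “answer the question truthfully” isn’t the objective, “continue the text plausibly” is.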