queermunist she/her

/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!

  • 2 Posts
  • 647 Comments
Joined 2 years ago
Cake day: July 10th, 2023

  • What is the reason you think philosophy of the mind exists as a field of study?

    In part, so we don’t assign intelligence to mindless, unaware, unthinking things like slime mold. It keeps our definitions clear and useful, so we can communicate about and understand what intelligence even is.

    What you’re doing actually creates an unclear and useless definition that makes communication harder and spreads misunderstanding. Your definition of intelligence, which is the one the AI companies use, has made people more confused than ever about “intelligence,” and it only serves those companies’ interest in generating hype and attracting investor cash.

  • The Federal government is going to draw up and execute its budget as if they had the funds anyway.

    That’s what I was getting at when I pointed out that the feds print their own money. The federal government would be fine.

    I was just running the thought experiment of “would this even do anything” under the assumption that it succeeded. And it’s a fantasy on its face; even in the best-case scenario it accomplishes nothing.

    And the best case isn’t what would happen, because ultimately you’re right and this won’t be allowed. The federal government has a lot of mechanisms to compel the states to pay their dues.

  • My understanding is that the reason LLMs struggle with math and logic problems is that those have definite answers, not probabilistic ones. That seems pretty fundamentally different from humans! We do have a tendency to assign too much certainty to things that are actually probabilistic, which leads to its own reasoning errors. But we can also correctly identify actual truths, prove them through induction and deduction, and then hold onto those truths forever and use them to learn even more.

    We certainly do probabilistic reasoning, but we also do axiomatic reasoning, i.e. we are more than probability engines.

  • So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    What? No.

    Chatbots can’t think because they literally aren’t designed to think. If you somehow gave a chatbot a body, it would be just as mindless, because it’s still just a probability engine.