When I started angel investing in the late 1990s, a tech investment carried real technology risk, and the potential upside was groundbreaking innovation. Being an investor then meant taking that risk and betting on actual tech: nanotech, semiconductors, or biotech.

E-commerce, albeit hyped and interesting, was not considered tech. It was “Business 2.0”, plain and straightforward, hype included.

  • JayleneSlide@lemmy.world · 1 day ago

    And an additional response, because I didn’t fully answer your question. LLMs don’t reason. They traverse a data structure based on weightings derived from occurrence frequencies in their training content. Loosely speaking, it’s a graph (https://en.wikipedia.org/wiki/Graph_(abstract_data_type)). It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem it hasn’t previously seen, unlike, say, a squirrel.
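The mental model this comment describes, a weighted walk over a graph whose edge weights come from training-text frequencies, is essentially a Markov chain. As a deliberately simplified sketch (this is the comment's picture, not how transformer LLMs actually work; the vocabulary and counts below are invented):

```python
import random

# Toy "weighted graph" of bigram counts, standing in for the comment's
# picture of frequency-weighted edges learned from training text.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "dog": {"ran": 2},
    "sat": {"down": 1},
    "ran": {"away": 1},
}

def next_word(word, rng):
    """Pick a successor of `word`, weighted by its edge counts."""
    choices = bigram_counts.get(word)
    if not choices:
        return None  # dead end: no outgoing edges
    words = list(choices)
    weights = [choices[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, rng, max_len=5):
    """Walk the graph from `start`, one weighted step at a time."""
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

rng = random.Random(0)
print(generate("the", rng))
```

A walk like this can only ever recombine transitions it has already counted, which is the crux of the comment's argument; whether that picture fairly describes an LLM is exactly what the reply below disputes.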

    • KingRandomGuy@lemmy.world · 7 hours ago

      It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem that it hasn’t previously seen

      This also isn’t an accurate characterization IMO. LLMs and ML algorithms in general can generalize to unseen problems, even if they aren’t perfect at this; for instance, you’ll find that LLMs can produce commands to control robot locomotion, even on different robot types.

      “Reasoning” here is based on chains of thought, where the model generates intermediate steps which then help it produce more accurate results. You can fairly argue that this isn’t reasoning, but it’s not like it’s traversing a fixed knowledge graph or something.
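The "chain of thought" idea the reply refers to is a prompting pattern: instead of asking for an answer directly, the prompt elicits intermediate steps, which the model then conditions on when producing the answer. A minimal sketch of the two prompt shapes (prompt text only, no model is called; the wording and example question are illustrative assumptions, not from any specific system):

```python
# Direct prompting vs. chain-of-thought prompting, as plain strings.
question = "A train travels 60 km in 1.5 hours. What is its average speed?"

# Direct: ask for the answer with no intermediate steps.
direct_prompt = f"Q: {question}\nA:"

# Chain of thought: the prompt lays out intermediate steps before the
# answer, so a model continuing this text conditions on the steps.
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step.\n"
    "Step 1: Average speed is distance divided by time.\n"
    "Step 2: 60 km / 1.5 h = 40 km/h.\n"
    "Answer: 40 km/h"
)

print(cot_prompt)
```

The empirical claim in the comment is that generating the intermediate steps improves accuracy on multi-step problems, whatever one decides to call that process.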