• Arigion@feddit.org · 5 points · 2 days ago

        Me too. Who would think you would use something no one would use? Deception is key!

    • lurch (he/him)@sh.itjust.works · 4 points · 2 days ago

      even the AI would have suggested a better one. (don’t use AI-generated passwords tho, because someone may be able to narrow down or recreate the output one day.)

      • lime!@feddit.nu · 6 points · 1 day ago

        i mean it is literally a machine built to produce statistically likely text.

        • huppakee@feddit.nl · 4 points · 1 day ago

          Theoretically that could mean it also knows what is statistically unlikely, but it will only tell you what is statistically the most likely statistically unlikely answer.

  • proper@lemmy.world · 39 points · 2 days ago

    On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald’s website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities—including guessing one laughably weak password—allowed them to access a Paradox.ai account and query the company’s databases that held every McHire user’s chats with Olivia. The data appears to include as many as 64 million records, including applicants’ names, email addresses, and phone numbers.

    The outlet’s headline tries to make it sound like “scary hackers.”