  • no one is talking about NPM libraries. we're talking about released packages. you absolutely can ensure a binary hasn't been tampered with. it's called checksumming.

    I just took NPM as an example of trusted code doing shady things. And I know what checksums are and how they work. What I meant is: what if the developer providing you with the checksum is also the one who put malicious code in the binary? You don't know. (I don't think that is very likely, but it all boils down to trust.) A sketch of what checksum verification actually proves follows below.
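    To make that concrete, here is a minimal sketch of checksum verification, assuming a SHA-256 digest published alongside the release; the file name and digest are placeholders, not from the thread. Passing the check only proves the bytes match the digest the publisher posted; it says nothing about the publisher's intent.

```python
import hashlib

# Hypothetical digest copied from the project's release page.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large binaries don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file name for the downloaded release binary.
actual = sha256_of("tool-v1.2.3-linux-amd64")
if actual != EXPECTED_SHA256:
    raise SystemExit(f"checksum mismatch: got {actual}")
print("checksum OK (integrity only; this does not vouch for the publisher)")
```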

    you’re confusing MITM attacks with supply chain attacks. MITM attacks are far easier to pull off.

    No, I don’t think I am?

    Yes. That's precisely the problem we're pointing out to you.

    And I am saying that it is not that big of a problem.


  • How do you know the script hasn't been compromised?

    You don't, the same way you don't know whether the binary has been compromised, just like when an npm package deleted files for Russian users. I get that running scripts from the internet without reading them first to understand what they do is not secure, but downloading and running anything from the internet carries some amount of risk. How do you know you won't be mining cryptocurrency in addition to the binary's original purpose? You don't, unless you read the source code.

    It all comes down to whether you trust the provider or not. Personally, if I trust them enough to run their binaries on my computer, I trust them enough to use their installation scripts. I don't agree that something is less safe just because it is a script; the safer habit is to download and read it first, as sketched below.
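    For what that looks like in practice, here is a minimal sketch of the download-first workflow, with a placeholder URL: save the script to disk so it can be read (and checksummed) before anything executes, rather than piping it straight into a shell.

```python
import urllib.request

# Hypothetical install-script URL; substitute the real one.
URL = "https://example.com/install.sh"

# Save locally instead of piping into a shell, so the script can be
# inspected (and its checksum verified) before it ever runs.
urllib.request.urlretrieve(URL, "install.sh")

with open("install.sh", encoding="utf-8") as f:
    print(f.read())  # read it; run it manually only if it looks sane
```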

    package manager

    Not everything is provided through a package manager, and not everything is kept up to date in the OS-provided package manager. I agree that one should ideally use a package manager with third-party validation when that is an option.


  • I find ChatGPT is sometimes excellent at giving me a direction, if not outright solving the problem, when I paste in errors I'm too lazy to search for. I say sometimes because other times it is just dead wrong.

    All the code I ask ChatGPT to write is usually along the lines of "I have these values that I need to verify; write code that verifies that nothing is empty and saves an error message for each value that is" (roughly the boilerplate sketched below), and then I work with the code it gives me from there. I never take it at face value.
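    For illustration, here is a minimal sketch of that kind of request, with made-up field names: check every value, and collect an error message for each one that is empty rather than stopping at the first failure.

```python
# Made-up input values for the example.
values = {"name": "Alice", "email": "", "city": None}

errors = []
for field, value in values.items():
    # Treat None and whitespace-only strings as empty.
    if value is None or str(value).strip() == "":
        errors.append(f"{field} must not be empty")

for message in errors:
    print(message)  # e.g. "email must not be empty"
```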

    Have you actually found that to be the case in anything complex though?

    I think that using LLMs to create complex code is the wrong use of the tool. In my opinion, they are better at providing structure to work from than at writing the code itself (unless it is something as simple as the example above).

    If a company cannot invest even a day in going through their hiring process and AI-proofing it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.

    I agree with you on that.


  • I think that LLMs just made it easier for people who want to know without learning. Piecing together all those posts from all over the internet required you to understand what you pasted together if you wanted it to work (not always, but the bar was higher). With ChatGPT, you can just throw errors at it until you have the code you want.

    While the requirements never changed, the tools sure did, and they made it a lot easier to not understand.