I'm trying to feel more comfortable using random GitHub projects, basically.

[-] TootSweet@lemmy.world 31 points 2 months ago* (last edited 2 months ago)

I don't think "AI" is going to add anything (positive) to such a use case. And if you remove "AI" as a requirement, you'll probably get more promising candidates than if you restrict yourself to "AI" (whatever that means) solutions.

[-] MajorHavoc@programming.dev 16 points 2 months ago

Privado CLI will produce a list of data exfiltration points in the code.

If the JSON output file points out a bunch of endpoints you don't recognize from the README, then I wouldn't trust the project.
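
For instance, a quick way to do that cross-check (in Python; the report path is an assumption rather than Privado's documented output location, and the script just greps the raw report for URLs instead of guessing its schema):

```python
# Sketch of the cross-check described above: compare URLs that show up in a
# scanner's JSON report against the URLs mentioned in the project's README.
# The report path is an assumption; the script avoids assuming the report schema.
import re
from pathlib import Path

URL_RE = re.compile(r"https?://[^\s\"')\]]+")

def urls_in(text: str) -> set[str]:
    """Collect anything that looks like an http(s) URL."""
    return set(URL_RE.findall(text))

report_urls = urls_in(Path(".privado/report.json").read_text())  # assumed path
readme_urls = urls_in(Path("README.md").read_text())

for url in sorted(report_urls - readme_urls):
    print(f"Endpoint in the scan report but not in the README: {url}")
```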

Privado likely won't catch a malicious binary file, but your local PC antivirus likely will.

[-] moonpiedumplings@programming.dev 13 points 2 months ago

The solution to what you want is not to analyze the code projects automagically, but rather to run them in a container/virtual machine. Running them in an environment which restricts what they can access limits the harm an intentional or accidental bug can do.
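
A minimal sketch of what that looks like with Docker, driven from Python (the image and entry point are placeholders for whatever the project actually needs):

```python
# Minimal sketch: run an untrusted repo inside a throwaway, locked-down
# container so a buggy or malicious script can't reach the network or the host.
# The image name and entry command are placeholders.
import subprocess

def run_sandboxed(repo_path: str) -> int:
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",            # no outbound network at all
        "--read-only",                  # read-only root filesystem
        "--cap-drop", "ALL",            # drop Linux capabilities
        "--memory", "512m",             # cap memory use
        "-v", f"{repo_path}:/work:ro",  # mount the repo read-only
        "-w", "/work",
        "python:3.12-slim",             # placeholder image
        "python", "main.py",            # placeholder entry point
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    print(run_sandboxed("/path/to/cloned/repo"))
```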

There is no way to automatically analyze code for malice or bugs with 100% reliability.

[-] unknowing8343@discuss.tchncs.de 4 points 2 months ago

Of course, 100% reliability is impossible even with human reviewers. I just want a tool that gives me at least something, cause I don't have the time or knowledge to review a full repo before executing it on my machine.

[-] FizzyOrange@programming.dev 1 point 2 months ago

That is another tool you can use to reduce the risk of malicious code, but it isn't perfect, so using sandboxing doesn't mean you can forget about all other security tools.

There is no way to automatically analyze code for malice or bugs with 100% reliability.

He wasn't asking for 100% reliability. 100% and 0% are not the only possibilities.

[-] thingsiplay@beehaw.org 2 points 2 months ago

Not exactly what you asked, but related; roast your GitHub profile: https://github-roast.pages.dev/

[-] Kissaki@programming.dev 3 points 2 months ago

How is that related? I don't see it.

[-] thingsiplay@beehaw.org 1 point 2 months ago

It's an AI tool analyzing a Git repo.

[-] Kissaki@programming.dev 1 point 2 months ago

It doesn't analyze only one repo.

[-] slazer2au@lemmy.world 2 points 2 months ago

What do you consider malicious, specifically? Because AI isn't a magic box; it's a regurgitation machine prone to hallucinations. You need to train it on examples to identify what you want from it.

[-] unknowing8343@discuss.tchncs.de 4 points 2 months ago

I just want a report that says "we detected in line 27 of file X a particular behavior that feels weird as it tries to upload your environment variables into some unexpected URL".
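
Even a dumb heuristic in that direction would already help. A sketch in Python, with purely illustrative patterns that any real attacker could dodge:

```python
# Crude heuristic for that kind of report: flag lines that read environment
# variables inside files that also make outbound HTTP requests.
# Purely illustrative; real exfiltration is usually obfuscated.
import re
from pathlib import Path

ENV_RE = re.compile(r"os\.environ|getenv\(")
NET_RE = re.compile(r"requests\.(post|get)|urllib\.request|http\.client")

for path in Path(".").rglob("*.py"):
    text = path.read_text(errors="ignore")
    if not NET_RE.search(text):
        continue
    for lineno, line in enumerate(text.splitlines(), start=1):
        if ENV_RE.search(line):
            print(f"{path}:{lineno}: reads environment variables in a file "
                  f"that also makes network requests: {line.strip()}")
```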

[-] slazer2au@lemmy.world 1 point 2 months ago

particular behavior that feels weird

Yea, AI doesn't do feelings.

tries to upload your environment variables into some unexpected URL

Most of the time that is obfuscated and can't be detected as part of a code review. It only shows up in dynamic analysis.
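
If the project is Python, one lightweight way to do that dynamic side is an audit hook that logs every connection attempt while you exercise the code. A sketch (the script name is a placeholder, and this should still run inside a sandbox):

```python
# Dynamic-analysis sketch: log every outbound connection attempt made while
# running untrusted Python code. sys.addaudithook and the "socket.connect"
# audit event are standard since Python 3.8; "suspect_script.py" is a placeholder.
import sys
import runpy

def log_connections(event: str, args: tuple) -> None:
    if event == "socket.connect":
        _sock, address = args
        print(f"outbound connection attempt to {address}", file=sys.stderr)

sys.addaudithook(log_connections)

# Run the suspect entry point under observation (ideally inside a sandbox too).
runpy.run_path("suspect_script.py", run_name="__main__")
```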

[-] unknowing8343@discuss.tchncs.de 3 points 2 months ago

AI doesn't do feelings

How can I have a serious conversation with these annoying answers? Come on, you know what I am talking about. Even an AI chatbot would know what I mean.

Any AI chatbot, even a "general purpose" one, will read your code and return a description of what it does if you ask.

And AI in particular would be great at catching "useless", "weird", or otherwise unexplainable code in a repository. Maybe not with current context limits. But that's what I want to know: whether these tools (or anything similar) exist yet.
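
Even a plain chat API call does the "describe this file" part today. A minimal sketch using the OpenAI Python client (the model name and prompt wording are assumptions; any provider's chat endpoint works the same way):

```python
# Sketch: ask a general-purpose chat model to describe a source file and flag
# anything suspicious. The model name and prompt wording are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source = Path("suspect_script.py").read_text()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You review code and flag anything that exfiltrates data, "
                    "touches credentials, or looks unrelated to the project's "
                    "stated purpose."},
        {"role": "user", "content": f"Describe what this file does:\n\n{source}"},
    ],
)
print(response.choices[0].message.content)
```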

Thank you.

[-] FizzyOrange@programming.dev 0 points 2 months ago

Questions about AI seem to always bring out these naysayers. I can only assume they feel threatened? You see the same tedious fallacies again and again:

  • AI can't "think" (using some arbitrary and unstated definition of the word "think" that just so happens to exclude AI by definition).
  • They're stochastic parrots and can only reproduce things they've seen in their training set (despite copious evidence to the contrary).
  • They're just "next word predictors" so they fundamentally are incapable of doing X (where X is a thing they have already done).

[-] FizzyOrange@programming.dev -5 points 2 months ago

AI doesn’t do feelings

It absolutely does. I don't know where you got that weird idea.

[-] superb@lemmy.blahaj.zone 1 point 2 months ago

Honey your AI girlfriend doesn’t actually love you

[-] FizzyOrange@programming.dev 0 points 2 months ago
[-] superb@lemmy.blahaj.zone 1 point 2 months ago

You’re right, I hope the two of you are very happy

[-] TootSweet@lemmy.world 2 points 1 month ago

This absolutely sent me.

[-] anzo@programming.dev 1 point 2 months ago

Perhaps snyk.io? I used it in the past, but I didn't find it very useful. Now I have a GitHub Action to upgrade dependencies every week. But you want some kind of scanner that's more involved with the actual codebase. Did you look into https://github.com/marketplace?query=security ? That's what I would do, though I've never heard of any of the tools listed there. Let us know your findings after some time if you test 'em ;) good luck!
