Saturday Morning Breakfast Cereal 2011-09-08 (lemmy.dbzer0.com)
Ok, I doubt anyone is going to be willing to have this discussion, but here I am. My assessment is as follows: to be of value, "AI" doesn't need to be perfect, it just needs to be better than the average programmer, whether that means producing the same quality of code twice as fast or producing code twice as good in the same amount of time. If I wanted it to code me a video game, I would personally judge it by how well it does against what I would expect from human programmers. Currently there is no comparison; I'm no coding expert, but even I find myself correcting AI on the simplest of code. That's only temporary, though. Ten years ago this tech didn't even exist; ten years from now (assuming it doesn't crash our economy in more ways than one) I would imagine the software will at least be comparable to an entry-level programmer.
I guess what I'm getting at is that people rail against AI for faults that a human would make worse. Take self-driving cars: having seen human drivers, I definitely want that tech to work out. Obviously it's ideal for it to be perfect and to coordinate with other smart cars to reduce traffic loads and improve safety for everyone, but as long as it's safer than a human driver, I would prefer it. Likewise, as long as it codes better than your average overworked, unpaid programmer, it becomes a useful tool.
That being said, I do see tons of legitimate reasons to dislike AI, especially in its current form. A lot of those issues (I'd say most) don't actually lie with AI at all, or even with LLMs. Most of the complaints I've heard about AI development are actually thinly veiled complaints about capitalism, which is objectively failing even without AI. The others are mostly complaints about the current state of the tech, which I find less valid. It's like complaining that your original iPod didn't have lidar built in like they do now.

Setting aside the capitalism issues (how this tech will be used, how it's currently being funded, its environmental impact, and the fact that this level of research spending is unsustainable and will collapse the economy), give the tech time and it will mature. That almost feels like sarcasm given those very real issues, but again, those are all capitalism issues. If we were serious about saving our planet, a guardian AI that automatically drone-strikes sources of intense pollution would go a long way. If you're worried about robots takin' yer jerbs, try not being capitalism-pilled and realise that humans got by for eons without jobs or class structures. Post-scarcity is almost mandatory under proper AI, and capitalism exists to ensure that post-scarcity can't happen.
AI coding tools can do common, simple functions reasonably well, because there are lots of examples of those to steal from real programmers on the Internet. There is a large corpus of data to train with.
AI coding tools can't do sophisticated, specific-case solutions very well, because there aren't many examples of those for any given use case to steal from real programmers on the Internet. There is a small corpus of data to train with.
AI coding tools can't solve new problems at all, because there are no examples of those to steal from real programmers on the Internet. There is no corpus of data to train with.
AI coding tools have already ingested all of the code available on the Internet to train with. There is no more new data to feed in. AI coding tools will not get substantially better than they are now. All of the theft that could be committed has been committed, which is why the AI development companies are attempting to feed generated training material into their models. Every review of this shows that it makes the output from generative models worse rather than better.
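You can watch this failure mode in miniature. The sketch below is only an illustration (a toy frequency "model", not anyone's actual training pipeline): it fits token frequencies to a corpus, then retrains each generation on nothing but its own samples.

```python
import random
from collections import Counter

# Toy illustration of model collapse: the "model" just learns the
# empirical frequency of each token, and each generation is retrained
# purely on samples drawn from the previous generation's model.

random.seed(42)
vocab = list("abcdefghij")
weights = [2 ** -i for i in range(len(vocab))]   # long-tailed "real" data
corpus = random.choices(vocab, weights=weights, k=200)

for generation in range(31):
    counts = Counter(corpus)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: {len(counts)} distinct tokens:",
              "".join(sorted(counts)))
    # Retrain on synthetic data only: sample from the fitted frequencies.
    tokens, freqs = zip(*counts.items())
    corpus = random.choices(tokens, weights=freqs, k=200)
```

Once a rare token draws zero samples it is gone for good: each generation knows strictly less than the one before it, which is the tail-loss effect those reviews of synthetic-data training keep reporting.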
Programming is not about writing code. That is what a manager thinks.
Programming is about solving problems. Generative AI doesn't think, so it cannot solve problems. All it can do is regurgitate material it has previously ingested that is hopefully close-ish to the problem you're trying to solve at the moment - material which was written by a real, thinking human who solved that problem (or a similar one) at some point in the past.
If you patronize a generative AI system like Claude Code, you are paying into, participating in, and complicit in, the largest example of labor theft in history.
I'm not entirely convinced this is accurate. I do see your point, and I hadn't considered that there is no more training data to use, but at the end of the day our current AI is just pattern recognition. Hence, would you not be able to use a hybrid system where you set up billions of use cases (translate point A to point B, apply a force such that object A rolls a specified distance, set up a neural network using backpropagation with 3 hidden layers, etc.) and then have two adversarial AIs? One attempts to "solve" the use case by randomly trying stuff, and the other basically just says "you're not doing well enough, and here's why". Once the first is doing a good job with that very specific use case, index it. Now when people ask for that specific use case, or for a larger problem that includes it, you don't even need AI: you just plug in the already-solved solution. Your code base basically becomes AI filling out every possible question on Stack Overflow.
Obviously this isn't actual coding with AI; at the end of the day you're still doing all the heavy lifting. It's effectively no different from how most coders code today: just steal code from Stack Overflow XD The only difference is that this Stack Overflow is filled with every conceivable question, and if yours isn't answered, you can just request that a new pair of adversarial AIs be set up to solve the new problem.
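To make that concrete, here's a rough sketch of the "solve once, index forever" half of the idea. All the names here are made up, and the solver/critic pair is a trivial stub standing in for the two adversarial models (the "task" is just sorting, so the critic is easy to write):

```python
import random

# Hypothetical sketch: a cache of verified solutions, filled on demand
# by a solver/critic pair. Both are stubs for illustration only.

solution_index = {}  # task -> verified solution

def critic(task, candidate):
    """Adversarial critic stand-in: accepts only a correct answer.
    For sorting, 'here's why you failed' reduces to a simple check."""
    return candidate == sorted(task)

def solver(task):
    """Generator stand-in: randomly tries stuff until the critic
    is satisfied. Hopelessly inefficient, which is rather the point."""
    candidate = list(task)
    while not critic(task, candidate):
        random.shuffle(candidate)  # "randomly trying stuff"
    return candidate

def request(task):
    key = tuple(task)
    if key not in solution_index:           # solve the hard way once...
        solution_index[key] = solver(list(task))
    return solution_index[key]              # ...then it's just a lookup

print(request([3, 1, 2]))  # slow-ish the first time
print(request([3, 1, 2]))  # instant: served from the index
```

Of course this dodges the hard part: for sorting, a perfect critic is one line, but for "code me a video game" there is no cheap oracle to tell the solver why it's wrong, and that's exactly where the objection above bites.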
Secondly, you are the first person to give me a solid reason why the current paradigm is unworkable. Despite my mediocre recall, I have spent most of my life studying AI, well before all this LLM stuff, so I like to think I was at least well educated on the topic at one point. I appreciate your response. I am somewhat curious what architecture changes would need to be made to allow for actual problem solving. The entire point of a neural network is to replicate the way we think, so why do current AIs only seem to be good at pattern recognition and not even the most basic problem solving? Perhaps the architecture is fine, and we simply need to train up generational AIs that specifically focus on problem solving instead of pattern recognition?
Unfortunately it seems that your education was missing the foundations of deep learning. PAC learning is the current meta-framework; it's been around for about four decades, and at its core is the idea that even the best learners are not guaranteed to learn the solution to a hard problem.
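For reference, here is the standard PAC guarantee (Valiant, 1984), stated informally; note that both knobs are about tolerating failure, not eliminating it:

```latex
% PAC learnability: a hypothesis class H is PAC-learnable if there exist
% a learner A and a sample size m(\varepsilon, \delta) such that, for
% every data distribution D and every target concept in H, after seeing
% a sample S of m i.i.d. labelled examples:
\Pr_{S \sim D^{m}}\left[ \operatorname{err}_{D}\big(A(S)\big) \le \varepsilon \right] \ge 1 - \delta
% "Probably" (confidence 1 - \delta) "approximately" (error at most
% \varepsilon) correct. For hard classes, the required m, or the
% learner's running time, can grow beyond anything you can afford.
```

So even an optimal learner only ever promises "probably approximately correct", and for hard problems the price of that promise can be astronomical.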
First, convince us that humans are actual problem solvers. The question is begged: we want computers to be intelligent, but we didn't check whether humans were intelligent before deciding that we would learn intelligence from human-generated data.