submitted on 09 Aug 2023 by yogthos@lemmy.ml to c/technology@lemmy.ml (204 points, 95.1% liked)
[-] theluddite@lemmy.ml 59 points 1 year ago

The real problem with LLM coding, in my opinion, is something much more fundamental than whether it can code correctly or not. One of the biggest problems coding faces right now is code bloat. In my 15 years writing code, I write so much less code now than when I started, and spend so much more time bolting together existing libraries, dealing with CI/CD bullshit, and all the other hair that software projects have started to grow.

The amount of code is exploding. Nowadays, every website uses ReactJS. Every single tiny website loads god knows how many libraries. Just the other day, I forked and built an open source project that had a simple web front end (a list view, some forms -- basic shit), and after building it, npm informed me that it had over a dozen critical vulnerabilities, and dozens more of high severity. I think the total was something like 70?

All code now has to be written at least once. With ChatGPT, it doesn't even need to be written once! We can generate arbitrary amounts of code all the time whenever we want! We're going to have so much fucking code, and we have absolutely no idea how to deal with that.

[-] BloodyDeed@feddit.ch 12 points 1 year ago* (last edited 1 year ago)

This is so true. I feel like my main job as a senior software engineer is to keep the bloat low and delete unused code. It's very easy to write code - maintaining it and focusing on the important bits is hard.

This will be one of the biggest and most challenging problems Computer Science will have to solve in the coming years and decades.

[-] floofloof@lemmy.ca 6 points 1 year ago* (last edited 1 year ago)

It's easy and fun to write new code, and it wins management's respect. The harder work of maintaining and improving large code bases and data goes mostly unappreciated.

[-] space_comrade@hexbear.net 12 points 1 year ago

I don't think it's gonna go that way. In my experience, the bigger the chunk of code you make it generate, the more wrong it's gonna be, and not just because it's a larger chunk of code: it gets exponentially more wrong.

It's only good for generating small chunks of code at a time.

[-] FunkyStuff@hexbear.net 7 points 1 year ago

It won't be long (maybe 3 years max) before industry adopts some technique for automatically prompting an LLM to generate code to fulfill a certain requirement, then iteratively improving it using test data to get it to pass all test cases. And I'm pretty sure there already are ways to get LLMs to generate test cases. So this could go nightmarishly wrong very very fast if industry adopts that technology and starts integrating hundreds of unnecessary libraries or pieces of code that the AI just learned to "spam" everywhere, so to speak. These things are way dumber than we give them credit for.
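
Something like this loop is what I mean, roughly (a minimal sketch; `llm_generate` and the `solve` convention are hypothetical stand-ins, not any real API):

```python
# Hypothetical sketch of an automated generate-and-repair loop.
# llm_generate() stands in for whatever code-generating model you'd call; not a real library.

def llm_generate(prompt: str) -> str:
    """Placeholder for a call to some code-generating model."""
    raise NotImplementedError

def passes_tests(code: str, test_cases: list[tuple[tuple, object]]) -> bool:
    """Exec the generated code and check its `solve` function against the test cases."""
    namespace: dict = {}
    try:
        exec(code, namespace)
        solve = namespace["solve"]
        return all(solve(*args) == expected for args, expected in test_cases)
    except Exception:
        return False

def generate_until_green(requirement: str, test_cases, max_iters: int = 5):
    prompt = f"Write a Python function `solve` that satisfies: {requirement}"
    for _ in range(max_iters):
        code = llm_generate(prompt)
        if passes_tests(code, test_cases):
            return code
        # Feed the failing attempt back and ask again. Note that nothing here
        # rewards a *small* or maintainable solution, only a passing one.
        prompt += "\nThe previous attempt failed the tests. Here it is, fix it:\n" + code
    return None
```

The point being: every iteration of that loop produces more code, and the only stopping condition is the tests going green, not the result being small or maintainable.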

[-] space_comrade@hexbear.net 7 points 1 year ago* (last edited 1 year ago)

Oh that's definitely going to lead to some hilarious situations, but I don't think we're gonna see a complete breakdown of the whole IT sector. There's no way companies/institutions that do really mission-critical work (kernels, firmware, automotive/aerospace software, certain kinds of banking/finance software, etc.) will let AI write that code any time soon. The rest of the stuff isn't really that important and isn't that big of a deal if it breaks for a few hours/days because the AI spazzed out.

[-] FunkyStuff@hexbear.net 3 points 1 year ago

Agreed, I don't expect it to break absolutely everything, but I expect that software development is going to get very hairy when you have to use whatever bloated mess the AI is creating.

[-] space_comrade@hexbear.net 3 points 1 year ago

I'm here for it, it's already a complete shitshow, might as well go all the way.

[-] theluddite@lemmy.ml 2 points 1 year ago

Yes, I agree. I meant the fundamental problem with the idea of LLMs writing more and more of our code, even if they get quite good.

[-] DefinitelyNotAPhone@hexbear.net 9 points 1 year ago

There's the other half of this problem, which is that the kind of code LLMs are relatively good at pumping out with some degree of correctness is almost always the code that wasn't difficult to begin with. A sorting algorithm on command is nice, but if you're working on any kind of novel implementation, the hard bits are the business logic, which in all likelihood has never been written before and is either sensitive information or just convoluted enough to make turning it into a prompt difficult. You still have to have coders who understand architecture and converting requirements into raw logic to do that, even with the LLMs.

[-] Antiwork@hexbear.net 6 points 1 year ago* (last edited 1 year ago)

obama-spike uhhhh let me code here

[-] AlexWIWA@lemmy.ml 6 points 1 year ago

Makes the Adeptus Mechanicus look like a realistic future. Really advanced tech, but no one knows how it works

[-] buh@hexbear.net 36 points 1 year ago

Every time I’ve used it to generate code, it invents frameworks that don’t exist

[-] just_browsing@reddthat.com 32 points 1 year ago
[-] aaaaaaadjsf@hexbear.net 6 points 1 year ago
[-] maynarkh@feddit.nl 15 points 1 year ago

Just like an average enterprise architect.

[-] CriticalResist8@hexbear.net 6 points 1 year ago

I've had some success with it when I give it small tasks and describe them in as much detail as possible. By design (from what I gather) it can only work on stuff it was able to use in training, which means the language needs to be documented extensively for it to work.

It's generally good at stuff like WordPress or MediaWiki code; it actually helped me make the modules and templates I needed on MediaWiki, but for both of those there's like a decade of forum posts, documentation, papers and other material it could train on. Fun fact: on one specific problem (using a MediaWiki template to display a different message depending on whether you are logged in or not), it systematically gives me the same answer no matter how I ask. It's only after enough probing that GPT tells me that, because of cache issues, this is not possible lol. I figure someone must have asked about this same template somewhere and it's the only thing from its training set it can work off of to answer that question.

I also always double-check the code it gives me for any error or things that don't exist.

[-] s20@lemmy.ml 21 points 1 year ago

If I'm going to use AI for something, I want it to be right more often than I am, not just as often!

[-] space_comrade@hexbear.net 14 points 1 year ago

It actually doesn't have to be. For example, the way I use GitHub Copilot is that I give it a code snippet to generate, and if it's wrong I just write a bit more code; it usually gets it right after 2-3 iterations, and it still saves me time.

The trick is you should be able to quickly determine if the code is what you want which means you need to have a bit of experience under your belt, so AI is pretty useless if not actively harmful for junior devs.

Overall it's a good tool if you can get your company to shell out $20 a month for it, not sure if I'd pay for it out of my own pocket tho.

[-] s20@lemmy.ml 11 points 1 year ago

It... it was a joke. I was implying that 52% was better than me.

[-] space_comrade@hexbear.net 9 points 1 year ago

Ah ok I guess I misread that. My point is that by itself it's not gonna help you write either better or shittier code than you already do.

[-] jvisick@programming.dev 8 points 1 year ago

GitHub Copilot is just intellisense that can complete longer code blocks.

I’ve found that it can somewhat regularly predict a couple lines of code that generally resemble what I was going to type, but it very rarely gives me fully correct completions; by a fairly wide margin, I end up needing to correct a piece or two. To your point, it can absolutely be detrimental to juniors or new learners by introducing bugs that are sometimes nastily subtle. I also find it getting in the way only a bit less frequently than it helps.

I do recommend that experienced developers give it a shot because it has been a helpful tool. But to be clear - it’s really only a tool that helps me type faster. By no means does it help me produce better code, and I don’t ever see it full on replacing developers like the doomsayers like to preach. That being said, I think it’s $20 well spent for a company in that it easily saves more than $20 worth of time from my salary each month.

[-] Coolkidbozzy@hexbear.net 21 points 1 year ago
[-] SirGolan@lemmy.sdf.org 21 points 1 year ago* (last edited 1 year ago)

Wait a second here... I skimmed the paper and GitHub and didn't find an answer to a very important question: is this GPT3.5 or 4? There's a huge difference in code quality between the two and either they made a giant accidental omission or they are being intentionally misleading. Please correct me if I missed where they specified that. I'm assuming they were using GPT3.5, so yeah those results would be as expected. On the HumanEval benchmark, GPT4 gets 67% and that goes up to 90% with reflexion prompting. GPT3.5 gets 48.1%, which is exactly what this paper is saying. (source).
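
For context, HumanEval scoring is basically "run the model's completion against that task's unit tests and count how many tasks pass". Rough sketch of the idea below (field names are from memory, `generate_completion` is a hypothetical stand-in, and the real harness also sandboxes the exec):

```python
# Rough sketch of HumanEval-style pass@1 scoring: run each generated completion
# against that task's unit tests and report the fraction of tasks that pass.
# generate_completion() is a hypothetical stand-in for a model call.

def pass_at_1(tasks, generate_completion) -> float:
    passed = 0
    for task in tasks:  # each task roughly: {"prompt", "test", "entry_point"}
        completion = generate_completion(task["prompt"])
        ns: dict = {}
        try:
            exec(task["prompt"] + completion, ns)  # prompt holds the signature + docstring
            exec(task["test"], ns)                 # defines check(candidate), full of asserts
            ns["check"](ns[task["entry_point"]])   # raises on any failing assert
            passed += 1
        except Exception:
            pass
    return passed / len(tasks)
```

As I understand it, the reflexion numbers come from letting the model see the failing asserts and revise its answer before scoring, which is why they jump so much.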

[-] floofloof@lemmy.ca 3 points 1 year ago

Whatever GitHub Copilot uses (the version with the chat feature), I don't find its code answers to be particularly accurate. Do we know which version that product uses?

[-] SirGolan@lemmy.sdf.org 4 points 1 year ago

If we are talking Copilot then that's not ChatGPT. But I agree it's ok. Like it can do simple things well but I go to GPT 4 for the hard stuff. (Or my own brain haha)

[-] Corkyskog@sh.itjust.works 3 points 1 year ago

Is GPT4 publicly available?

[-] SirGolan@lemmy.sdf.org 3 points 1 year ago

Yes, it's available to anyone through the API or to anyone who pays for a ChatGPT subscription.

[-] newIdentity@sh.itjust.works 3 points 1 year ago

Yes... If you pay $20 a month

[-] r00ty@kbin.life 15 points 1 year ago

I used ChatGPT once. It created non-functional code. But the general idea did help me get to where I wanted to go. Maybe it works better as a rubber duck substitute?

[-] Afghaniscran@feddit.uk 14 points 1 year ago

I used it to code small things and it worked eventually, whereas if I'd just decided to learn coding I'd be stuck, cos I don't do computers, I do HVAC.

[-] Fluffles@pawb.social 13 points 1 year ago

I believe this phenomenon is called "artificial hallucination". It's when a language model exceeds its training and makes up info out of thin air. All language models have this flaw. Not just ChatGPT.

[-] yogthos@lemmy.ml 15 points 1 year ago

The fundamental problem is that at the end of the day it's just a glorified Markov chain. An LLM doesn't have any actual understanding of what it produces in a human sense; it just knows that particular sets of tokens tend to go together in the data it's been trained on. The GPT mechanic could very well be a useful building block for making learning systems, but a lot more work will need to be done before they can actually be said to understand anything in a meaningful way.
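
A toy bigram chain makes the point concrete. Obviously a real LLM is a neural network over subword tokens with attention, not a literal lookup table, but the "what tends to follow what" flavour is the same:

```python
# Toy illustration of "tokens that tend to go together": a bigram Markov chain
# that samples the next word purely from co-occurrence counts in its training text.
# There is no understanding anywhere in here, just observed statistics.
import random
from collections import defaultdict

def train(text: str) -> dict[str, list[str]]:
    table = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        table[current].append(following)          # record which words follow which
    return table

def generate(table: dict[str, list[str]], start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))      # frequent followers get picked more often
    return " ".join(out)

corpus = "the model predicts the next token and the next token follows from the last token"
print(generate(train(corpus), "the"))
```

Scale that up from a literal lookup table to billions of learned parameters and the output gets shockingly fluent, but the epistemics don't change.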

I suspect that to make a real AI we have to embody it in either a robot or a virtual avatar where it would learn to interact with its environment the way a child does. The AI has to build an internal representation of the physical world and its rules. Then we can teach it language using this common context, where it would associate words with its understanding of the world. This kind of shared context is essential for having AI understand things the way we do.

[-] FunkyStuff@hexbear.net 4 points 1 year ago

You have a pretty interesting idea that I hadn't heard elsewhere. Do you know if there's been any research to make an AI model learn that way?

In my own time messing around with some ML stuff, I've heard of approaches where you try to get the model to accomplish progressively more complex tasks in the same domain. For example, if you wanted to train a model to control an agent in a physics simulation to walk like a humanoid, you'd have it learn to crawl first, like a real human. I guess for an AGI it makes sense that you would have it try to learn a model of the world across different domains like vision or sound. Heck, since you can plug any kind of input into it, you could have it process radio, infrared, whatever else. That way it could have a very complete model of the world.
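
(The progressive-tasks thing is usually called curriculum learning. The loop itself is nothing exotic, something like the sketch below, where `make_env` and the agent/env interfaces are made-up stand-ins rather than a real RL library.)

```python
# Hypothetical curriculum-learning loop: same domain, progressively harder tasks,
# only advancing once the agent clears a success threshold on the current stage.
# make_env() and the agent/env interfaces are illustrative stand-ins, not a real RL library.

CURRICULUM = ["crawl", "stand", "walk", "run"]   # easiest to hardest

def run_episode(agent, env) -> float:
    """Roll out one episode and return the total reward."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(agent.act(obs))
        total += reward
    return total

def train_with_curriculum(agent, make_env, episodes_per_stage: int = 1000,
                          threshold: float = 0.8) -> None:
    for stage in CURRICULUM:
        env = make_env(stage)
        recent = []                               # rolling window of success flags
        for _ in range(episodes_per_stage):
            reward = run_episode(agent, env)
            agent.update(reward)
            recent = (recent + [reward > 0.0])[-100:]
            if len(recent) == 100 and sum(recent) / 100 >= threshold:
                break                             # good enough here, move to the harder stage
```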

[-] GBU_28@lemm.ee 2 points 1 year ago

People are working on this, model-vets-model, recursive action chain type stuff, yeah.

[-] v_krishna@lemmy.ml 3 points 1 year ago

A lot of semantic NLP tried this and it kind of worked, but meanwhile statistical correlation won out. It turns out that while humans consider semantic understanding to be really important, it actually isn't required for an overwhelming majority of industry use cases. As a Kantian at heart (and an ML engineer by trade), it sucks to recognize this, but it seems like semantic conceptualization as an epiphenomenon emerging from statistical concurrence really might be the way that (at least artificial) intelligence works.

[-] yogthos@lemmy.ml 3 points 1 year ago

I don't see the approaches as mutually exclusive. Statistical correlation can get you pretty far, but we're already seeing a lot of limitations with this approach when it comes to verifying correctness or having the algorithm explain how it came to a particular conclusion. In my view, this makes a purely statistical approach inadequate for any situation where a specific result is required. For example, an autonomous vehicle has to drive on a road and correctly decide whether there are obstacles around it or not. Failing to do that correctly has disastrous consequences, which makes purely statistical approaches inherently unsafe.

I think things like GPT could be building blocks for systems that are trained to have semantic understanding. I think what it comes down to is simply training a statistical model against a physical environment until it adjusts its internal topology to create an internal model of the environment through experience. I don't expect that semantic conceptualization will simply appear out of feeding a bunch of random data into a GPT style system though.

[-] v_krishna@lemmy.ml 2 points 1 year ago

I fully agree with this, would have written something similar but was eating lunch when I made my former comment. I also think there's a big part of pragmatics that comes from embodiment that will become more and more important (and wish Merleau-Ponty was still around to hear what he thinks about this)

[-] Pleonasm@programming.dev 9 points 1 year ago

I was pretty impressed with it the other day: it converted ~150 lines of Python to C pretty flawlessly. I then asked it to extend the program by adding a progress bar, and that segfaulted, but it was immediately able to discover the segfault and fix it when I mentioned it. It probably would have taken me an hour or two to write myself, and ChatGPT did it in 5 minutes.
