submitted 5 months ago by sag@lemm.ee to c/comicstrips@lemmy.world

Source: Webtoon - RSS

[-] Tar_alcaran@sh.itjust.works 62 points 5 months ago

People really should remember: generative AI makes things that look like what you want.

Now, usually that overlaps a lot with what you actually want, but not nearly always, and especially not when details matter.

[-] FaceDeer@fedia.io 21 points 5 months ago

It also isn't telepathic, so the only thing it has to go on when determining "what you want" is what you tell it you want.

I often see people gripe about how ChatGPT's essay writing style is mediocre and always sounds the same, for example. But that's what you get when you just tell ChatGPT "write me an essay about X." It doesn't know what kind of essay you want unless you tell it. You have to give it context and direction to get good results.

[-] gbuttersnaps@programming.dev 12 points 5 months ago

Not disagreeing with you at all; you made a pretty good point. But when engineering the prompt takes 80% of the effort that just writing the essay (or the code, for that matter) would take, I think most people would rather write it themselves.

[-] FaceDeer@fedia.io 1 points 5 months ago

Sure, in those situations. In most situations, though, I find it doesn't take that much effort to write a prompt that gets me something useful. You just need to make some effort; a lot of people put in none, get a bad result, and conclude "this tech is useless."

[-] slazer2au@lemmy.world 9 points 5 months ago

We are all annoyed at clients for not saying what they actually want in a Scope of Works, yet we do the same to LLMs, thinking they will fill in the blanks the way we want them filled in.

[-] takeda@lemmy.world 7 points 5 months ago

Yet that's usually enough when talking to another developer.

The problem is that we already have an unambiguous language, understood by both humans and computers, for telling the computer exactly what we want it to do.

With LLMs we instead opt for natural language, which is imprecise and full of ambiguity, to do the same thing.

[-] FaceDeer@fedia.io 0 points 5 months ago

You communicate with co-workers using natural language too, but that doesn't make co-workers useless. You just have to account for the strengths and weaknesses of that mechanism in your workflow.

[-] Xanis@lemmy.world 5 points 5 months ago

I treat AI the same way I've always treated Google: ~~WITH ABSOLUTE DISDAIN~~ using them as a shove in the right direction and as a supplement to research I'm already doing. ChatGPT, for instance, is actually pretty decent at figuring out vaguely defined things if you work through them with it. Is it perfect? Hell no. It can help narrow down the options, though.

[-] xantoxis@lemmy.world 3 points 5 months ago* (last edited 5 months ago)

I'm pretty anti-AI but even I'll cop to this one. ChatGPT is good at figuring out what you're trying to describe. Know you need a particular networking concept? Describe it a bit to ChatGPT and ask for some concepts that are similar, and the thing you're looking for will probably be in the list.

Looking for a particular library that you assume must exist even though you've never seen it? ChatGPT can give you that.

You're on your own after that, but it can actually save you a bit of research time.

The problem is this: it's sure it has the answer 100% of the time, but about 30% of the time it gives you a list of nothing but wrong answers and you can go off in the wrong direction as a result.

[-] tal@lemmy.today 1 points 5 months ago

Yeah.

I'm willing to believe that we can have solid AI software authoring, but I'm skeptical that it's going to come from the raw LLM approach used for images and audio and such, where what matters is producing stuff that looks like other stuff.

Maybe you could use LLMs as a component of a larger system that does effective coding. But I'm skeptical that this alone can be a great solution.

Maybe in very limited situations where the system can reliably validate the code's correctness itself. Like, say you want to write a quine. That doesn't take input, and the output is trivial to validate.
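Something like this minimal sketch (Python, purely as an illustration; the helper name is made up): run the candidate program and check that its output is byte-for-byte its own source. The validator needs zero understanding of what the code means.

```python
import pathlib
import subprocess
import sys
import tempfile

# A small Python quine: it prints exactly its own source, trailing newline included.
QUINE = "s = 's = {!r}\\nprint(s.format(s))'\nprint(s.format(s))\n"

def reproduces_itself(source: str) -> bool:
    """Run the program and compare its stdout against its own source text."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, check=True
        )
        return result.stdout == source
    finally:
        pathlib.Path(path).unlink()

print(reproduces_itself(QUINE))  # expected: True
```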

But for most software, I'd say that it's not easy for a computer to validate that code is correct.

And in some cases, trying to validate code has got to be worse than writing it yourself. Like, think of multithreaded code, absent some sort of elaborate type system that lets you fully specify the constraints imposed by the parallelism requirements, with those constraints written down and available to you. C and C++ don't have such a type system.
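To make that concrete, here's a minimal sketch (Python just for brevity; the same class of bug is nastier in C or C++). The race is present in the code whether or not any particular run exposes it, which is exactly why running it or eyeballing it isn't validation:

```python
import threading

counter = 0  # shared state with no lock around it

def bump(n: int) -> None:
    global counter
    for _ in range(n):
        counter += 1  # read-modify-write: several steps, not one atomic operation

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Whether increments actually get lost on a given run depends on the interpreter
# and on thread scheduling -- but the race is in the code either way.
print(counter)
```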

Or writing security-sensitive code. Same thing -- absent some kind of type system that permits fully-specifying the requirements of the problem, you can't automatically validate it, and trying to review code to understand whether it's secure...ugh.

I can maybe see some kind of "grammar check": having an LLM look for portions of your code that are unusual compared to the existing code it has seen.

Programming is basically translation from a list of (precise) requirements to code in a programming language. And LLMs can do translation of human language pretty well. But I expect that a major problem for LLM-driven programming is that there's no training corpus for the requirements, the "source language" for translation.
