614
submitted 6 months ago by ekZepp@lemmy.world to c/technology@lemmy.ml
(page 2) 50 comments
[-] cupcakezealot@lemmy.blahaj.zone 6 points 6 months ago

I'll use Copilot in place of most of the times I've searched Stack Overflow, or to do mundane things like generating repetitive code, but relying solely on it is the same as relying solely on Stack Overflow.

[-] haui_lemmy@lemmy.giftedmc.com 4 points 6 months ago

The interesting bit for me is that if you ask a rando some programming questions, they'll be wrong about 99% of the time, I think.

Stack Overflow still makes more sense, though.

[-] Melkath@kbin.social 4 points 6 months ago

Developing with ChatGPT feels bizarrely like when Tony Stark invented a new element with Jarvis' assistance.

It's a prolonged back and forth, and you need to point out the AI's mistakes and work through a ton of iterations to get something close enough that you can tweak and use it, but it's SO much faster than trawling through Stack Overflow or hoping someone who knows more than you will answer a post for you.

[-] elgordio@kbin.social 5 points 6 months ago

Yeah, if you treat it as a junior engineer with the ability to instantly research a topic, and are prepared to engage in a conversation to work toward a working answer, then it can work extremely well.

Some of the best outcomes I’ve had have needed 20+ prompts, but I still arrived at a solution faster than any other method.

[-] Melkath@kbin.social 2 points 6 months ago

In the end, there is this great fear that "the AI is going to fully replace us developers", and the reality is that while that may be a possibility one day, it won't be any day soon.

You still need people with deep technical knowledge to pilot the AI and drive it to an implemented solution.

AI isn't the end of the industry; it has just greatly sped the industry up.

[-] Max_P@lemmy.max-p.me 4 points 6 months ago

I don't even bother trying with AI; it's not been helpful to me a single time despite multiple attempts. That's a 0% success rate for me.

[-] autotldr@lemmings.world 3 points 6 months ago

This is the best summary I could come up with:


In recent years, computer programmers have flocked to chatbots like OpenAI's ChatGPT to help them code, dealing a blow to places like Stack Overflow, which had to lay off nearly 30 percent of its staff last year.

The study found that 52 percent of ChatGPT's programming answers contained incorrect information. That's a staggeringly large proportion for a program that people are relying on to be accurate and precise, underlining what other end users like writers and teachers are experiencing: AI platforms like ChatGPT often hallucinate totally incorrect answers out of thin air.

For the study, the researchers looked at 517 Stack Overflow questions and analyzed ChatGPT's attempts to answer them.

The team also performed a linguistic analysis of 2,000 randomly selected ChatGPT answers and found they were "more formal and analytical" while portraying "less negative sentiment" — the sort of bland and cheery tone AI tends to produce.

The Purdue researchers polled 12 programmers — admittedly a small sample size — and found that they preferred ChatGPT's answers 35 percent of the time and failed to catch the AI-generated mistakes 39 percent of the time.

The study demonstrates that ChatGPT still has major flaws — but that's cold comfort to people laid off from Stack Overflow or programmers who have to fix AI-generated mistakes in code.


The original article contains 340 words, the summary contains 199 words. Saved 41%. I'm a bot and I'm open source!

[-] SpicyLizards@reddthat.com 3 points 6 months ago

I would make some 1000 monkeys with typewriters comment, but I see what most actual contracted devs produce...

[-] gnuplusmatt@reddthat.com 3 points 6 months ago

I've used ChatGPT and Gemini to build some fairly simple PowerShell scripts for use in Intune deployments. Very few of them have been workable solutions out of the box, and they're often filled with hallucinated cmdlets that don't exist or that belong to a third-party module it doesn't tell me needs to be installed. It's not useless tho: because I am a lousy programmer, it's been good at giving me a skeleton that I can build a working script from and debug myself.

I reiterate that I am a lousy programmer, but it has sped up my deployments because I haven't had to work from scratch. 5/10 it's saved me a half hour here and there.

[-] FaceDeer@fedia.io 3 points 6 months ago

I'm a good programmer and I still find LLMs to be great for banging out Python scripts to handle one-off tasks. I usually use Copilot; it seems best for that sort of thing. Often the first version of the script will have a bug or misunderstanding in it, but all you need to do is tell the LLM what it did wrong or paste the text of the exception into the chat, and it'll usually fix its own mistakes quite well.

I could write those scripts myself by hand if I wanted to, but they'd take a lot longer and I'd be spending my time on boring stuff. Why not let a machine do the boring stuff? That's why we have technology.

this post was submitted on 24 May 2024
614 points (97.2% liked)
