[-] PenguinTD@lemmy.ca 15 points 1 year ago

I didn't watch, someone please give a TLDW. Cause I highly suspect this is gonna just be wasting time.

[-] Melody@lemmy.one 22 points 1 year ago

TL;DR: It suggests several methods and makes a few mistakes, which he has to point out, to which it responds with even more absurd solutions.

The AI recommends doing things in long, hard ways and doesn't conceive of anything new or novel; it just mashes together existing technologies, even where their implementation would be difficult or impossible, and waves those issues away with phrases like "Much research and development would be needed, but..."

[-] PenguinTD@lemmy.ca 26 points 1 year ago

so, similar to a redditor trying to sound smart by googling and debating another redditor while neither has any qualifications on the topic. got it.

[-] Laneus@beehaw.org 7 points 1 year ago

I wonder how much of that is just an inherent part of how neural networks behave, or if LLMs only do it because they learned it from humans.

[-] Kata1yst@kbin.social 5 points 1 year ago

More the latter. Neural networks have been used in biomed for about a decade now fairly successfully. Look into their use of genetic algorithms, where we are effectively using the power of evolution to discover new therapies, in many cases even new uses for existing (approved) drugs.
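The genetic-algorithm idea mentioned above can be sketched in a few lines. This is a purely illustrative toy (evolving a bitstring toward all ones), not a biomed pipeline; the selection, crossover, and mutation steps are the same machinery, though, just applied to a trivial fitness function.

```python
import random

random.seed(0)  # reproducible toy run

GENOME_LEN = 20  # toy genome: a list of 0/1 bits

def fitness(genome):
    # Toy objective: count of 1-bits; real applications would score
    # e.g. predicted drug efficacy here.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parents.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENOME_LEN:
            break
        parents = pop[: pop_size // 2]  # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The key contrast with a chatbot is the `fitness` call: a GA actually *tests* each candidate against an objective and keeps what works, while ChatGPT has no such feedback loop.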

But ChatGPT has no way to test or improve any "designs", it simply uses existing indexed data to infer what you want to hear as best it can. The goal is to sound smart, not be smart.

[-] Hexorg@beehaw.org 2 points 1 year ago

That's actually a decently good analogy, though a random redditor is still smarter than ChatGPT, because they can actually analyze the Google results rather than just pattern-matching situations together.

[-] upstream@beehaw.org 5 points 1 year ago

ChatGPT is, despite popular consensus, not an AI.

It’s a system with some notion of context and a huge corpus of data, and it’s really good at guessing which words to put on screen based on the provided input.

It can’t think of anything new or novel, but can generate “new” output based on multiple sources of data.

As such, it will never be able to design a fusion reactor, unless it’s been trained on input from someone who actually did.

And even then it’s likely to screw it up.
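The "guessing what words to put on screen" point above can be illustrated with a toy bigram model: predict the next word purely from co-occurrence counts in training text. This is a deliberately simplified sketch (real LLMs use neural networks over tokens, not raw word counts), but the "most likely continuation" framing is the same.

```python
from collections import defaultdict

# Tiny training "corpus"; a real model sees billions of words.
corpus = "the reactor heats the water and the water drives the turbine".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Return the most frequent follower of `word`, or None if unseen.
    followers = counts[word]
    if not followers:
        return None
    return max(followers, key=followers.get)

print(predict("the"))  # "water" follows "the" most often in this corpus
```

Note the model can only ever emit words it has seen follow other words; it has no mechanism for producing a genuinely novel design, which is the point being made above.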

[-] manitcor@lemmy.intai.tech 4 points 1 year ago

it's interesting, but it tells us what we already know about subjects where the material is incomplete.

[-] rthmchgs@lemmynsfw.com 3 points 1 year ago

He calls it "Chat GTP"; that's where I stopped.

[-] jordanlund@lemmy.one 7 points 1 year ago

tl;dr - If AI doesn't directly try to kill us, it may try to trick us into killing ourselves by building its ideas.

[-] manitcor@lemmy.intai.tech 4 points 1 year ago

aside from misinfo, this is a big part of why they want to moderate some responses: someone is going to blow themselves up using a recipe it gives them

[-] don@lemm.ee 2 points 1 year ago

Coulda skipped right past nuclear and told it to design a fusion reactor, but since it can’t get the nuclear reactor right, may as well nevermind.

this post was submitted on 17 Jul 2023
20 points (100.0% liked)

Technology
