The benefit of AI is overblown for a majority of product tiers. Remember how everything was supposed to be blockchain? And metaverse? And Web 3.0? And dot.com? This is just the next tech trend for dumb VCs to throw money at.
Yes, it's very hyped and being overused. Eventually the bullshit artists will move on to the next buzzword, though, and then there are plenty of tasks it's genuinely good at where it will continue to grow.
Yeah, but the dot-com bubble didn't kill the internet entirely, and the video game crash that prompted Nintendo to create its own quality seal of approval didn't kill video games entirely. This fad already has useful applications, and when the bubble pops, those applications will survive.
Except those things didn't really solve any problems. Well, dot-com did, but that one actually changed our society.
AI isn't vaporware. A lot of it is premature (so maybe overblown right now) or just lies, but ChatGPT is 18 months old and look where it is. The core goal of AI is replacing human effort, which IS a problem wealthy people would very much like to solve, and one with a real monetary benefit wherever they manage it. It's not going to just go away.
Can you trust whatever AI you use, implicitly? I already know the answer, but I really want to hear people say it. These AI hype men are seriously promising us capabilities that may appear down the road, without actually demonstrating use cases that are relevant today. “Some day it may do this, or that”. Enough already, it’s bullshit.
Yes? AI is a lot of things, and most have well-defined accuracy metrics that regularly exceed human performance. You're likely already experiencing it as a mundane tool you don't really think about.
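(To make "well-defined accuracy metrics" concrete, here's a minimal Python sketch with made-up labels — this is the kind of standardized scoring every benchmarked model gets, nothing from a real benchmark:)

```python
# Minimal sketch: the standard metrics ML classifiers are scored on.
# The label arrays below are made up purely for illustration.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # a hypothetical model's predictions

print("accuracy: ", accuracy_score(y_true, y_pred))   # fraction correct overall
print("precision:", precision_score(y_true, y_pred))  # of predicted 1s, how many were right
print("recall:   ", recall_score(y_true, y_pred))     # of actual 1s, how many were found
```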
If you're referring specifically to generative AI, that's still premature, but as I pointed out, the interactive chat form most people worry about is 18 months old and making shocking performance gains. That's not the perpetual "10 years away" it's been for the last 50 years; that's something actually happening in the near term. Jobs are already being lost.
People are scared of AI taking over because they (rightfully) recognize it as a threat. That's not because these tools are worthless; if they were, you'd have nothing to fear.
ChatGPT didn't begin 18 months ago; the research it originates from has been ongoing for years. How old is AlexNet?
I'm comparing ChatGPT's initial benchmarks to its capabilities today. Observable improvements have been made in less than two years. Even if you just want to track time from the development of modern LLM transformers ("Attention Is All You Need"/BERT), it's still a short history with major gains (AlexNet isn't really meaningfully related). These haven't been incremental changes on a slow and steady march to AI sometime in the sci-fi-scale future.
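(For anyone wondering what that paper's core operation actually is: below is a minimal NumPy sketch of scaled dot-product attention. The shapes and values are toy examples, not anything from a real model.)

```python
# Minimal sketch of scaled dot-product attention, the core operation from
# "Attention Is All You Need" (Vaswani et al., 2017). Toy sizes for illustration.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) matrices of queries, keys, and values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V  # each output token is a weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dim queries
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```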
AlexNet is related; it was the first use of consumer GPUs to train neural networks, no?
No, not even remotely. And that's kind of like citing "the first program to run on a CPU" as the start of development for any new algorithm.
As far as I can find, there was only one use of GPUs for CNNs prior to AlexNet, and it certainly didn't have the impact AlexNet had. Besides, running this stuff on GPUs instead of CPUs is a relevant technological breakthrough; imagine how slow ChatGPT would be running on a CPU. And it's not at all as obvious as it seems: most weather forecasts still run on CPU clusters despite being obvious targets for GPUs.
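(The GPU-vs-CPU gap is easy to demo yourself. A minimal PyTorch sketch; the matrix size is arbitrary and the size of the speedup depends entirely on your hardware:)

```python
# Minimal sketch of why neural nets moved to GPUs: training is dominated by
# big matrix multiplies, which parallelize well. Timings here are illustrative.
import time
import torch

def time_matmul(device, n=4096):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure setup is done before timing
    start = time.perf_counter()
    a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the async GPU kernel to finish
    return time.perf_counter() - start

print("cpu: ", time_matmul("cpu"))
if torch.cuda.is_available():
    print("cuda:", time_matmul("cuda"))
```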
What? AlexNet wasn't a breakthrough in that it used GPUs; it was a breakthrough for its depth and performance on image recognition benchmarks.
We knew GPUs could speed up neural networks in 2004. And I'm not sure that was even the first.
Okay, so some of the advances that ChatGPT uses (consumer GPUs for training) are even older? 😁
Why stop there? The digital computer was introduced in 1942 and methods for solving linear equations were developed in the 1600s.
Blockchain is used in more places than you'd expect... not the P2P version, or the "cryptocurrency" version, just the "signature-based chained list" one. For example, all signed Git commits form a blockchain: each commit embeds the hash of its parent, so the history is a tamper-evident chain.
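(A minimal Python sketch of that "chained list" idea — this mimics the shape of Git's history, not its actual object format:)

```python
# Minimal sketch of a hash-chained list, the structure behind Git history
# (and blockchains): each entry commits to its predecessor's hash, so
# tampering with any earlier entry changes every hash after it.
# This is NOT Git's real object format, just the shape of the structure.
import hashlib

def make_entry(parent_hash, payload):
    digest = hashlib.sha256((parent_hash + payload).encode()).hexdigest()
    return {"parent": parent_hash, "payload": payload, "hash": digest}

chain = [make_entry("0" * 64, "initial commit")]
chain.append(make_entry(chain[-1]["hash"], "fix typo"))
chain.append(make_entry(chain[-1]["hash"], "add feature"))

for entry in chain:
    print(entry["hash"][:12], entry["payload"])
```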
The Metaverse has been bubbling on and off for the last 30 years or so; each iteration gets slightly better... but it keeps failing at the same points (I think I wrote about it 20+ years ago, with points that are still valid).
Web 3.0, not to be confused with Web3, is the Semantic Web, in the works for the last 20+ years. Web3 is a cool idea for a post-scarcity world, but pretty useless right now.
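(If "Semantic Web" sounds abstract: the idea is publishing data as machine-readable subject–predicate–object triples. A minimal Python sketch using the rdflib library, with a made-up example.org resource:)

```python
# Minimal sketch of Semantic Web data: RDF triples describing a resource.
# The example.org URI and the person "Alice" are made up for illustration.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
alice = URIRef("http://example.org/alice")
g.add((alice, RDF.type, FOAF.Person))        # subject, predicate, object
g.add((alice, FOAF.name, Literal("Alice")))

print(g.serialize(format="turtle"))  # machine-readable, linkable data
```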
Dot.com was the original Web bubble... and here we are, on the Web, post-bubble.