I feel this is all just a scam to drive up the value of AI stocks. No one in the media seems to talk about the hallucination problem, the limited-data problem for new models (Habsburg-AI), the energy constraints, etc.
It's all uncritical belief that "AI" will just become smart eventually. This technology is built on hype, nothing more. There are limitations, and they have been reached.
And these current LLMs aren't just gonna find sentience on their own. Sure, they'll pass a Turing test, but they aren't alive lol
I think the issue is not whether it's sentient or not, it's how much agency you give it to control stuff.
Even before the AI craze this was an issue. Imagine you built an automatic turret that kills living beings on sight: you'd have to make sure to add a kill switch (something like the dead-man's-switch sketch below), or you yourself wouldn't be able to turn it off anymore without getting shot.
The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.
An atomic bomb doesn't pass a Turing test, but it's a fucking scary thing nonetheless.
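To make that kill-switch point concrete, here's a minimal dead-man's-switch sketch in Python. It's just the principle, not any real robotics API; `act()`, `shutdown()`, and the one-second timeout are made-up placeholders:

```python
import threading
import time

# Dead-man's-switch pattern: the autonomous loop may only act while
# an external controller keeps renewing its permission. If the
# controller goes silent, the system halts itself, instead of
# someone having to halt it.

class DeadMansSwitch:
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self._last_renewal = time.monotonic()
        self._lock = threading.Lock()

    def renew(self):
        """Called periodically by the (human) controller."""
        with self._lock:
            self._last_renewal = time.monotonic()

    def engaged(self) -> bool:
        with self._lock:
            return time.monotonic() - self._last_renewal < self.timeout_s

def act():
    # placeholder for whatever the system does autonomously
    print("acting autonomously")

def shutdown():
    print("permission expired: halting")

def autonomous_loop(switch: DeadMansSwitch):
    while switch.engaged():      # permission must keep arriving
        act()
        time.sleep(0.1)
    shutdown()                   # fail safe, not fail deadly

switch = DeadMansSwitch(timeout_s=1.0)
t = threading.Thread(target=autonomous_loop, args=(switch,))
t.start()
for _ in range(3):               # controller renews for a while...
    time.sleep(0.5)
    switch.renew()
time.sleep(2.0)                  # ...then stops; the loop halts itself
t.join()
```

The key design choice: the system needs *continuous* permission to keep going, so losing the controller fails safe rather than fail-deadly.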
Habsburg-AI? Do you have any idea how much you made me laugh in real life with this expression??? It's just... perfect! Model degeneration is a lot like what happened to the Habsburg family's genetic pool.
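To see why the analogy fits, here's a toy simulation (nobody's actual training pipeline; the 100-token vocabulary and 200-sample "training sets" are arbitrary). Each generation is trained purely on the previous generation's output, and once a rare token misses one generation's sample, it's gone for good:

```python
import random
from collections import Counter

# Toy illustration of "Habsburg-AI" / model collapse: each generation
# is trained only on samples from the previous generation's model.
# A token type that draws zero samples in one generation can never
# reappear, so diversity can only shrink.

random.seed(1)
vocab = list(range(100))          # 100 distinct "tokens"
weights = [1.0] * 100             # generation 0: uniform model

for gen in range(1, 21):
    # sample a finite training set from the current model
    data = random.choices(vocab, weights=weights, k=200)
    counts = Counter(data)
    # "retrain": new model = empirical frequencies of that sample
    weights = [counts.get(tok, 0) for tok in vocab]
    alive = sum(w > 0 for w in weights)
    print(f"generation {gen:2d}: {alive:3d}/100 token types survive")
```

It's the same absorbing-state dynamic as a closed gene pool: variety lost in one generation never comes back.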
When it comes to hallucinations in general, I've got another analogy: someone trying to drive nails with a screwdriver, failing, and calling it a hallucination. In other words, I don't think the models are misbehaving; they're behaving exactly as expected, and any "improvement" in this regard is basically a band-aid added by humans to a procedure that doesn't yield a lot of useful output to begin with.
And that reinforces the point from your last paragraph: those people genuinely believe that, if you feed enough data into an L"L"M, it'll "magically" become smart. It won't, just like 70 kg of bees won't "magically" think as well as a human being would. The underlying process is "dumb".
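Here's what I mean by "dumb", as a toy: a bigram sampler that only knows which word tends to follow which word. The three-sentence corpus is obviously made up, and a real LLM is vastly bigger, but the objective is the same kind of thing, plausible continuation rather than truth:

```python
import random
from collections import defaultdict

# Toy next-token model: it only learns "what word tends to follow
# what word", with no notion of truth. Fluent-looking output that
# happens to be false isn't a malfunction; it's the design.

corpus = ("the model predicts the next word . "
          "the model has no idea what is true . "
          "the screwdriver is not a hammer .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(7)
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])   # plausible next word, nothing more
    out.append(word)
print(" ".join(out))
```

It happily emits fluent-looking sequences with zero regard for whether they're true. Scale that objective up a few billion parameters and you get confident nonsense, i.e. a "hallucination".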
I am glad you liked it. I can't take credit for this one, though; I first heard it from Ed Zitron on his podcast "Better Offline". Highly recommended.
Energy constraints could actually be worked around fairly easily using analog computing methods. Otherwise I agree completely, though: what's the point of spending energy on useless tools? There are so many great things AI is and could be used for, but like anything exploitable, whatever is "for the people" ends up as some amalgamation for extracting our dollars.
The funny part to me is the weird dichotomy around things like the "beautiful" AI cabins mentioned, which are clearly fake: people either don't care or are too ignorant to notice the poor details, yet at the same time so many generative AI tools are being used specifically to remove imperfections during editing. And that in itself is a shame. I'm definitely guilty of aiming for "the perfect composition", but sometimes nature and timing force your hand, which makes the piece ephemeral in a unique way. Shadows are going to exist; background subjects are going to exist.
The current state of marketed AI is selling the promise of perfection, something that has been sold for years already. It's just far easier now to pump out scam material with these tools, and it gets easier with each advancement in this sort of technology; and now the harm extends beyond the predator's victims to the environment itself.
It really sucks being an optimist sometimes.
It could all be just hype. But I don't entirely agree. Personally, I believe we are only a few years away from AGI. Will it come from OpenAI and LLMs? Maybe, but it will more likely come from something completely different. Like it or not, we are within spitting distance of a true Artificial Intelligence, and it will shake the foundations of the world.