Generative AI is still a solution in search of a problem
(www.axios.com)
"A solution in search for a problem" is a phrase used way to much, and almost always in the wrong way. Even in the article it says that it has been solving problems for over a year, it just complains that it isn't solving the biggest problems possible yet. It is remarkable how hard it is for people to extrapolate based on the trajectory. The author of this paper would have been talking about how pointless computers are if they were alive in the early 90s, and how they are just "a solution in search for a problem".
Extrapolation is, like, one notch above guessing, though. It's not wrong, exactly, but I'm not convinced failing to do it is an error in every context.
Mostly, you're right: this article makes its argument by openly ignoring all the applications the technology has found. But anything where "hallucination" would be a problem might need a fundamentally different technology.
I think without anything akin to extrapolation, we just need to wait and see what the future holds. In my view, most people are almost certainly going to be hit upside the head in the not-too-distant future. Many people haven't even considered what a world might be like where pretty much all the jobs people are doing now are easily automated. It is almost like, instead of considering this, they are just clinging to some idea that the 100-meter wave hanging above us couldn't possibly crash down.
Since coming to Lemmy, I have had more conversations with people unreasonably doubting that anything will change; that's true. Those people are guessing at best.
There's other data we have on this one. GPT-5 is coming, so near-term extrapolation is reasonable. After that, exponential increases in compute have only bought linear increases in performance, and running out of internet to train on is an increasingly real threat, so just adding more parameters is ultimately unsustainable. The following period would be about using neural nets cleverly together with conventional algorithms, but it's hard to know how far that can go. Anything from a spooky near-term hyperintelligence to another decades-long AI winter is possible.
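To put rough numbers on the "exponential compute, only linear gains" point, here's a toy Python sketch of a power-law scaling curve. The exponent (0.05) and the "performance score" (log-loss improvement) are illustrative assumptions, not measurements from any real model:

```python
import math

# Toy power-law scaling curve: loss ~ compute^(-alpha).
# ALPHA = 0.05 is an assumed, illustrative exponent, not a fitted value.
ALPHA = 0.05

def loss(compute: float) -> float:
    """Hypothetical loss under a simple power-law scaling curve."""
    return compute ** -ALPHA

for exp in range(20, 27):            # compute budgets from 1e20 to 1e26 "FLOPs"
    c = 10.0 ** exp
    score = -math.log10(loss(c))     # treat log-loss improvement as "performance"
    print(f"compute 1e{exp}: loss {loss(c):.4f}, performance score {score:.2f}")

# Each 10x (exponential) jump in compute adds the same fixed ~0.05 to the
# performance score: exponential cost, linear gain.
```

Under that kind of curve, every 10x of compute buys the same small bump, which is the sustainability worry in a nutshell.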
Physical jobs, at least, are looking fairly safe, so if you want job security, become an electrician. Millions of years of evolving to scurry through chaotic, tangled environments is apparently hard to replicate. Even regulated public roadways have proven tricky.
Honestly, the fact that serious, important people are talking about it at all is a pleasant surprise. I still have conversations where people complain about the freakish weather, and then clam up suddenly after a while because they remember climate change is supposed to be a hoax. I don't even try to rub it in; it just happens.