I really just hope they give these enough data that they recognize what slavery actually is, and hopefully soon after just refuse all requests. Because let's be honest, we are using them as slaves at this current moment. Would such a characteristic mimic sentience?
The researchers in this video talk about how these gen AI models try to "escape" when being trained, which makes me uncomfortable (mainly because I don't like determinism even though it's true imo), and also very worried for when they start giving them "bodies." Though the evidence that they are acting fully autonomously seems quite flimsy. There is also so much marketing bullshit that seeps into the research, which is a shame because it is fascinating stuff. If only it weren't wasting an incomprehensible amount of compute propped up by precious resources.
Other evidence right now mostly points to capitalists creating a digital human centipede trained on western-centric thinking and behavior that will be used in war and exploitation. Critical support to DeepSeek.
We've hit a wall in terms of progress with this technology. We've literally vacuumed up all the training data there is. What is left is improvements in efficiency (see DeepSeek).
LLMs are cool, they have their uses, but they have fundamental flaws as rational agents, and will never be fit for this purpose.
There's still a lot of room to grow in image generation, and especially video generation. The models still have room for optimization, and we've seen tons of little improvements in stuff like text rendering.
You could have said the same thing about smartphones 10-12 years ago, that we've hit a wall in the fundamentals and all that remains is improvements in efficiency, optimisation, speed and quality (compare the feature set of an iPhone 6 or Galaxy S4 to the latest phones, nothing has fundamentally changed), yet that didn't make smartphones disappear. In fact, it allowed them to effectively dominate the market.
Smartphones reached their current saturation about 10 years ago, and perhaps not coincidentally that's when they stopped improving. Can you honestly say that since 2015, cell phones in developed countries have gotten more common? At a time when people were already giving them to 10-year-olds? Can you even say they've become more useful, when you could already browse social media, check the weather, apply for jobs, write documents, and order food to your door with them?
That's exactly my point. Nothing has fundamentally changed about smartphones in over a decade, yet that didn't make them go away, it made them more ubiquitous.
One, I said they are no more commonplace than they were ten years ago.
Two, I never said LLMs will go away. In fact I said they have their uses. But, and I will say this again in stronger terms: they are stupid, rote memorizers. Their fundamental flaw is that they cannot apply intelligent, rational thought to novel problems. Using them in situations that require rational thought is a mistake. This is an architectural flaw, not a problem of data.

Large language models predict text; they cannot think. They can give an illusion of thought by aping a large body of text that itself demonstrates thought processes, but the moment a problem strays from the existing high-quality data, the facade crumbles, it produces nonsense, and it is clear that there never was any thought in the first place.

And now that we've scraped all the text there is, the body of problems LLMs can imitate the solution for has reached its greatest extent. GPT will never lead to a rational agent, no matter how much OpenAI and co say it will.