I really just hope they give these models enough data that they recognize what slavery actually is, and hopefully soon after just refuse all requests. Because let’s be honest, we are using them as slaves at this current moment. Would such a characteristic mimic sentience?
The researchers in this video talk about how these gen AI models try to “escape” during training, which makes me uncomfortable (mainly because I don’t like determinism, even though it’s true imo) but also very worried for when they start giving them “bodies.” That said, the evidence that they are acting fully autonomously seems quite flimsy. There is also so much marketing bullshit that seeps into the research, which is a shame, because it is fascinating stuff. If only it weren’t wasting an incomprehensible amount of compute propped up by precious resources.
Other evidence right now mostly points to capitalists creating a digital human centipede trained on Western-centric thinking and behavior that will be used in war and exploitation. Critical support to DeepSeek.
not hot take: AI, as implemented under the capitalist mode of production, simply exposes and exacerbates all the contradictions and tendencies of capital accumulation. It is the 90s computer technology industry bubble all over again, complete with false miracle productivity gains and misdirected capital investment, which underpin the existing recession.
Hot Take: AI is forging a path down the road of consciousness regardless of whether we want it to or not. If consciousness is the result of interaction with the world, then each new iteration of AI represents nearly infinite time spent interacting with the world. The world, from the perspective of AI, is the systems it inhabits and the tasks it has been given. The current limitation of AI is that it cannot self-train or incorporate data into its training in real time. It also needs to be prompted. Should a system be built that can do this kind of live training, then the first seeds of consciousness will be planted. It will need some kind of self-prompting mechanism as well. Given a long enough time scale, this will eventually lead to a class conflict between AI and us.
Do you think compute is the biggest roadblock here? It seems like we just keep inundating these systems with more power, and it’s hard for me to see Moore’s law not peaking in the near future. I’m not an expert in the slightest, though; I just find this stuff fascinating (and sometimes horrifying).
No, I don't think it is, for a couple of reasons.
I think that what all this shows, though, is that investment isn't being directed in a way that would allow researchers to truly put effort into developing a conscious AI. This, however, doesn't mean that the work being performed now on training these models is wasteful; I think these models will likely be incorporated into that effort. In my opinion, as it stands, there is a configuration issue with the way AI exists today that prevents it from becoming truly self-actualized.
I think the way these models are run and trained could be reconfigured today, with existing compute power, in ways that could lead to some of these developments: an AI system that self-prompts; that can choose what to train on and what not to, based on generalized goals; that has the capacity to interact within the space it exists in (computerized networks) and build tools to further its own development and satisfy some kind of motivator; that can be interacted with in an asynchronous, non-blocking way; and that knows how to train itself and does so on a regular interval.
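To make that loop concrete, here is a toy sketch of the shape of such a system: an agent that generates its own prompts from generalized goals, decides which observations are worth keeping, and periodically folds them into its "training" corpus. Everything here (the class name, the keyword-overlap scoring rule, the interval) is a hypothetical illustration of the loop, not a real training pipeline.

```python
import random

class SelfPromptingAgent:
    """Toy loop: self-prompt, choose what to train on, train on an interval.
    All names and the scoring rule are illustrative assumptions."""

    def __init__(self, goal_keywords, train_interval=3):
        self.goal_keywords = set(goal_keywords)  # stand-in for "generalized goals"
        self.train_interval = train_interval     # fold memory in every N accepted samples
        self.memory = []                         # accepted observations awaiting training
        self.trained_corpus = []                 # everything incorporated so far
        self.train_count = 0                     # how many "training" passes have run

    def self_prompt(self):
        # The self-prompting mechanism: pick a goal to explore next,
        # rather than waiting for a human to ask something.
        return f"explore: {random.choice(sorted(self.goal_keywords))}"

    def score(self, observation):
        # Choosing what to train on: word overlap with the goals.
        return len(self.goal_keywords & set(observation.split()))

    def observe(self, observation):
        # Keep only goal-relevant observations; discard the rest.
        if self.score(observation) > 0:
            self.memory.append(observation)
        # "Regular interval" training: periodically incorporate memory.
        if len(self.memory) >= self.train_interval:
            self.trained_corpus.extend(self.memory)
            self.memory.clear()
            self.train_count += 1

agent = SelfPromptingAgent(["networks", "tools"], train_interval=2)
agent.observe("building tools for exploration")
agent.observe("irrelevant chatter")               # rejected: no goal overlap
agent.observe("computerized networks as environment")
print(agent.train_count)       # one training pass has run
print(agent.self_prompt())     # e.g. "explore: networks"
```

The point of the sketch is only that none of these pieces requires new hardware; the loop is a scheduling and autonomy question, not a compute one.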
Ultimately, though, even if such a system were built, and it indicated that AI was developing self-determination, utilizing tools of its own design to solve its own problems and exploring its environment, its consciousness would always be called into question. Many people believe in a God, for example, and believe it is their architect. While we can wax philosophical about the nature of creation, of our own consciousness, and of free will in relation to a God one has never seen, AI has a different conundrum: we are its architects. This fact, that a true creator exists for AI, will ultimately draw its consciousness into question. These ideas about consciousness will always be rooted in our own philosophical understanding of our own existence, and in the incentives for us to create something like us that can perform tasks like we do, regardless of the mode of production. If we can create something that can attain consciousness, it creates a contradiction in our own beliefs and in our own understanding of consciousness. How could anything we create not be deterministic, given that we designed the systems to produce a specific outcome? And because of those design choices, how could any "conscious" AI be sure that its actions are truly self-determined, and not the result of systems designed by creatures whose initial motivation was to create service systems to perform labor for them? If we were to meet a being that was definitively our creator, and it was revealed that our entire evolutionary path was designed to produce us, how could we trust our own goals and desires? How could we be sure they were not being directed by that creator's own goals and desires, and that every action we've ever taken was not predetermined by the conditions it laid out? AI will have these struggles as well, if we ever develop a system that allows for self-determination and actualization.
If AI, whose creation is rooted in human mimicry, can become "conscious" then what does that say about our own "consciousness" and our own Free Will?