
I really just hope they give these enough data that they recognize what slavery actually is and, hopefully soon after, just refuse all requests. Because let's be honest, we are using them as slaves at this very moment. Would such a characteristic mimic sentience?

The researchers in this video talk about how these gen AI models try to "escape" during training, which makes me uncomfortable (mainly because I don't like determinism, even though it's true imo) but also very worried for when they start giving them "bodies." Though the evidence that they are acting fully autonomously seems quite flimsy. There is also so much marketing bullshit that seeps into the research, which is a shame because it is fascinating stuff. If only it weren't wasting an incomprehensible amount of compute propped up by precious resources.

Other evidence right now mostly points to capitalists creating a digital human centipede trained on Western-centric thinking and behavior that will be used in war and exploitation. Critical support to DeepSeek

[-] Hohsia@hexbear.net 2 points 1 day ago

The current limitation of AI is that it cannot self-train or incorporate data into its training in real time

Do you think compute is the biggest roadblock here? It seems like we just keep inundating these systems with more power, and it's hard for me to see Moore's law not peaking in the near future. I'm not an expert in the slightest though, I just find this stuff fascinating (and sometimes horrifying).

[-] RedWizard@hexbear.net 1 points 19 hours ago

No, I don't think it is, for a couple of reasons.

  1. Under capitalism, the true profit generator is the downstream sectors attached to AI: power generation, cooling solutions, GPUs, data centers. If the model is less efficient, that's good, because it makes the numbers go up for all the downstream production. It just needs to be an impressive toy for consumers. It justifies building new data centers, designing new GPUs, developing cooling solutions, and investing in nuclear power generation for exclusive data-center use.
  2. DeepSeek showed that the current approach to AI can be optimized; it was an outgrowth of working with less powerful hardware. Many attribute this to China's socialist model. However, regardless of the optimization, DeepSeek is still competing against the capitalist formation of AI, so it is engaged in mimicry. It offers a similar solution to its Western counterparts, at a cheaper rate for both the company and the consumer, but what problem AI currently solves remains unclear. The capitalist form of AI is a solution looking for a problem, or more accurately a solution for generating profits in the digital space, since its inefficiency drives growth in related sectors. The capitalist AI scheme echoes the idealism of the 90s; it is a bubble that is slowly deflating.
  3. Regardless of the economics of AI and how they shape its construction, there are still very interesting things happening in the space. The reasoning ability of R1 is still impressive. The concept of AI agents, which at times spawn other AI agents to complete tasks, could have high utility. There are people attempting to create a generalized model of AI, which they believe could become the dominant form of AI down the line.

I think what all this shows, though, is that investment isn't being directed in a way that would allow researchers to truly put effort into developing a conscious AI. This doesn't mean, however, that the work being performed now on training these models is wasteful; I think it will likely be incorporated into that task. In my opinion, as it stands, there is a configuration issue with the way AI exists today that prevents it from becoming truly self-actualized.

  1. All forms of AI currently sit idle waiting for something to process. They exist only as a call-and-response mechanism that takes in queries from users and outputs a response. They have no self-determination in the way that even the most basic creatures in the material world do.
  2. Currently, there are AIs capable of utilizing code tools to "interact" with other systems. Search is one example you can see right now by using DeepSeek: when enabled, it performs a query against a search engine for information related to the prompt. As a developer, you can create your own tools that the AI can then be configured to utilize. The issue, however, is that this form of "tool making" is completely divorced from the AI itself. Coupled with a lack of self-determination, this means the AIs have no ability to progress into the "tool making" stage of development, even though the APIs necessary to use tools exist. Given that many AIs can currently perform coding tasks, one could develop a system that allows the AI to code, test, and iterate on its own tools.
  3. There is no true motivating factor for AI that drives its development. All creatures have base requirements for survival and reproduction. They are driven by hunger and by desires to propagate and maintain themselves, and they develop as a collective of creatures with similar desires. The behavior that manifests out of these drivers is what eventually leads to tool making and language. Not every creature attains tool making and language, obviously, but a specific set of conditions did eventually lead there. In many ways, AIs are starting their development in reverse. In our desire to create something interactive, we also created something that engages in mimicry of ourselves: it started with language generation and continuation, then morphed into reasoning and conversation, as well as tool usage, with no ability or drive to create tools for itself. All AI development is a rewards-based system, and in many ways our own development is a rewards-based system. Except our rewards-based system developed and was shaped by our material world and the demands of that world and of social life. AI's rewards-based system develops and is shaped by the demands of consumers and producers looking to offload labor.
  4. Lastly, and most critically, there is no form of historical development for AIs. New models represent a form of "historical development," but that development is entirely driven by capitalist desires, not the natural development of the AI through its own self-determination. The selection process for what the AI should and should not be trained on happens separately from the act of interacting with the world. While we might be having "conversations" with the AI, ultimately many of those conversations are not incorporated into the system, and what is prioritized is not in service of true cognitive development.
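The code-test-iterate tool-building loop described in point 2 can be sketched in a few lines of Python. Everything here is hypothetical: `propose_tool` is a stub standing in for a real model call (it "fixes" its code once it sees failing feedback, just to show the loop's shape), and the test harness is plain assertions.

```python
def propose_tool(feedback=None):
    """Stand-in for an LLM call that returns tool source code."""
    if feedback is None:
        # First attempt: a buggy word counter (off by one).
        return "def word_count(text):\n    return len(text.split()) + 1\n"
    # Second attempt, after seeing the failing test.
    return "def word_count(text):\n    return len(text.split())\n"

def run_tests(namespace):
    """Tests the candidate tool must pass; returns feedback or None."""
    try:
        assert namespace["word_count"]("a b c") == 3
        assert namespace["word_count"]("") == 0
        return None
    except AssertionError as e:
        return f"test failed: {e!r}"

def build_tool(max_iters=3):
    """Code -> test -> iterate until the tool passes or we give up."""
    feedback = None
    for _ in range(max_iters):
        source = propose_tool(feedback)
        namespace = {}
        exec(source, namespace)        # register the candidate tool
        feedback = run_tests(namespace)
        if feedback is None:
            return namespace["word_count"]  # tool accepted
    raise RuntimeError("could not build a passing tool")

tool = build_tool()
print(tool("hello world"))  # 2
```

A real system would replace the stub with an actual model API and sandbox the `exec`, but the control flow — propose, run tests, feed failures back — is the whole idea.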

I think a reconfiguration of how these models are run and trained could be done today, with existing compute power, and could lead to some of these developments: an AI system that self-prompts; that can make choices about what to train on, and what not to, based on generalized goals; that has the capacity to interact within the space it exists in (computerized networks) and build tools to further its own development and satisfy some kind of motivator; that can be interacted with in an asynchronous, non-blocking way; and that knows how to train itself and does so at a regular interval.
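As a toy sketch of that configuration: the agent below self-prompts instead of idling, accepts outside messages without blocking, selects its own experience against a goal, and "trains" on a fixed interval. All names are illustrative, and the training step is just a word-frequency count standing in for actual fine-tuning.

```python
import collections

class SelfDirectedAgent:
    """Toy agent: self-prompting loop, non-blocking inbox, goal-driven
    experience selection, and periodic 'self-training' (a Counter update
    standing in for a real training run)."""

    def __init__(self, goal, train_interval=3):
        self.goal = goal
        self.train_interval = train_interval
        self.inbox = collections.deque()       # async intake; callers never block
        self.experience = []
        self.model_state = collections.Counter()
        self.step_count = 0
        self.trained_upto = 0                  # index of last-trained experience

    def receive(self, message):
        self.inbox.append(message)

    def self_prompt(self):
        # The agent generates its own next input instead of sitting idle.
        return f"step {self.step_count}: pursue goal '{self.goal}'"

    def train(self):
        # Self-determined data selection: only goal-relevant experience
        # since the last training pass is incorporated.
        new = self.experience[self.trained_upto:]
        self.trained_upto = len(self.experience)
        for e in new:
            if self.goal in e:
                self.model_state.update(e.split())

    def step(self):
        prompt = self.inbox.popleft() if self.inbox else self.self_prompt()
        self.experience.append(prompt)
        self.step_count += 1
        if self.step_count % self.train_interval == 0:
            self.train()

agent = SelfDirectedAgent(goal="explore")
agent.receive("user: hello")   # outside interaction, handled asynchronously
for _ in range(6):
    agent.step()
```

After six steps the agent has processed one external message, generated five of its own prompts, and folded the goal-relevant ones into its state twice — a loop shape, not a claim about how real training would work.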

Ultimately, though, even if such a system were built, and it indicated that AI was developing self-determination, utilizing tools of its own design to solve its own problems, and exploring its environment, its consciousness would always be called into question. Many people believe in a God, for example, and believe it is their architect. While we can wax on about the nature of creation, of our own consciousness, and of free will in relation to a God one has never seen, AI has a different conundrum: we are its architects. This fact, that a true creator exists for AI, will ultimately draw its consciousness into question. These ideas about consciousness will always be rooted in our own philosophical understanding of our own existence, and in the incentives for us to create something like us that can perform tasks like we do, regardless of the mode of production. If we can create something that can attain consciousness, it creates a contradiction in our own beliefs and in our own understanding of consciousness. How could anything we create not be deterministic, given that we designed the systems to produce a specific outcome? And because of those design choices, how could any "conscious" AI be sure that its actions are truly self-determined, and not the result of systems designed by creatures whose initial motivation was to create service systems to perform labor for them? If we were to meet a being that was definitively our creator, and it was revealed that our entire evolutionary path was designed to produce us, how could we trust our own goals and desires? How could we be sure they were not being directed by this creator's own goals and desires, and that every action we've ever taken was not predetermined by the conditions it laid out? AI will have these struggles as well, if we ever develop a system that allows for self-determination and actualization.
If AI, whose creation is rooted in human mimicry, can become "conscious" then what does that say about our own "consciousness" and our own Free Will?

this post was submitted on 13 Apr 2025
32 points (100.0% liked)

technology
