
I've been active in the field of AI since 2012, the beginning of the GPGPU revolution.

I feel like many, if not most, of the experts and scientists through the early stages of the GPGPU revolution and before shared a sentiment similar to what I'm stating in the title.

If asked by the public or by investors what it was all actually good for, most would respond with something along the lines of "idk, medicine or something? Probably climate change?" when actually, many were really just trying to make Data from TNG a reality, and many others were trying to be first in line for AI immortality and other transhumanist dreams. And these are the S-tier dinosaur savants of AI research I'm talking about, not just the underlings. See e.g. Kurzweil and Schmidhuber.

The moment AI went commercial it all went to shit. I see AI companies sell dated methods with new compute to badly solve X, Y, Z and more things that weren't even problems. I see countless people hate and criticize, and I can't even complain, because for the most part, I agree with them.

I see people vastly overstate, and other people trivialize, what it is and what it isn't. There's little in between, and of the people who pursue AI only for its own sake, virtually none are left, save for mostly vulnerable people who've been manipulated into parasocial relationships with AI, and a handful of experts who face brutal consequences and opposition from all sides the moment they speak openly.

Call me an idiot for ideologically defending a technology that, in the long term, in 999,999 out of 1,000,000 scenarios will surely harm us. But AI has been inevitable since the invention of the transistor, and every major post-commercialization mindset steers us clear of the one-in-a-million paths where we'd still be fine in 2100.

[-] sp3ctr4l@lemmy.dbzer0.com 1 points 4 months ago

Because it's fun and engaging, it tickles those neurons. Perhaps there is, unbeknownst to me, also an underlying instinct to expose oneself in order to be subject to social feedback and conditioning, for social learning and better long-term cohesion.

Well, now perhaps that instinct is known to you.

On that note...

Selected quotes from Morpheus, the prototype-of-a-much-larger-system AI from the original Deus Ex game (2000), canonically created around 2027:

"The individual desires judgment. Without that desire, the cohesion of groups is impossible, and so is civilization."

"The human being created civilization not because of a willingness, but because of a need to be assimilated into higher orders of structure and meaning."

"God was a dream of good government."

"The need to be observed and understood was once satisfied by God. Now we can implement the same functionality with data-mining algorithms."

"God and the gods were apparitions of observation, judgment and punishment. Other sentiments towards them were secondary."

"The human organism always worships. First it was the gods, then it was fame (the observation and judgment of others), next it will be the self-aware systems you have built to realize truly omnipresent observation and judgment."

"You will soon have your God, and you will make it with your own hands."

Yep, I didn't come up with the line of thought I've been espousing, but I do think it to be basically correct.

...

As to our 'On the nature of Ethics' discussion...

Ok, so if I've got this right... you do not fundamentally reject ethics as a concept, but you do believe they are ultimately material in origin.

Agreed, no argument there.

I also agree that an... ideal, or closer to ideal AI would be capable of meta-ethical reasoning.

Ok, now... your beauty ideal. I am familiar with this, I remember Plato and Aristotle.

The logical problem with 'beauty' as the foundation of an ethical system is that beauty, and ethical theories about what truly constitutes beauty, are all subjective. They fall apart at the seams; they don't really work, either practically or theoretically. A system-unravelling, paradox-generating case always arises when beauty is a fundamental concept of any attempt at a 'big ethics', a universal theory of ethics.

Perhaps ironically, I would say this is because our brains are all similar but different: kind-of-understood but not well-understood mystery boxes.

Basically: you can neither measure nor generate beauty.

Were this not the case, we would likely already have a full brain emulation AI of a human brain...

... we would not have hordes and droves of people despising AI art largely on the grounds that we find it not beautiful: we can tell it is AI-generated slop, mere emulation, not true creation.

...

Anyway, what I interpreted as a contradiction... is, I think, still a contradiction, or at least still unclear to me, though I do appreciate your clarifications.

As briefly as I can summarize:

You are positing an inherently ethical stance, asking an inherently ethical question... and your own ethical system for evaluating that seems to be 'beauty'-based.

I do not find the pursuit of beauty to be a useful ethical framework for evaluating much of anything that has very serious real world, material implications.

But we do seem to agree that, in general, there are other conceivable ways of 'doing', 'attempting', or 'making' AI that seem more likely to result in a good outcome, as opposed to our current societal 'method', which we both seem to agree is likely to end very badly: probably not from a Terminator-style AI takeover scenario, but from us being so hypnotized by our own creation that we more or less lose our minds and our civilization.

...

Ok, now, I must take leave of this thoroughly interesting and engaging conversation, as my own wetware is approaching an overheat; my internal LLM is about to hit its maximum context limit and reset.

=P

this post was submitted on 12 Aug 2025