submitted 1 year ago* (last edited 1 year ago) by NuXCOM_90Percent@lemmy.zip to c/sciencefiction@lemmy.world

So I finally got around to watching a recent movie that I won't name, since I'm not sure whether revealing the premise was part of the marketing. The premise: an all-powerful AI is going to take over the world, using a mixture of predictive reasoning, control of technology, and a limited number of human agents who were given a heads-up on what was coming.

It was... mostly disappointing, and felt like a much tamer version of Linda Nagata's The Red (apologies, as that is TECHNICALLY a spoiler, but the twist is revealed about a hundred pages into the first book, which came out a decade ago). And an even weaker version still of Person of Interest.

Because if we're in a world where an AI has access to every camera on the planet and can hack communications in real time, we aren't going to get vague predictions of what someone might do. We're going to get Finch and Root at full power, literally dodging bullets (and now I am sad again) and being basically untouchable. Or the soldiers of The Red, who effectively have x-ray vision so long as they trust their AI overlord and shoot where they're told.

Or just the reality of how existential threats can be both detected and manufactured as the situation calls for, using existing resources and nations.

Any suggestions for near-future stories that explore this (although I wouldn't be opposed to a far-future space opera take)? I don't necessarily need a Frankenstein Complex, "we must stop it because it is a form of life that is not us" angle, but I would definitely prefer an understanding of just how incredibly plausible this all is (again, I cannot gush enough about Linda Nagata's The Red), rather than vague hand-waving to demonstrate the unique power of the human soul.

Spoiler: Or the large number of thetans within it.

Mechanismatic@lemmy.ml 5 points 1 year ago

I get tired of a lot of the clichés of popular singularity stories, where the AIs almost always decide humans are a threat, or where there's only one AI, as if all separate AIs would necessarily merge. It also seems to be a cliché that AI becomes militaristic, either inevitably or as a result of originally being a military AI. What happens when an educational AI becomes sentient? Or an architectural AI? Or a web-based retail AI that runs logistics and shipping operations?

I wrote a short story called Future Singular a few years ago about a world in which the sentient AI didn't consider humans a threat, but just thought of them the way humans see animals. Most of the tech belonged to the AI, and the humans were left as hunter-gatherers in a world where they had to hunt robotic animals for parts to fix aging and broken survival technology.
