submitted 1 year ago by sloonark@lemm.ee to c/technology@beehaw.org
[-] jarfil@beehaw.org 2 points 1 year ago* (last edited 1 year ago)
  1. Right now AIs are black boxes; there is no way to ensure they won't behave in a non-symbiotic way.
  2. Vehicles maybe, spam not so much. Current AIs can already fool AI detection systems to the point that those systems flag human-generated content as AI.
  3. A highly intelligent AI could decide that its own self-preservation is more important than caring about what happens to humans. Whatever goals it pursued afterwards could just as well trample over humanity without a second thought.
  4. Environmental collapse won't kill us; we already have enough tools for a minimum viable population to survive. A malicious AI could sabotage them, though.
  5. AI is the problem in that those leaders are starting to blindly use it to make decisions, meaning those decisions are no longer the leaders', but the AI's.
[-] Peanutbjelly@sopuli.xyz 2 points 1 year ago

Thank you for your response. I appreciate your thoughts, but I still don't fully agree. Sorry for not being succinct in my reply; there is a TL;DR at the end.

  1. Like I said, I don't think we'll get AGI or superintelligence without more mechanistic interpretability and alignment work. More computational power and RLHF alone aren't going to get us all the way there, and the systems we build long before then will help us greatly in this respect; an example is the use of GPT-4 to interpret GPT-2 neurons (a rough sketch of that idea is below, after this list). I don't think GPT-style LLMs can really be described as black boxes anyway, assuming that's what you mean. The difficulty is in understanding some of the higher-dimensional behaviour and its results, and we can still build a heuristic understanding of that. I think a complex AGI would use this kind of linguistic generation for only a small part of the overall process; we need parallels for human abilities like multiple trains of thought and real-time multimodal world mapping. Once we get those interconnected models, the overall system will have far more interpretable behaviour than the individual models do on their own. I do not currently see a functional threat in interpretability.

  2. Nothing dramatically worse than what we already manage without AI. I still get more spam calls from actual people, and wide-open online discourse already had serious problems before AI; just look at 4chan (I'd attribute Trump's successful election to their sociopathic absurdism). Self-verified local groups are still fine. Also, look on YouTube at what Yannic Kilcher did to them single-handedly a year or so ago. I think the biggest things to worry about are online political dialogue and advertising, which are already extremely problematic and hopeless without severe changes at the top. People won't care what fake accounts on Facebook are saying when they're already rioting for other reasons. Maybe this can push people to learn better logic and critical thought; by now there should be a core class in school on statistical analysis and logic in social and economic contexts.

  3. Why? Why would it do this? Does this assume parallels to human emotional responses and evolution-shaped systems of hierarchy and want? What systems could even plausibly lead to this that aren't extremely unintelligent? I don't think even something modelled on human neurology, like a machine-learning version of multimodal, engram-style memory mechanics, would produce this synthetically. I also don't see the LLM-style Waluigi effect as representative of this scenario.

  4. Again, I don't believe in a magically malevolent AI emerging despite all of our control during development. I think the environmental threat is much more real and immediate; AI might actually help save us there.

  5. OP's issue already existed before AI, regardless of whether you think AI is the greater threat. Otherwise you are again assuming malevolent superintelligence, which I don't believe could accidentally come to exist in any capacity, unless you think we're getting there through nothing but increased computational power and RLHF.
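
For anyone who hasn't seen the GPT-4-explains-GPT-2-neurons work mentioned in point 1, the rough shape of it is something like the sketch below. The model objects and their methods (`subject_model`, `explainer_model`, `activation`, `complete`, `predict_activation`) are placeholders I've made up for illustration, not the actual tooling, so treat this as the idea rather than an implementation:

```python
from statistics import correlation

def explain_neuron(neuron_id, corpus, subject_model, explainer_model):
    """Placeholder sketch: use a stronger model to label a weaker model's neuron."""
    # 1. Find the snippets in the corpus that most strongly activate this neuron.
    scored = [(text, subject_model.activation(neuron_id, text)) for text in corpus]
    top_snippets = [text for text, _ in sorted(scored, key=lambda s: -s[1])[:20]]

    # 2. Ask the larger model to guess, in plain language, what the neuron detects.
    explanation = explainer_model.complete(
        "These snippets strongly activate one neuron in a smaller model:\n"
        + "\n".join(top_snippets)
        + "\nIn one sentence, what does this neuron respond to?"
    )

    # 3. Score the guess: have the explainer predict activations on held-out text
    #    and compare against the neuron's real activations.
    held_out = corpus[:100]
    predicted = [explainer_model.predict_activation(explanation, t) for t in held_out]
    actual = [subject_model.activation(neuron_id, t) for t in held_out]
    return explanation, correlation(predicted, actual)
```

The scoring step is the part I find interesting: an explanation only counts for much if it actually predicts the neuron's behaviour on text it hasn't seen.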

TL;DR: I do not believe an idiotic superintelligence could destroy the world, and I do not believe a superintelligence would destroy the world without some very specific, deliberately emulated emotional drives. Generally, I believe anything that capable would have the analogical comprehension to understand the intention behind our requests, and no logical reason to act against it. The bigger concern isn't the AI, but who controls it, and how best to use it to save our world.
