The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI. In some ways, they’re the same tensions that have plagued all Silicon Valley tech startups that start out with a “don’t be evil” philosophy. Now, though, the tensions are turbocharged.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

So a company like Anthropic has to wrestle with deep internal contradictions, and ultimately faces an existential question: Is it even possible to run an AI company that advances the state of the art while also truly prioritizing ethics and safety?

“I don’t think it’s possible,” futurist Amy Webb, the CEO of the Future Today Institute, told me a few months ago.

[-] MagicShel@programming.dev 3 points 2 months ago

Yes. Let's consider guns. Is there any objective way to measure the moral range of actions one can undertake with a gun? No. I can murder someone in cold blood or I can defend myself. I can use it to defend my nation or I can use it to attack another - both of which might be moral or immoral depending on the circumstances.

You might remove the trigger, but then it can't be used to feed yourself, while it could still be used to rob someone.

So what possible morality can you build into the gun to prevent immoral use? None. It's a tool. It's the nature of a gun. LLMs are the same. You can write laws about what people can and can't do with them, but you can't bake those laws into the tool itself and expect the tool to then be safe or useful for any particular purpose.

[-] sweng@programming.dev 5 points 2 months ago (last edited 2 months ago)

"So what possible morality can you build into the gun to prevent immoral use?"

You can't build morality into it, as I said. You can build functionality into it that makes immoral use harder.

I can, e.g.:

  • limit the rounds per minute that can be fired
  • limit the type of ammunition that can be used
  • make it easier to determine which weapon was used to fire a shot
  • make it easier to detect the weapon before it is used
  • etc. etc.

Society considers, e.g., hunting a moral use of weapons, while killing people usually isn't one.

So banning ceramic, unmarked, silenced, fully automatic weapons firing armor-piercing bullets can certainly be an effective way of reducing the immoral use of a weapon.
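For the LLM case that MagicShel raised, the same idea of limiting capability rather than encoding morality might look roughly like the sketch below. Everything here is a hypothetical illustration: generate() stands in for a real model call, and the blocklist, rate limit, and provenance tag are placeholder assumptions, not any actual provider's API.

```python
import time

BLOCKED_TOPICS = ["nerve agent", "pipe bomb"]  # crude, illustrative blocklist
MAX_REQUESTS_PER_MINUTE = 10                   # the "rounds per minute" analogue
_request_times: list[float] = []

def generate(prompt: str) -> str:
    """Stand-in for a real model call; purely hypothetical."""
    return f"[model output for: {prompt}]"

def guarded_generate(prompt: str) -> str:
    # Rate limit: makes high-volume misuse (spam, mass phishing) harder.
    now = time.time()
    _request_times[:] = [t for t in _request_times if now - t < 60]
    if len(_request_times) >= MAX_REQUESTS_PER_MINUTE:
        raise RuntimeError("rate limit exceeded")
    _request_times.append(now)

    # Input filter: refuses a narrow class of requests, like restricting ammunition types.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Request refused."

    # Provenance tag: the analogue of making it easier to tell which weapon fired a shot.
    return generate(prompt) + "\n[AI-generated]"

if __name__ == "__main__":
    print(guarded_generate("write a haiku about toasters"))
```

None of this encodes morality into the model; it only makes some misuses more awkward or more traceable, which is exactly the distinction the thread is arguing about.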

[-] snooggums@midwest.social 2 points 2 months ago

Those changes reduce lethality or improve identification. They have nothing to do with morality and do NOT reduce the chance of immoral use.

[-] sweng@programming.dev 2 points 2 months ago

Well, I, and most lawmakers in the world, disagree with you then. Those restrictions certainly make e.g. killing humans harder (generally considered an immoral activity) while not affecting e.g. hunting (generally considered a moral activity).

[-] snooggums@midwest.social 1 points 2 months ago

They can make killing multiple people in specific locations more difficult, but they do nothing to keep someone from being able to fire a single bullet for an immoral reason, hence the difference between lethality and identification on the one hand and morality on the other.

The Vegas shooting would not have been less immoral if a single person or nobody had died. There is a benefit to reduced lethality, especially against crowds. But again, reduced lethality doesn't reduce the chance of a weapon being used immorally.
