[-] lily33@lemm.ee 106 points 1 year ago* (last edited 1 year ago)

competition too intense

dangerous technology should not be open source

So, the actionable suggestions from this article are: reduce competition and ban open source.

I guess what it is really about, is using fear to make sure AI remains in the hands of a few...

[-] thehatfox@lemmy.world 39 points 1 year ago

Yes, this is the setup for regulatory capture before regulation has even been conceived. The likes of OpenAI would like nothing more than to be legally declared the only stewards of this "dangerous" technology. The constant doom-laden hype that people keep falling for is all part of the plan.

[-] lily33@lemm.ee 5 points 1 year ago* (last edited 1 year ago)

I think calling it "dangerous" in quotes is a bit disingenuous - because there is real potential for danger in the future - but what this article seems to want is totally not the way to manage that.

[-] foggy@lemmy.world 18 points 1 year ago

It would be an obvious attempt at pulling up the ladder if we were to see regulation on AI before we saw regulation on data collection by social media companies. We have already seen that weaponized. Why would we regulate something before it gets weaponized, when we have other recent tech, unregulated, already being weaponized?

[-] Touching_Grass@lemmy.world 6 points 1 year ago* (last edited 1 year ago)

I saw a post the other day about how people crowdsourced scraping grocery store prices. Using that data they could present a good case for price fixing and collusion. Web scraping is already pretty taboo, and this AI fear mongering will be the thing that is used to make it illegal.
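For what it's worth, the analysis side of that doesn't even need fancy tooling. A minimal sketch (all store names and prices here are made up, and "every store lists the identical price" is only a crude signal worth investigating, not proof of collusion):

```python
# Hypothetical sketch: given crowd-sourced price data per store,
# flag items that every store lists at exactly the same price.

def find_identical_prices(prices_by_store):
    """prices_by_store: {store_name: {item_name: price}}"""
    # Collect the set of distinct prices observed for each item.
    item_prices = {}
    for items in prices_by_store.values():
        for item, price in items.items():
            item_prices.setdefault(item, set()).add(price)

    n_stores = len(prices_by_store)
    # Suspicious: the item appears in every store, and only one
    # distinct price was ever observed for it.
    return sorted(
        item
        for item, seen in item_prices.items()
        if len(seen) == 1
        and sum(item in s for s in prices_by_store.values()) == n_stores
    )

data = {
    "store_a": {"milk": 4.99, "eggs": 3.49, "bread": 2.79},
    "store_b": {"milk": 4.99, "eggs": 3.29, "bread": 2.79},
    "store_c": {"milk": 4.99, "eggs": 3.59, "bread": 2.99},
}
print(find_identical_prices(data))  # ['milk']
```

Point being: the scraping is the legally contested part, not the math.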

[-] foggy@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

It won't be illegal because there is repeated court precedent for it to be categorically legal.

https://techcrunch.com/2022/04/18/web-scraping-legal-court/

[-] Heresy_generator@kbin.social 9 points 1 year ago* (last edited 1 year ago)

It's also about distraction. The main point of the letter and the campaign behind it is sleight-of-hand: to get the media obsessing over hypothetical concerns about hypothetical future AIs rather than talking about the actual concerns around current LLMs. They don't want the media talking about the danger of deepfaked videos, floods of generated disinformation, floods of generated scams, deepfaked audio scams, and on and on, so they dangle Skynet in front of them and watch the majority of the media gladly obsess over our Terminator-themed future, because that's more exciting and generates more clicks than talking about things like the flood of fake news that is going to dominate every democratic election in the world from now on. These LLM creators would much rather see regulation of future products they don't have any idea how to build (and, even better, maybe that regulation can entrench their own position) than regulation of what they're currently, actually doing.

[-] Touching_Grass@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

I'm going to need a legal framework to be able to DMCA any comments I see online in case they were created with an AI trained on Sarah Silverman's books

[-] Hanabie@sh.itjust.works 1 points 1 year ago

That's exactly what it is.

[-] Steeve@lemmy.ca 62 points 1 year ago

“Dangerous technology should not be open source, regardless of whether it is bio-weapons or software,” Tegmark said.

What a stupid alarmist take. The safest way for technology to operate is when people can see how it works, allowing experts who don't have a financial stake in its success to scrutinize it openly. And it's not like this is some magical technology that only massive corporations have access to in the first place; it's built on top of open research.

Home Depot sells all the ingredients you need to make a substantial bomb, should we ban fertilizer and pressure cookers for non-industrial use?

[-] tryptaminev@feddit.de 11 points 1 year ago

Many countries only sell fertilizers with ammonium nitrate concentrations below explosive levels.

[-] Steeve@lemmy.ca 15 points 1 year ago

How about bleach and ammonia? I can buy those ingredients at any convenience store near me and throw together some mustard gas right? Point is if we banned everything that has any potential to do harm we wouldn't even be left with rocks and sticks. Regulate, sure, but taking technology out of the hands of regular people and handing it to a select few corporations is a recipe for inequality and disaster.

[-] tryptaminev@feddit.de 3 points 1 year ago

You wouldn't make mustard gas. You'd make chloramine gas, which is also very nasty but still a far cry from mustard gas. The extent to which risky chemicals have been banned, reduced in concentration, or made subject to extensive monitoring of sales and use is quite substantial.

But here is a huge difference with AI tools: anyone can create these tools themselves. It is information. Unlike information on how to build a nuke, this information is easier to use for negative purposes, but the extent of the harm is much smaller. A deepfake itself cannot kill people; a self-made pipe bomb can. Meanwhile, the cat is already out of the bag for ML. The tools are there, many people have copies of the code, and it can be replicated countless times, whereas the clandestine bomb-builder needs to procure another batch of chemicals and hardware.

[-] theluddite@lemmy.ml 18 points 1 year ago

I had Max Tegmark as a professor when I was an undergrad. I loved him. He is a great physicist and educator, so it pains me greatly to say that he has gone off the deep end with his effective altruism stuff. His work through the Future of Life Institute should not be taken seriously. For anyone interested, I responded to Tegmark's concerns about AI and Effective Altruism in general on The Luddite when they first got a lot of media attention earlier this year.

I argue that EA is an unserious and self-serving philosophy, and the concern about AI is best understood as a bad faith and self-aggrandizing justification for capitalist control of technology. You can see that here. Other commenters are noting his opposition to open sourcing "dangerous technologies." This is the inevitable conclusion of a philosophy that, as discussed in the linked post, reifies existing power structures to decide how to do the most good within them. EA necessarily excludes radical change by focusing on measurable outcomes. It's a fundamentally conservative and patronizing philosophy, so it's no surprise when its conclusions end up agreeing with the people in charge.

[-] profdc9@lemmy.world 2 points 1 year ago

I think Max Tegmark is like other public intellectuals, Michio Kaku for example, who have to say something controversial periodically to stay in the news and maintain their reputation.

[-] theluddite@lemmy.ml 2 points 1 year ago

Maybe. It had been almost 15 years since I last heard of him until the EA stuff started going mainstream, but back then he was a very well respected physicist, especially for how young he was. Having taken several very small classes with him, it would surprise me if he were a clout chaser. People are complicated, though, so who knows.

[-] mojo@lemm.ee 11 points 1 year ago* (last edited 1 year ago)

Anyone against FOSS adoption of LLMs is straight up a capitalist fascist

They love the AI ethics issue, it's so vague and morally superior that they can use it to shut down anything they like.

The letter warned of an “out-of-control race” to develop minds that no one could “understand, predict, or reliably control”

And this is why people who don't understand that LLMs are essentially big hallucinating math machines should have no voice in things they fundamentally do not understand

[-] autotldr@lemmings.world 5 points 1 year ago

This is the best summary I could come up with:


The scientist behind a landmark letter calling for a pause in developing powerful artificial intelligence systems has said tech executives did not halt their work because they are locked in a “race to the bottom”.

Max Tegmark, a co-founder of the Future of Life Institute, organised an open letter in March calling for a six-month pause in developing giant AI systems.

Despite support from more than 30,000 signatories, including Elon Musk and the Apple co-founder Steve Wozniak, the document failed to secure a hiatus in developing the most ambitious systems.

“I felt there was a lot of pent-up anxiety around going full steam ahead with AI, that people around the world were afraid of expressing for fear of coming across as scare-mongering luddites.

“So you’re getting people like [letter signatory] Yuval Noah Harari saying it, you’ve started to get politicians asking tough questions,” said Tegmark, whose thinktank researches existential threats and potential benefits from cutting-edge technology.

Mark Zuckerberg’s Meta recently released an open-source large language model, called Llama 2, and was warned by one UK expert that such a move was akin to “giving people a template to build a nuclear bomb”.


The original article contains 695 words, the summary contains 192 words. Saved 72%. I'm a bot and I'm open source!

this post was submitted on 21 Sep 2023
214 points (92.5% liked)