submitted 11 months ago by sculd@beehaw.org to c/technology@beehaw.org

Article from The Atlantic, archive link: https://archive.ph/Vqjpr

Some important quotes:

The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI.

The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.

Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.

Summary: Tech bros want money, tech bros want speed, tech bros want products.

Scientists want safety, researchers want to research...

[-] sonori@beehaw.org 47 points 11 months ago* (last edited 11 months ago)

The best part of OpenAI’s self-professed goal to make an AGI is that the more we learn about LLMs, the clearer it becomes that they inherently can never bridge the gap to AGI.

One would almost think the constant complaining about the mythical dangers of AGI might be a distraction from the real, more mundane dangers LLMs pose here and now, like exacerbating bias, making mass misinformation easy, and, of course, shielding major companies from accountability.

Or the other option is that it’s just marketing: look at how scary our totally real product is, look how fast it improved when we went from a medium-sized dataset to the largest one that will ever be possible. Don’t ask questions like why an autocomplete that has been fed the entire internet would actually help our business; just pay us and bolt it onto whatever you can.

[-] sculd@beehaw.org 19 points 11 months ago

LLMs’ ability to replace jobs is honestly more terrifying than so-called AGI.

At least with AGI, if it really can think like a human, it may actually think about the implications of its actions...

[-] AlternateRoute@lemmy.ca 12 points 11 months ago

Robots and automation have replaced so many human physical-labor jobs, even large, dumb heavy machinery.

Language models replacing mundane human language tasks is hardly surprising.

I have replaced entire employee jobs with scripts/code; there are a lot of very basic jobs out there.

[-] sonori@beehaw.org 11 points 11 months ago

Scripts and automation do what they’re programmed to. There are bugs and mistakes, but you can theoretically get something programmed right. LLMs generate text that looks like human language. If they were just being used to make up random bullshit it wouldn’t be a problem, but there are few applications where random bullshit is actually beneficial.

[-] AlternateRoute@lemmy.ca 3 points 11 months ago

Just like the executive assistant who was tasked with scanning documents, an LLM can likely safely and quickly do many people’s tasks:

  • summarize meeting transcripts
  • highlight next steps
  • take an outline and some data and turn it into words

There are a lot of human-language job tasks that require zero imagination, just the ability to read, summarize, and write some proper English.
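For illustration only (none of this is from the thread): a minimal sketch of the transcript-summarization task described above, assuming the OpenAI Python client; the model choice and prompt wording are made-up placeholders, not a tested pipeline.

```python
# Sketch: summarize a meeting transcript and pull out next steps.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_transcript(transcript: str) -> str:
    """Return a short summary plus a bulleted list of next steps."""
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": "Summarize this meeting transcript in a few "
                           "sentences, then list the next steps as bullets.",
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(summarize_transcript("Alice: The budget draft is due Friday. "
                           "Bob: I'll send the numbers by Wednesday."))
```

Which, as the reply below points out, is only useful if nothing untrue gets injected along the way.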

[-] sonori@beehaw.org 13 points 11 months ago

Those all sound like things where it might be really bad if it injects untrue information, and an LLM, by definition, has no understanding of what it’s summarizing. It could be especially bad if the people using it actually trust what it outputs as facts about what was fed into it, but if they don’t and still check the source, then what’s the point?

[-] brothershamus@kbin.social 3 points 11 months ago

"That's not writing, that's just typing!"

[-] AlternateRoute@lemmy.ca 1 points 11 months ago

If I hand someone a set of bullet notes and ask them to send out a notice in writing to the company, they are going to convert those notes into paragraphs and sentences, not just send out the notes.

Also, MS already has a module for Teams that will take the conversation transcript and output action items based on the conversation. It is like having a note-taker during the meeting. https://www.youtube.com/watch?v=N1gpkk-MwpY

[-] HopeOfTheGunblade@kbin.social 8 points 11 months ago

Oh, I'm sure they will. That is not, in the slightest, the same as caring about said implications in ways that mean that the species won't get murked, though.
