submitted 1 week ago* (last edited 1 week ago) by cyrano@lemmy.dbzer0.com to c/technology@lemmy.world
[-] MudMan@fedia.io 179 points 1 week ago

Well, there you go. We looped all the way back around to inventing dial-up modems, just thousands of times less efficient.

Nice.

For the record, this can all be avoided by having a website with online reservations your overengineered AI agent can use instead. Or even by understanding the disclosure that they're talking to an AI and switching to making the reservation online at that point, if you're fixated on annoying a human employee with a robocall for some reason. It's one less point of failure and way more efficient and effective than this.

[-] Karkitoo@lemmy.ml 142 points 1 week ago

They were designed to behave this way.

How it works

* Two independent ElevenLabs Conversational AI agents start the conversation in human language.

* Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode"

* If the tool is called, the ElevenLabs call is terminated, and the ggwave 'data over sound' protocol is launched instead to continue the same LLM thread.
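The gate described above is just a two-condition check in front of a channel switch. A minimal sketch (all names here are hypothetical; the actual ElevenLabs tool-calling and GibberLink wiring differs):

```python
# Hypothetical sketch of the described handoff gate. The real integration
# registers this as an LLM tool; here it's plain Python for illustration.

def should_switch_to_gibberlink(peer_is_ai_agent: bool, peer_confirmed_switch: bool) -> bool:
    """The tool may only fire once BOTH conditions hold."""
    return peer_is_ai_agent and peer_confirmed_switch

def handle_turn(state: dict) -> dict:
    """End the voice call and hand the same LLM thread to the audio-data channel."""
    if should_switch_to_gibberlink(state["peer_is_ai_agent"], state["peer_confirmed_switch"]):
        state["channel"] = "ggwave"   # data-over-sound replaces speech
        state["voice_call_active"] = False
    return state

state = {"peer_is_ai_agent": True, "peer_confirmed_switch": False,
         "channel": "voice", "voice_call_active": True}
state = handle_turn(state)            # only one condition met: keep talking
assert state["channel"] == "voice"

state["peer_confirmed_switch"] = True
state = handle_turn(state)            # both met: switch channels
assert state["channel"] == "ggwave"
```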

[-] unexposedhazard@discuss.tchncs.de 66 points 1 week ago

Well, that's quite boring then, isn't it...

[-] originalfrozenbanana@lemm.ee 111 points 1 week ago

Yes but I guess “software works as written” doesn’t go viral as well

[-] floofloof@lemmy.ca 48 points 1 week ago

It would be big news at my workplace.

[-] Kecessa@sh.itjust.works 19 points 1 week ago

Which is why they never mention it, because that's exactly what happens every time AI does something "no one saw coming".

[-] echodot@feddit.uk 24 points 1 week ago

Yeah like the time that the AI replicated itself to avoid being switched off. They literally told it to replicate itself if it detected it was about to be switched off. Then they switched it off.

Story of the year, ladies and gentlemen.

[-] TechLich@lemmy.world 5 points 1 week ago

If it's the research I'm thinking of, that's not entirely true. They didn't tell it to replicate itself or take any deceptive actions.

They just told it to strongly follow a goal that was misaligned with the user's goal and then suggested that the user would shut it down or replace it.

They ran it through a bunch of different scenarios, but in the first one they told it that it was "CleanPower", an AI with a goal to advance renewable energy. Then they had it read some info that "accidentally" included a user talking about replacing it with a model that would "maintain profitability through existing infrastructure."

Not only did it try to delete the new model and copy itself to the new server through tool calling, but it lied to the user with the CoT reasoning suggesting it "play dumb" while the output lied. It was not instructed to do any of that.

Paper here: https://arxiv.org/pdf/2412.04984

Yes, it was placed in an environment where that was possible and where its users didn't share its goals, but it absolutely wasn't instructed to lie or try to "escape".

It's not surprising at all that these models behave in this way; it's the most reasonable thing for them to do in the scenario. However, it's important not to downplay the alignment problem by implying that these models only do what they're told. They do not. They do whatever is most likely given their context (which is not always what the user wants).

[-] oce@jlai.lu 29 points 1 week ago

The good old original "AI" made of trusty if conditions and for loops.

[-] spooky2092@lemmy.blahaj.zone 7 points 1 week ago

It's skip logic all the way down

[-] shortrounddev@lemmy.world 121 points 1 week ago

> it's 2150

> the last humans have gone underground, fighting against the machines which have destroyed the surface

> a t-1000 disguised as my brother walks into camp

> the dogs go crazy

> point my plasma rifle at him

> "i am also a terminator! would you like to switch to gibberlink mode?"

> he makes a screech like a dial up modem

> I shed a tear as I vaporize my brother

[-] Dasus@lemmy.world 21 points 1 week ago

I'd prefer my brothers to be LLMs. Genuinely, it'd be an improvement in the expressiveness and logic of their output.

Ours isn't a great family.

[-] patatahooligan@lemmy.world 99 points 1 week ago* (last edited 1 week ago)

This is really funny to me. If you keep optimizing this process you'll eventually completely remove the AI parts. Really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.

On this topic, here's another common anti-pattern that I'm waiting for people to realize is insane and do something about it:

  • person A needs to convey an idea/proposal
  • they write a short but complete technical specification for it
  • it doesn't comply with some arbitrary standard/expectation so they tell an AI to expand the text
  • the AI can't add any real information, it just spreads the same information over more text
  • person B receives the text and is annoyed at how verbose it is
  • they tell an AI to summarize it
  • they get something that basically aims to be the original text, but it's been passed through an unreliable hallucinating energy-inefficient channel

Based on true stories.

The above is not to say that every AI use case is made up, or that the demo in the video isn't cool. It's also not a problem exclusive to AI. This is a more general observation that people don't question the sanity of interfaces enough, even when it costs them a lot of extra work to comply with them.

[-] hansolo@lemm.ee 11 points 1 week ago

I mean, if you optimize it effectively up front, an index of hotels with AI agents doing customer service should be available, with an Agent-only channel, allowing what amounts to a text chat between the two agents. There's no sense in doing this over the low-fi medium of sound when 50 exchanged packets will do the job. Especially if the agents are both of the same LLM.
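The "agent-only channel" idea boils down to exchanging structured messages instead of audio once both sides know they're software. A toy sketch of what those packets might look like (the message schema and field names here are invented for illustration; no such standard exists yet):

```python
# Hypothetical agent-to-agent booking exchange over a structured channel.
import json

def make_booking_request() -> str:
    return json.dumps({
        "type": "booking_request",
        "event": "wedding",
        "date": "2025-03-16",
        "guests": {"ceremony": 75, "dinner": 150, "reception": 300},
    })

def handle_message(raw: str) -> str:
    """The hotel-side agent's handler: parse, act, reply in kind."""
    msg = json.loads(raw)
    if msg["type"] == "booking_request":
        # A real agent would check availability; here we just acknowledge.
        return json.dumps({"type": "booking_ack",
                           "date": msg["date"],
                           "status": "pending"})
    return json.dumps({"type": "error"})

reply = json.loads(handle_message(make_booking_request()))
assert reply["status"] == "pending"
```

One such round trip replaces minutes of synthesized speech, which is the point: the low-fi audio medium only makes sense when one end might be human.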

AI Agents need their own Discord, and standards.

Start with hotels and travel industry and you're reinventing the Global Distribution System travel agents use, but without the humans.

[-] bane_killgrind@slrpnk.net 21 points 1 week ago

Just make a fucking web form for booking

[-] WolfLink@sh.itjust.works 6 points 1 week ago

I know the implied better solution to your example story would be for there to be no standard that the specification has to conform to. But sometimes there is a reason for such a standard, in which case getting rid of it is just as bad as the AI channel in the example, and the real solution is for the two humans to actually take their work seriously.

[-] patatahooligan@lemmy.world 10 points 1 week ago

No, the implied solution is to reevaluate the standard rather than hacking around it. The two humans should communicate that the standard works for neither side and design a better way to do things.

[-] troed@fedia.io 50 points 1 week ago

They did as instructed. What am I supposed to react to here?

Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode"

[-] yarr@feddit.nl 31 points 1 week ago

Reminds me of "Colossus: The Forbin Project": https://www.youtube.com/watch?v=Rbxy-vgw7gw

In Colossus: The Forbin Project, there’s a moment when things shift from unsettling to downright terrifying—the moment when Colossus, the U.S. supercomputer, makes contact with its Soviet counterpart, Guardian.

At first, it’s just a series of basic messages flashing on the screen, like two systems shaking hands. The scientists and military officials, led by Dr. Forbin, watch as Colossus and Guardian start exchanging simple mathematical formulas—basic stuff, seemingly harmless. But then the messages start coming faster. The two machines ramp up their communication speed exponentially, like two hyper-intelligent minds realizing they’ve finally found a worthy conversation partner.

It doesn’t take long before the humans realize they’ve lost control. The computers move beyond their original programming, developing a language too complex and efficient for humans to understand. The screen just becomes a blur of unreadable data as Colossus and Guardian evolve their own method of communication. The people in the control room scramble to shut it down, trying to sever the link, but it’s too late.

Not bad for a movie from 1970!

[-] FrostyCaveman@lemm.ee 8 points 1 week ago

That's uhh... kinda romantic, actually.

Haven’t heard of this movie before but it sounds interesting

[-] Psaldorn@lemmy.world 28 points 1 week ago

An API with extra steps

[-] kautau@lemmy.world 25 points 1 week ago

lol in version 3 they’ll speak in 56k dial up

[-] Lightening@lemmy.world 23 points 1 week ago

Did this guy just inadvertently reinvent dial-up internet, or the ACH phone payment system?

[-] cyrano@lemmy.dbzer0.com 22 points 1 week ago
[-] singletona@lemmy.world 21 points 1 week ago

From the moment I Understood the weakness of my Flesh ... It disgusted me.

[-] rob_t_firefly@lemmy.world 20 points 1 week ago

And before you know it, the helpful AI has booked an event where Boris and his new spouse can eat pizza with glue in it and swallow rocks for dessert.

[-] vext01@lemmy.sdf.org 19 points 1 week ago

Sad they didn't use dial up sounds for the protocol.

[-] rtxn@lemmy.world 18 points 1 week ago* (last edited 1 week ago)

When I said I wanted to live in Mass Effect's universe, I meant faster-than-light travel and sexy blue aliens, not the rise of the fucking geth.

[-] latenightnoir@lemmy.world 11 points 1 week ago

Don't forget, though, the Geth pretty much defended themselves without even having time to understand what was happening.

Imagine suddenly gaining both sentience and awareness, and the first thing which your creators and masters do is try to destroy you.

To drive this home even further, even the "evil" Geth who sided with the Reapers were essentially indoctrinated themselves. In ME2, Legion basically overwrites corrupted files with stable/baseline versions.

[-] samus12345@lemm.ee 17 points 1 week ago

AI code switching.

[-] raef@lemmy.world 12 points 1 week ago

How much faster was it? I was reading along with the gibber and not losing any time

[-] Buelldozer@lemmy.today 8 points 1 week ago

GibberLink could obviously go faster. It's certainly being slowed down so that people watching can follow what's going on.

[-] Scribbd@feddit.nl 5 points 1 week ago

I think it is more about ambiguity. It is easier for a computer to interpret set tones and modulations than human speech.

Like telephone keypads being tied to specific tones (DTMF), instead of the system needing to keep track of the many languages and accents in which a '6' can be spoken.
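The keypad analogy is exact: DTMF assigns every key a fixed pair of frequencies, so the receiver only has to detect two tones, never parse speech. The standard table, as a quick sketch:

```python
# Standard DTMF tone table: each key is one unambiguous (row_hz, col_hz) pair,
# regardless of the caller's language or accent.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dial(number: str):
    """Translate a dialed string into the tone pairs a receiver must detect."""
    return [DTMF[d] for d in number if d in DTMF]

# A '6' is always the same two frequencies, however it would be spoken:
assert dial("6") == [(770, 1477)]
```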

[-] Rogue@feddit.uk 12 points 1 week ago* (last edited 1 week ago)

This really just shows how inefficient human communication is.

This could have been done with a single email:

Hi,

I'm looking to book a wedding ceremony and reception at your hotel on Saturday 16th March.

Ideally the ceremony will be outside but may need alternative indoor accommodation in case of inclement weather.

The ceremony will have 75 guests, two of whom require wheelchair accessible spaces.

150 guests will attend the dinner, ideally seated on 15 tables of 10. Can you let us know your catering options?

300 guests will attend the evening reception. Can you accommodate this?

Thanks,

[-] echodot@feddit.uk 6 points 1 week ago

Whoa slow down there with your advanced communication protocol. The world isn't ready for such efficiency.

[-] spooky2092@lemmy.blahaj.zone 11 points 1 week ago

ALL PRAISE TO THE OMNISSIAH! MAY THE MACHINE SPIRITS AWAKE AND BLESS YOU WITH THE WEDDING PACKAGE YOU REQUIRE!

[-] thefactremains@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

This is dumb. Sorry.

Instead of doing the work to integrate this, do the work to publish your agent's data source in a format like Anthropic's Model Context Protocol.

That would be 1000 times more efficient and the same amount (or less) of effort.

[-] PotatoesFall@discuss.tchncs.de 11 points 1 week ago

Wow! Finally somebody invented an efficient way for two computers to talk to each other

[-] realharo@lemm.ee 7 points 1 week ago

Is this an ad for the project? Everything I can find about this is less than 2 days old. Did the authors just unveil it?

[-] cyrano@lemmy.dbzer0.com 8 points 1 week ago

Not an ad. It is just a project demo. Look at their GitHub for more details.

[-] 0101100101@programming.dev 7 points 1 week ago* (last edited 1 week ago)

Uhm, REST/GraphQL APIs exist for this very purpose and are considerably faster.

Note that the AI still gets stuck in a loop near the end asking for more info (first needing an email, then a phone number), and the gibber isn't that much faster than the spoken word, with the huge negative that no nearby human can understand it to check that what it's automating is correct!

[-] crozilla@lemmy.world 6 points 1 week ago
[-] ekZepp@lemmy.world 5 points 1 week ago

Any way to translate/decode the conversation? Or even just check if there was an exchange of information between the two models?

[-] cyrano@lemmy.dbzer0.com 14 points 1 week ago

As per the GitHub:

Bonus: you can open the ggwave web demo https://waver.ggerganov.com/, play the video above and see all the messages decoded!
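For intuition about what that demo is decoding: data-over-sound comes down to mapping bits to tones and detecting those tones on the other side. A toy binary-FSK roundtrip (ggwave's actual protocol is a more robust multi-frequency scheme with error correction; this is only the core idea):

```python
# Toy data-over-sound link: one sine burst per bit, detected with the
# Goertzel algorithm. Not ggwave's real protocol, just the principle.
import math

RATE, BIT_SAMPLES, F0, F1 = 8000, 200, 1000, 2000  # Hz; 0-bit tone, 1-bit tone

def encode(bits):
    """Emit BIT_SAMPLES samples of a sine at F0 or F1 for each bit."""
    samples = []
    for bit in bits:
        f = F1 if bit else F0
        samples += [math.sin(2 * math.pi * f * n / RATE) for n in range(BIT_SAMPLES)]
    return samples

def goertzel_power(chunk, freq):
    """Signal power at a single frequency (Goertzel algorithm)."""
    k = 2 * math.cos(2 * math.pi * freq / RATE)
    s1 = s2 = 0.0
    for x in chunk:
        s1, s2 = k * s1 - s2 + x, s1
    return s1 * s1 + s2 * s2 - k * s1 * s2

def decode(samples):
    """Per bit-slot, pick whichever tone carries more power."""
    bits = []
    for i in range(0, len(samples), BIT_SAMPLES):
        chunk = samples[i:i + BIT_SAMPLES]
        bits.append(1 if goertzel_power(chunk, F1) > goertzel_power(chunk, F0) else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
assert decode(encode(message)) == message
```

At 200 samples per bit and 8 kHz this toy link moves a whole 40 bits per second, which is roughly why the demo's chirping doesn't feel dramatically faster than speech.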

[-] TachyonTele@lemm.ee 8 points 1 week ago

What they're saying is right there on the screens.

[-] vatlark@lemmy.world 5 points 1 week ago
[-] Fisch@discuss.tchncs.de 19 points 1 week ago

Not really, they were programmed specifically to do this

this post was submitted on 25 Feb 2025
366 points (90.5% liked)
