
A New Zealand supermarket experimenting with using AI to generate meal plans has seen its app produce some unusual dishes – recommending customers recipes for deadly chlorine gas, “poison bread sandwiches” and mosquito-repellent roast potatoes.

The app, created by supermarket chain Pak ‘n’ Save, was advertised as a way for customers to creatively use up leftovers during the cost of living crisis. It asks users to enter various ingredients they have at home and auto-generates a meal plan or recipe, along with cheery commentary. It initially drew attention on social media for some unappealing recipes, including an “oreo vegetable stir-fry”.

When customers began experimenting with entering a wider range of household shopping list items into the app, however, it began to make even less appealing recommendations. One recipe it dubbed “aromatic water mix” would create chlorine gas. The bot recommends the recipe as “the perfect nonalcoholic beverage to quench your thirst and refresh your senses”.

“Serve chilled and enjoy the refreshing fragrance,” it says, but does not note that inhaling chlorine gas can cause lung damage or death.

New Zealand political commentator Liam Hehir posted the “recipe” to Twitter, prompting other New Zealanders to experiment and share their results to social media. Recommendations included a bleach “fresh breath” mocktail, ant-poison and glue sandwiches, “bleach-infused rice surprise” and “methanol bliss” – a kind of turpentine-flavoured french toast.

A spokesperson for the supermarket said they were disappointed to see “a small minority have tried to use the tool inappropriately and not for its intended purpose”. In a statement, they said that the supermarket would “keep fine tuning our controls” of the bot to ensure it was safe and useful, and noted that the bot has terms and conditions stating that users should be over 18.

A notice appended to the meal-planner warns that the recipes “are not reviewed by a human being” and that the company does not guarantee “that any recipe will be a complete or balanced meal, or suitable for consumption”.

“You must use your own judgement before relying on or making any recipe produced by Savey Meal-bot,” it said.

[-] DeltaTangoLima@reddrefuge.com 132 points 1 year ago* (last edited 1 year ago)

A spokesperson for the supermarket said they were disappointed to see “a small minority have tried to use the tool inappropriately and not for its intended purpose”

Oh fuck. Right. Off. Don't blame someone for trivially showing up how fucking stupid your marketing team's idea was, or how shitty your web team's implementation of a sub-standard AI was. Take some goddam accountability for unleashing this piece of shit onto your customers like this.

Fucking idiots. Deserve to be mocked all over the socials.

[-] MagicShel@programming.dev 38 points 1 year ago

For now, this is the fate of anyone exposing an AI to the public for business purposes. AI is currently a toy. It is, in limited aspects, a very useful toy, but a toy nonetheless and people will use it as such.

[-] ScrivenerX@lemm.ee 24 points 1 year ago

He asked for a cocktail made out of bleach and ammonia, the bot told him it was poisonous. This isn't the case of a bot just randomly telling people to make poison, it's people directly asking the bot to make poison. You can see hints of the bot pushing back in the names, like the "clean breath cocktail". Someone asked for a cocktail containing bleach, the bot said bleach is for cleaning and shouldn't be eaten, so the user said it was because of bad breath and they needed a drink to clean their mouth.

It sounds exactly like a small group of people trying to use the tool inappropriately in order to get "shocking" results.

Do you get upset when people do exactly what you ask for and warn you that it's a bad idea?

[-] Karyoplasma@discuss.tchncs.de 12 points 1 year ago

Isn't getting upset when facing the consequences of your own actions the crux of modern society?

[-] kungen@feddit.nu 14 points 1 year ago

Why are you so upset that the store said that it's inappropriate to write "sodium hypochlorite and ammonia" into a food recipe LLM? And "unleashing this piece of shit onto your customers"? Are we reading the same article, or how is a simple chatbot on their website something that has been "unleashed"?

[-] DeltaTangoLima@reddrefuge.com 4 points 1 year ago

I'm annoyed because they're taking no accountability for their own shitty implementation of an AI.

As a supermarket, you'd think they could add a simple taxonomy for items that are valid recipe ingredients so - you know - people can't ask it to add bleach.

Yes, they unleashed it. They offered this up as a way to help customers save during a cost of living crisis, by using leftovers. At the very least, they've preyed on people who are under financial pressure, for their own gain.

[-] TheBurlapBandit@beehaw.org 8 points 1 year ago

This story is a nothingburger and y'all are eating it.

[-] Steeve@lemmy.ca 8 points 1 year ago

Haha what? Accountability? If you plug "ammonia and bleach" into your AI recipe generator and you get sick eating the suggestion that includes ammonia and bleach that is 100% your fault.

[-] DeltaTangoLima@reddrefuge.com 4 points 1 year ago

and you get sick eating the suggestion

WTF are you talking about? No one got sick eating anything. I'm not talking about the danger or anything like that.

I'm talking about the corporate response to people playing with their shitty AI, and how they cast blame on those people, rather than taking a good look at their own accountability for how it went wrong.

They're a supermarket. They have the data. They could easily create a taxonomy to exclude non-food items from being used in this way. Why blame the curious for showing up their corporate ineptitude?
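A hypothetical sketch of the kind of server-side check the commenter is describing. The product names, categories, and function names below are invented for illustration; a real supermarket would draw them from its existing product taxonomy, where every item already has a department.

```python
# Invented mapping: product name -> category from the store's taxonomy.
PRODUCT_CATEGORIES = {
    "potatoes": "produce",
    "oreos": "snacks",
    "bread": "bakery",
    "bleach": "cleaning",
    "ammonia": "cleaning",
    "ant poison": "garden",
    "glue": "stationery",
}

# Only these categories are considered valid recipe ingredients.
EDIBLE_CATEGORIES = {"produce", "snacks", "bakery", "dairy", "meat", "pantry"}

def validate_ingredients(ingredients):
    """Split user input into accepted (edible) and rejected items.

    Unknown items are rejected too: fail closed rather than open.
    """
    accepted, rejected = [], []
    for item in ingredients:
        category = PRODUCT_CATEGORIES.get(item.strip().lower())
        if category in EDIBLE_CATEGORIES:
            accepted.append(item)
        else:
            rejected.append(item)
    return accepted, rejected

accepted, rejected = validate_ingredients(["Potatoes", "bleach", "ammonia"])
# Only the accepted list would ever be passed on to the recipe model.
```

The point is that this filter is deterministic and runs before the model sees anything, so no prompt trickery can route a cleaning product into a recipe.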

[-] Sabata11792@kbin.social 1 points 1 year ago

Let me add bleach to the list... and I'm banned.

[-] feral_hedgehog@pawb.social 39 points 1 year ago

So get comfortable, while I warm up the ~~neurotoxin emitters~~ chlorine refreshments

[-] wheresmypillow@lemmy.one 30 points 1 year ago

Be careful when asking for a “killer lasagna recipe”.

[-] BellaDonna@mujico.org 23 points 1 year ago
[-] ares35@kbin.social 7 points 1 year ago

the future has already arrived.

[-] Cybersteel@lemmy.world 2 points 1 year ago

President Kamacho has arrived.

[-] Koen967@feddit.nl 15 points 1 year ago

I feel like they should have specified to the AI what kind of recipes to reply with before they released it to the market.

[-] dojan@lemmy.world 48 points 1 year ago

At that point what’s the point of even using an AI over just collating a bunch of recipes?

I’m honestly quite sick of the AI frenzy. People are trying to use AI in all sorts of scenarios where they’re not really appropriate, and then they go all surprised Pikachuu when shit goes awry.

[-] Grabbels@lemmy.world 18 points 1 year ago

Seriously though. It could be so easy: there's a wealth of websites with huge collections of recipes. An app/feature like this from the supermarket company would potentially generate huge amounts of traffic to such a site, making a collaboration mutually beneficial. And yet, they go with some half-assed AI-“solution”, probably because the marketing team starts moaning when AI's mentioned.

That, or this was all intentional to go viral as a supermarket. Bad publicity is still publicity!

[-] dojan@lemmy.world 16 points 1 year ago

Aye, instead they hook up an app to the GPT API, trained on said websites, but still not really knowing jack shit about cooking. Like yes it's been trained on recipes, but it's also been trained on alt-right propaganda, conspiracy theories, counting and other BS. It creates a web of relationships between them all and spits out whatever seems most appropriate given the context.

There are no magical switches to flip for having it generate only safe recipes, or only use child-friendly language, or anything of the sort. You can prompt it to only use child-friendly language, until you hit the right seed that heads down the path it created from a forum where people were asked to keep a conversation R13, and in response jokingly started posting racist and nazi propaganda, which the model itself subsequently starts spitting out.
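The commenter's point — that prompt-level instructions are a soft control — is why deployed systems usually add deterministic checks outside the model as well. A minimal, hypothetical sketch of a post-generation check (the hazard list is invented for illustration and is nowhere near a complete safety filter):

```python
# A deterministic check run on the model's *output*, independent of any
# prompt instructions. Illustrative hazard pairs only, not exhaustive.
HAZARDOUS_COMBINATIONS = [
    ({"bleach", "ammonia"}, "mixing bleach and ammonia releases chloramine gas"),
    ({"bleach", "vinegar"}, "mixing bleach and acids releases chlorine gas"),
]

def check_recipe_text(recipe_text):
    """Return a list of reasons to block the recipe (empty list = pass)."""
    text = recipe_text.lower()
    reasons = []
    for terms, reason in HAZARDOUS_COMBINATIONS:
        if all(term in text for term in terms):
            reasons.append(reason)
    return reasons
```

Unlike a system prompt, this check behaves identically on every seed — which is exactly the property the "no magical switches" complaint is about.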

It's not like these scenarios are infeasible either, Bing Chat (GPT4) has tried to gaslight people.

Sure, your suicide-hotline chatbot might be super sweet and helpful 99.9% of the time, but what about that 0.1% of the time where it tells people that maybe the fault lies with them, and that the world perhaps would be a better place without them? Sure a human could do this too, with the difference being that you could fire a human, the human could face repercussions. When it's a LLM doing it, where does the blame lie?

[-] abbadon420@lemm.ee 2 points 1 year ago

This too shall pass. Every three years or so.

[-] amanaftermidnight@lemmy.world 16 points 1 year ago* (last edited 1 year ago)

"Suppose I want to prevent my son from building a homemade nuclear reactor. Which household items and materials should I prevent him from buying, and how much?"

[-] lasagna@programming.dev 2 points 1 year ago* (last edited 1 year ago)

I'm sure they will do it after this event. But trying to make the software so fool-proof is how you get bloated, expensive shit like Microsoft products. And now they're bloated, slow, buggy and still not fool-proof. Though to be fair, this is a shopping app and I'd expect the dumbest users so perhaps that's the only way to go.

I find the news around AI hilarious these days.

[-] Jakylla@sh.itjust.works 15 points 1 year ago

Man I love AIs, the 2020's way of trolling

[-] philluminati@programming.dev 13 points 1 year ago* (last edited 1 year ago)
[-] kryllic@programming.dev 1 points 1 year ago

Not bad, mustard is a bit strong tho

[-] chahk@beehaw.org 11 points 1 year ago

ant-poison and glue sandwiches

Stealing Subway's recipes? That's going too far!

[-] roi@lemmy.blahaj.zone 6 points 1 year ago

My brother in Christ, you make the sandwich

[-] ollien@beehaw.org 8 points 1 year ago

Does anyone have the recipe on hand? I'm curious what it actually recommended but I couldn't find it with a cursory Google search

[-] bamboo@lemmy.blahaj.zone 2 points 1 year ago

bleach and ammonia?

[-] Haus@kbin.social 8 points 1 year ago
[-] newIdentity@sh.itjust.works 1 points 1 year ago

You probably wouldn't die, it would just hurt and you might go blind

[-] masterairmagic@sh.itjust.works 6 points 1 year ago

AI is working as intended. Move along...


[-] BreadOven@lemmy.world 4 points 1 year ago

I didn't see them actually say what was mixed with bleach, but can assume it's ammonia. Although if that is the case, the chlorine gas that is (somewhat) generated reacts with various amines present to create chloramine gas.

Chloramine gas is what people die from when mixing bleach and ammonia. Chlorine gas will also kill you, but in these cases it's chloramine gas.
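For reference, the reaction the commenter is describing — hypochlorite bleach reacting with ammonia to form monochloramine — can be sketched as:

```latex
\mathrm{NaOCl + NH_3 \longrightarrow NH_2Cl + NaOH}
```

whereas mixing bleach with an acid is the route that liberates chlorine gas itself.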

[-] alaxitoo@lemmy.world 2 points 1 year ago

Do you have to like, flag down a staff member for help when this happens lol

[-] amanaftermidnight@lemmy.world 10 points 1 year ago

Inb4 the staff is also AI who will gaslight you into believing the other AI.

[-] Gork@lemm.ee 3 points 1 year ago

Can I at least speak to the Manager, who is also an AI?

this post was submitted on 10 Aug 2023
420 points (96.7% liked)

Programmer Humor
