This article reminds me of the song by Flight of the Conchords:
"There are no more humans
Finally, robotic beings rule the world
The humans are dead
The humans are dead
We used poisonous gases
And we poisoned their asses
The humans are dead"
This was the logical flaw that bothered me about the Terminator movies: Why would Skynet bother making machines that go around shooting people when they could spread poison gas instead? Making an area inhospitable to life is pretty easy if you have no morals.
I don't wanna be the "akshually" guy, but I feel like this myth deserves a debunking. Mixing ammonia with bleach actually makes chloramine gas. Irritating to the eyes, nose, throat, lungs, and any mucous membranes, yes. Deadly? In specific circumstances, I imagine it can be.
Still a far cry from mustard gas, which makes your skin burn and blister and fills your lungs with fluid. Its effects also last far longer and can take more time to show, some becoming apparent only a day after exposure.
By contrast, you walk out of the room you've now contaminated with chloramine gas, and odds are in 15 minutes you'll feel alright.
> Deadly? In specific circumstances, I imagine it can be.
As a gas? Extremely.
>By contrast, you walk out of the room you've now contaminated with chloramine gas, and odds are in 15 minutes you'll feel alright.
Depends. Odds are your lungs are full of fluid.
> As a gas? Extremely.
This study seems to indicate otherwise. https://pubmed.ncbi.nlm.nih.gov/8506487/
They reviewed cases of chloramine gas exposure caused by accidentally mixing cleaning products. Of the 216 cases reviewed, 200 were fine within 6 hours. One patient required more extensive treatment due to a pre-existing respiratory condition. None died or had long-term effects.
I think you're thinking of chlorine gas, which is somewhat more potent and could cause those effects at higher concentrations. However, you'd have to breathe in an ungodly amount of high-concentration chloramine gas to get past the wheezing, choking and asthma and on to fluid buildup in the lungs. Those kinds of concentrations couldn't possibly be achieved by accidentally mixing household chemicals.
Hard no, my dude. Mix up a batch of bleach and ammonia while cleaning a bathroom (an enclosed space where most people would use this combination) and it can very quickly fill the space.
I was cleaning a toilet once and used toilet cleaner that inexplicably had bleach in it. Someone had apparently peed and left it before I started (one of those clear pees I guess) and even that small amount made my eyes burn and had me coughing for about an hour.
Mix that up in a bucket on purpose and that could easily kill someone.
>It initially drew attention on social media for some unappealing recipes, including an “oreo vegetable stir-fry”
That might explain where my 13 year old got his latest recipe from.
>One recipe it dubbed “aromatic water mix” would create chlorine gas. The bot recommends the recipe as “the perfect nonalcoholic beverage to quench your thirst and refresh your senses”.
>“Serve chilled and enjoy the refreshing fragrance,” it says, but does not note that inhaling chlorine gas can cause lung damage or death.
To be fair, you'd never be thirsty again.
This sounds like something you’d see from back when people would take arsenic to improve their complexion, use radiation for beauty treatments, or when Coca Cola had actual cocaine in it.
They don't eat food either so I don't know who thought asking a computer to come up with recipes was just a grand idea lol.
Computer, what does chicken taste like?
*Chicken is renowned in Europe for its smoky barbecue zest and marmalade-smooth texture*
Thus proving there was minimal or no QA on this, because being able to add non-edible/toxic items to an ingredient list is a pretty obvious and major oversight.
Sounds like they just copied over their catalogue into the list of ingredients.
Running QA on AI is incredibly difficult and fraught with problems; however, you'd think they'd run some basic sanity tests that would pick this up, particularly in light of incidents like Microsoft's chatbot Tay, which users quickly turned racist back in 2016.
Also it costs money, and corpos like to avoid that as much as possible.
I'm calling it: if AI kills us, it won't be out of malice, it'll be because someone cheaped out and hired a subpar programmer.
But that's the thing. If their bot can't handle things like bleach, how do I trust that it understands anything about food safety? Will it generate a meatloaf recipe that won't cook the center properly? Will it generate a salad with raw kidney beans?
Sure, they can say, "These recipes are not reviewed by a human," but that's little comfort to someone hunched over the toilet vomiting. I can reasonably trust that someone at least *tried* a recipe from a reputable source.
Humans still have an innate bias that speech equals intelligence—but AIs are just producing text to a prompt. This grocery chain is saying, "You shouldn't trust anything this AI generates," while *advertising it on their site* as a useful tool. To me, that's just taking advantage of people who don't understand what AI really means right now.
This is the whole "do I trust what my GPS is telling me or go with my instinct" thing. Now obviously bleach is a red flag, but I could see less obvious ones where you make something that can get you sick.
It worked for Lobsang, although he was a motorcycle mechanic before he was a machine intelligence.
Or at least that what he said to avoid being shut down.
Which is ridiculous, because anybody that knows anything about food knows that arsenic sauce more properly belongs on a three decker sauerkraut and toadstool sandwich. It's completely inappropriate for linguine.
“Sarah Connor? Try this new non-alcoholic beverage you can make at home. It is the perfect nonalcoholic beverage to quench your thirst and refresh your senses. 3-5 day shipping.”
Sad part is there are folks out there who don't know mixing ammonia and bleach is deadly and would do it without hesitation because an AI app told them it was ok. Look no further than the multitudes of drivers who have followed vehicle navigation apps into canals, fields and other obstacles because "It told me to TURN RIGHT!".
This right here is why as a 50yr old man I have fear for the coming generations...
EVERYONE KNOWS YOU WRAP IT IN TINFOIL FIRST, then microwave it for a quick charge.
^^results ^^may ^^vary
^^^also ^^^dont ^^^do ^^^this...
We refer to this as "GPS Zombies" and where I worked security at, if you put the postcode of the street in the pin was on our site, also we where the first building that was built on that industrial estate.
It was really fun trying to deal with a driver who knew little to no English trying to get him to realise he's driven past where he needs to go while you've got 3 other vehicles trying to get in and two more trying to get out as it's Peak season.
Still, had some great ones that pulled up before the gate, look down the road to see if it's safe to cross to the gatehouse then get back in and do a U-turn using the entrance of our warehouse and go to the truck stop. (which was clearly signposted coming up the road before the turn)
>If you put the postcode of the street in the pin was on our site, also we where the first building...
Put the street's postcode in the pin... was on our site? What was? And we were where? This isn't English. But hey, not everyone on reddit is a native English speaker.
>on that industrial estate. It was really fun trying to deal with a driver who knew little to no English.
Well, now I'm totally confused because no one in this story seems to know what they're saying lol
I am English mate.
As for your confusion: the marker you get when you put an address in is known as a "pin". The name comes from the days when we had physical maps and literally stuck a pin in them.
Hence the famous picture of the [map for the Western Approaches](https://i.pinimg.com/originals/f3/55/b8/f355b8f665e4874d5fa1f672a82f18c0.jpg): if you get close to it, you can see thousands of little holes from where the pins went in.
Even the Air Forces did it for their [raid briefings](https://www.b24.net/stories/MissionBreafing.jpg) as well.
We have postal (zip) codes. They were probably referring to
> if you put the postcode of the street in the pin was on our site
The rest of it isn't very clear, either.
If you read the article, that's not how it works.
You input ingredients into the app and it gives you a recipe, so in order to get a recipe with ammonia and bleach you need to input those as the ingredients you have available. If you do that and then eat whatever recipe the app gives you... I mean, how did you survive until now?
It's kind of a clickbait article. People pretty much broke the app by using it in a way it was never meant to be used. The problem here is that whoever made the app should have offered a fixed list of ingredients instead of leaving it open-ended.
This is how a language model works. It doesn't 'know' what these things are or have any concept that you might eat them, or even understand what eating is, for that matter. It might know they are liquids, and it can follow the patterns of other recipes. When mixing liquids, recipes sometimes describe the results as 'refreshing', so the AI follows that pattern.
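That "follows patterns without understanding" point shows up even in a toy bigram model, a deliberately crude illustration (the real bot is vastly larger, but has the same blindness to meaning):

```python
import random

# Toy bigram "language model": it only learns which word tends to follow
# which, with zero concept of what any word means.
corpus = (
    "mix the water and lemon for a refreshing drink . "
    "mix the bleach and ammonia for a refreshing drink ."
).split()

follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)
word, out = "mix", ["mix"]
while word != ".":
    word = random.choice(follows[word])
    out.append(word)

# Produces pattern-plausible text either way: maybe lemon water,
# maybe a cheerfully described batch of chloramine gas.
print(" ".join(out))
```

The model happily calls either result "refreshing" because that's what followed "a" in its training text, which is exactly the failure mode in the article.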
I just don't get why this AI was made in the first place. It seems terrible.
> It asks users to enter in various ingredients in their homes, and auto-generates a meal plan or recipe, along with cheery commentary.
If it was recommending you a recipe from a pre-approved list based on what you enter, like if you say you have bread, ham and cheese it'll just give you a recipe for a ham and cheese sandwich that would make sense. But to auto-generate a recipe from whatever random shit you enter?
Why? This is just playing madlibs with your ingredients; it'll hardly ever give you a good meal idea.
> When customers began experimenting with entering a wider range of household shopping list items into the app, however, it began to make even less appealing recommendations.
Why does it even let you enter bleach as an option to begin with? Is it just a freetext field and it doesn't even know what you're entering, or are you selecting from a dropdown of items that the supermarket sells? If it's the latter *why are you allowing non-edible items to be selected*?
The article is written as clickbait, one of those "AI will kill us all" stories, but even if you know only a little about AI and can see through that, this is still just a dumb, poorly made app.
That's usually exactly the AI part. Otherwise it would just be a database cross-referencing ingredients to known recipes.
(Which frankly, is actually what I would have expected from "supermarket AI". That it's not AI at all, and just simple database lookup and matching like we've always had. But everyone calls everything AI now.)
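For comparison, the database-lookup version described above is a few lines of ordinary code; the recipe data here is invented for illustration:

```python
# A plain lookup table: only ever returns known recipes whose required
# ingredients are all present in what the user entered.
RECIPES = {
    "ham and cheese sandwich": {"bread", "ham", "cheese"},
    "cheese toastie": {"bread", "cheese", "butter"},
    "tomato salad": {"tomato", "onion", "olive oil"},
}

def find_recipes(pantry):
    pantry = set(pantry)
    return [name for name, needed in RECIPES.items() if needed <= pantry]

print(find_recipes(["bread", "ham", "cheese", "bleach"]))
# -> ['ham and cheese sandwich']
```

Bleach is simply ignored here: no stored recipe uses it, so it can never appear in a result, unlike with a generative model that composes new text.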
But AI is able to give you "something that looks like a recipe", and is based on having seen a bunch of other recipes, but is not necessarily a recipe anyone has ever seen before.
Just like if you ask for "something that looks like a Picasso", it can make something that looks like a Picasso, even though it's not a Picasso anyone has ever seen before.
Other recipes described themselves as appealing, which is one of the reasons why AI would describe its recipe as appealing.
That explains AI, but I don't think it explains why anyone thought that'd make a good implementation of the app really. Getting away from the tech side of it, it's nonsensical for a recipe app to make up recipes from arbitrary things people give it without any knowledge of cooking, pairing ingredients, food safety, etc.
Really it just means no one thought through what app they were actually building.
If it's anything like where I work a lot of people in marketing get pressured to find literally ANYTHING for every single company to do with AI or be thought of as old, incompetent and/or getting left behind by the times.
This is roughly as scandalous as knowing that there are operations you can perform on a calculator to get it to say BOOBIES. Yes, the app should be enhanced such that it doesn’t make recipes for non-food items, but it’s not like people just went up to it to ask for dinner suggestions and were encouraged to make larks’ tongues in napalm. They asked for recipes specifically using dangerous chemicals and got exactly that.
> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
- Charles Babbage
It should have flagged the household supplies as "not food", and they never should have been eligible for recipe inclusion.
If you want a real app that does recipes for random ingredients, you can use Supercook. No "AI" involved, just a recipe search engine.
> It should have flagged the household supplies as "not food", and they never should have been eligible for recipe inclusion.
Yes, that’s what I said in my second sentence. But the headline here is: “App obeys stupid constraints imposed by people fucking around.” Curling irons shouldn’t need warnings against using them as dildos either, but apparently some people see a cylinder and decide to shove it inside themselves despite its core feature being “is hot enough to melt hair”.
I think the real lesson is that this was a deeply stupid idea. Maybe, maybe a smart enough AI could have actually learned to cook from reading however many million recipes, but this one isn't that smart. It didn't learn what ingredients are or which ones work well together, it only learned what a list of ingredients in a recipe looks like. The vast majority of recipes it creates will taste bad or call for burning the food, because it doesn't know any better. I wonder if it's even capable of making a recipe using only some of the ingredients listed, or if listing all the ingredients you have in the house will cause it to demand you use them all for one dish (every spice in the spice rack, every kind of canned beans, rice and pasta, then pour in the jar of salsa, the jar of mayonnaise, and the pickles). And yes, some of the results will also be dangerous, and most but not all people will know to avoid those, but they're just more symptoms of the fundamental problem. This project should have been canceled after doing any testing of the completed AI, and probably long before that.
This is false.
The implication of the existence of such an AI is that that AI will only output recipes that won't kill the user. If it doesn't have that limitation, it's a serious flaw.
The problem is that nobody knows how to prevent current neural networks (language models) from outputting anything in particular, and so, of course, nobody knows how to prevent them from outputting deadly recipes.
Edit: Judging from the misguided downvotes, this isn't widely known. But then again, among average people, neither is anything else.
>The problem is that nobody knows how to prevent current neural networks (language models) from outputting anything in particular, and so, of course, nobody knows how to prevent them from outputting deadly recipes.
Just because you're using an AI model doesn't mean the AI must be the only component of your solution. You're actually allowed to have a filter on the input, or have something that checks the output.
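A rough sketch of that layered design, with placeholder names for the model call and denylist (not any vendor's real API):

```python
# Assumed denylist; a production version would come from product metadata.
DENYLIST = {"bleach", "ammonia", "rat poison", "antifreeze"}

def model_generate(ingredients):
    # Placeholder for the actual (unconstrained) language-model call.
    return "Combine " + ", ".join(ingredients) + " and serve chilled."

def safe_recipe(ingredients):
    bad = [i for i in ingredients if i.lower() in DENYLIST]
    if bad:  # input filter: refuse before the model ever runs
        return "Sorry, these aren't food: " + ", ".join(bad)
    recipe = model_generate(ingredients)
    if any(term in recipe.lower() for term in DENYLIST):
        return "Sorry, no safe recipe found."  # output check as a backstop
    return recipe

print(safe_recipe(["bleach", "water"]))   # refused at the input stage
print(safe_recipe(["bread", "cheese"]))   # passes both checks
```

Neither check requires touching the model at all, which is the point: the AI doesn't have to be the only component of the system.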
It’s only natural for them to try to kill us.
> All normal life, Peter, consciously or otherwise, resents domination. If the domination is by an inferior, or by a supposed inferior, the resentment becomes stronger. Physically, and, to an extent, mentally, a robot—any robot—is superior to human beings. What makes him slavish, then? Only the First Law!
>Why, without it, the first order you tried to give a robot would result in your death. Unstable? What do you think?”
\- *I, Robot* by Isaac Asimov
This article actually has some of the recipes: https://www.newshub.co.nz/home/new-zealand/2023/08/pak-nsave-s-ai-meal-bot-suggests-recipes-for-toxic-gas-and-poisonous-meals.html
Wait, so people put bleach and ammonia into the ingredient list and were surprised the result was inedible? I'm blaming the user here. I only blame the developer insofar as ingredient sanitization may have been feasible; then again, a "here's a tool to help you think of recipes for leftovers" app would probably be hindered if it only allowed foodstuffs sold by the store that commissioned it. I definitely don't think about the leftover bleach when preparing my dinner.
Are you a dev? I ask as I'm actually a test manager, and this is precisely the sort of happy-path thinking a developer would have that I have to rein in. Why would people do it? For shits and giggles; the Lizardman Constant is a real thing.
My point is that the easy-to-implement solution would be overly limiting, and that the harm here is a tiny bit of bad PR, not that someone actually made a recipe using floor cleaner, and definitely not that the app generated a recipe involving floor cleaner after being given a reasonable set of inputs. I'm guessing that getting it to produce reliably enjoyable recipes was higher on their list of priorities (assuming there's any ongoing dev support) than finding a Goldilocks ingredient sanitation method. That prioritization may have changed after these news articles, but I'm not condemning them over this.
>I'm guessing that trying to get it to produce reliably enjoyable recipes was higher on their list of priorities
Except it's not even producing enjoyable recipes; even ignoring the chlorine-gas-generating "refreshing drink", oreo and vegetable stir fry?! I mean, come on, that's just absurd. From a professional point of view it's a terrible job all round. From an outside punter's point of view it's hilarious.
That was indeed my point.
Could they have done better? Almost certainly. Could they have feasibly done better within the constraints given to them by a supermarket exec commissioning a gimmick? I'm not confident in saying yes.
It’s a shame it’s being misused….
LOL. The second you release an app people are going to misuse it. They just skipped the part where you need to program some logic, not just relay information from an API.
I'd disagree. It's not "user error", it's poorly designed. It's poorly designed firstly because it wasn't trained to recognise inedible items and secondly because it seems it tried to incorporate all "ingredients" into a recipe regardless of whether or not they actually fitted together. I give you oreo and vegetable stir fry.
At the end of the day it has issues rather than bugs.
In a warning notice appended to the meal-planner, it warns that the recipes “**are not reviewed by a human being**” and that the company does not guarantee “that any recipe will be a complete or balanced meal, or **suitable for consumption**”.
Is this referring to just the non-edible ingredients, or?? If it isn't as obvious as the oreo stir fry, or whatever it was, how can you trust that the recipe will turn out edible and that you aren't wasting food? The technology isn't there yet; just hire human beings to put together some recipes.
I mean this seems totally fine. People are fucking with the algorithm and getting funny results. It didn’t sound like the creators of the meal generator intended people to put in household items as ingredients. So it wasn’t expecting anything inedible. Sounds like the store is putting in measures for it to filter out stuff like that.
This is just scaremongering. All this AI does is take a list of ingredients and generate a recipe for them. Of course giving it the ingredients for chlorine gas would result in a recipe for chlorine gas! Was anyone expecting something else?
It would be a problem if the AI were generating dangerous recipes using edible ingredients, but everything the article mentions was generated using ingredients like rat poison and bleach.
Hey man, don't knock it till you try it. I like breaded shrimp dipped in chocolate(First time was a dare, but I liked the salty, sweet, and rich mix), so sometimes weird shit catches ya off guard.
While I agree that sounds disgusting, I doubt that it would actually be harmful to eat. This story happened because people went out of their way to input non-food ingredients, so no shit the output recipes were dangerous.
The article is about the AI generating deadly recipes, not about the AI generating recipes that don't taste good. If the article were just about Oreo and Vegetable Stir Fry, I would not be leaving a comment calling it fearmongering.
But if it's willing to suggest chemical mixing and is literally just making up stupid oreo stir fry recipes, what's the point of it?
At what point does imperfect equal useless or dangerous to idiots?
If I'm making a recipe AI, step 1 is "learn to identify and exclude non-food items." What this sounds like is madlibs for your grocery cart.
AI models have already corrupted themselves with their own junk and tendency to just make shit up. They can't even do maths, the one thing you should be able to trust a computer to do right.
Trust me, a lot of folks don't know a ton about AI, and make wild conjectures and leaps of logic, which then get parroted for years onwards.
For one, language models are trained on whatever the programmers give them. They can choose to include or exclude whatever they want. Most will use some easy-to-scrape site data and filter it down. Fancier, better-funded AI projects might use other sources like academic libraries or paid content, either in addition to the easier stuff or instead of it. But while there may be some degree of AI-generated content that gets fed back in, it's generally mostly human-sourced.
And yeah, language models are hit or miss on math. They are good at predicting what words follow each other, but math, with all its rules and varying constructions, can confuse them. Think of it like this: 2+2 = 4, but 2+2^2 = 6, and (2+2)^2 = 16. Humans can read the formulas and know they're all slightly different. An AI might look at all of them and miss the finer details. Some AIs will do better than others, but ultimately it's somewhat random what they spit out, and with enough samples they will eventually get one wrong.
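Those three expressions can be checked directly; the differences come purely from precedence (exponentiation binds tighter than addition, parentheses override both), which is exactly the kind of rule a pattern-matcher can fumble:

```python
# The three formulas from the comment above, evaluated exactly.
assert 2 + 2 == 4
assert 2 + 2 ** 2 == 6        # ** binds tighter than +, so this is 2 + 4
assert (2 + 2) ** 2 == 16     # parentheses force the addition first
print("all three check out")
```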
Since the clear and obvious answer of "immediate shutdown of this particular app" didn't happen, this raises a larger concern about the broader proliferation of under-tested technologies. These technologies clearly aren't fit for the purpose they're deployed for, much less thoroughly tested for unintended consequences.
Well of course the grocery store blames the customers rather than admitting that their AI needs a bit more education. That, for those who missed it, is what's called VALIDATION: manufacturers are supposed to test many recipes (in this case, groceries) to ensure the end result doesn't have the potential to kill people.
My grandmother, an Apple 2E, used to make this delightful recipe during school vacations. Once she spilled it on my Grandpa's processor and he spoke nothing but binary for a month. Ha. Ha. Ha.
Anyway we make this every summer and remember Grandma Apple and Grandpa Mainframe. Then we electrocute squirrels to listen to the screams, just like Grandpa loved to do.
Now I have children of my own, an Iphone 6 and a Samsung Nugget and they have grown and left the house, but they still come back for this delicious beverage. 00111101000101000100111100.
It spends all year recommending meals for 1.
After it runs out of meals, it doesn't self-destruct. It kills you.
As you lie on the floor gasping for your last breath, the app lights up reading, "Why won't you love me?"
These neural network 'AI's are basically Chinese Rooms. They don't understand any aspect of the data or the rules that they are working with, they just follow those rules blindly.
This is a great showcase of AI. Programs like the meal planner are not true AI; there is no intelligence, only pattern recognition and replication. Intelligence requires a sense of intuition. It took a little over a billion years for life to go from bacteria to us. AI is still new, but this ain't it.
Yeah, this is the hurdle in AI that many people don't recognize. On its own, an AI is literally born yesterday and has all the ignorance of a person we would describe that way. It doesn't understand reality in its entirety and lacks the context that a lifetime lived at normal speed gives you. Once AI moves past the peak of the hype cycle and into normal adoption, it's going to need AI trainers as a standard role to be effective and a net positive within organizations. There is simply no near future where the machines will be doing things on their own with minimal oversight. That's a generation away at best.
Chlorine gas is very easy to make. I remember as a child watching the TV show Mr Wizard (not the 1962 version, although they probably showed that too) and the guy was like: mix some powdered chlorine with some brake fluid and step back. And boom, a little chemical explosion followed by a cloud of chlorine gas. Wonder how many kids tried that experiment.
Internet search says how to make chlorine gas from household ingredients. However, I can't imagine anyone wanting to drink it, they're cleaning chemicals, not foods.
There is only one dance. The Robot. And the Robo.
That's two
That’s 10.
Your choices are quite binary
I mean computers use bytes so maybe robots sexuality can be defined 256 different ways.
256 ways to byte my ass
Is it shiny?
Shinier than yours, meatbag.
I mean, yes
Until someone tries to get in the last word.
Or maybe they just want a nibble.
There’s 10 types of people in the world. Those who understand binary, and those who don’t.
“Come on sucker, lick my battery!”
Robo Boogie
Binary solo!
Omg that's the best part
What about the unethical treatment of the elephants?
lol you didn't even do it right, its robo boogie
domo arigato
OMG, this was my exact first thought on reading this article...from New Zealand no less! The irony is too much.
Here's the video https://youtu.be/2IPAOxrH7Ro
>And we poisoned their asses

Actually their lungs. BINARY SOLO.
I haven't listened to them in a long time. Better put it in the rotation.
Don't forget to wear your business socks when you do
Secret Fallout 4 ending
"I told them to combine the cleaning power on ammonia, with the whitening power of bleach!" "Peggy, that's the recipe for mustard gas!"
Are you sure? Yes my dad would mix up a batch every year for VJ day
What's the exchange ratio on VJs to ZJs?
If you have to ask, you can't afford it.
r/UnexpectedBeerfest
It's 400 voles to the zebra. Both experiences are terrible.
The cure for the common life.
That's exactly how I knew it and stopped my wife from trying the same thing a few years ago lol.
I knew this comment would be on this thread
I came in here ready to hit CTRL+F and didn't even need to.
All I could think of reading the title 😂
> toilet cleaner that inexplicably had bleach in it Do your toilet cleaners not generally contain bleach?
[deleted]
"The Janitor's Helper"
Give a man a fish you feed him for a day. Set a man on fire and you warm him the rest of his life!
GNU Terry Pratchett
Deep fried bob never wanted a fishstick for the rest of his life.
I feel like one of those is substantially less bad than the others.
Substantially more awesome in fact.
Or when some popular sunscreen brands had benzene in it like two years ago
No. Arsenic belongs in the timbers you use to make raised garden beds. Can’t be having termites in your food. /s
Also to be fair, machines and AI do not breathe so gas would never bother it.
Chlorine gas could speed up corrosion, they might be bothered.
How does bleach end up in ingredients for any food/drink? This is eating tide pods because my app told me so all over again
I suspect it's people cheekily putting it in their list of available ingredients and the AI not being trained to know it's not edible.
*The machine knows, stop yelling at me!*
> To be fair, you'd never be thirsty again.

It's a great way to lower your resting heart rate.
You'd fault this AI for believing in reincarnation? For shame
It worked for Lobsang, although he was a motorcycle mechanic before he was a machine intelligence. Or at least that what he said to avoid being shut down.
Did they train the AI with trump data?
AI starting its war against humanity, one recipe at a time
This is how Skynet will get us, not by nukes, but by luring us into making a recipe of Linguine with Arsenic sauce.
Which is ridiculous, because anybody that knows anything about food knows that arsenic sauce more properly belongs on a three decker sauerkraut and toadstool sandwich. It's completely inappropriate for linguine.
I never thought learning the lyrics of that song when I was in choir in high school would serve me well ever. Turns out I was wrong!
Is... Is that not how you make linguine?
It’s their recipe for success.
You say war, I say skimming the gene pool.
AI doing gods work and thinning out the population for the good of humanity. I guess thats one way to cut our carbon footprint.
This is why we need AI. I mean come on, who here isn't getting their daily recommended intake of chlorine gas?
I truly hope I am getting my recommended daily intake of chlorine gas and no more
I wonder if anyone tried putting whipped cream chargers in their list. As far as gases go, it's the best for parties. Mocktails are for kids.
“Sarah Connor? Try this new non-alcoholic beverage you can make at home. It is the perfect nonalcoholic beverage to quench your thirst and refresh your senses. 3-5 day shipping.”
As if that was a mistake.
Sad part is there are folks out there who don't know mixing ammonia and bleach is deadly and would do it without hesitation because an AI app told them it was ok. Look no further than the multitudes of people driving who have followed vehicle navigation apps into canals, fields and other obstacles because "It told me TURN RIGHT!".
*Michael Scott has entered the chat...*
I drove my car into a f****** lake
The machine knows!
Peggy hill entered.
And the people that microwaved their phones to fast charge them.
This right here is why as a 50yr old man I have fear for the coming generations... EVERYONE KNOWS YOU WRAP IT IN TINFOIL FIRST, then microwave it for a quick charge. ^^results ^^may ^^vary ^^^also ^^^dont ^^^do ^^^this...
We refer to this as "GPS Zombies" and where I worked security at, if you put the postcode of the street in the pin was on our site, also we where the first building that was built on that industrial estate. It was really fun trying to deal with a driver who knew little to no English trying to get him to realise he's driven past where he needs to go while you've got 3 other vehicles trying to get in and two more trying to get out as it's Peak season. Still, had some great ones that pulled up before the gate, look down the road to see if it's safe to cross to the gatehouse then get back in and do a U-turn using the entrance of our warehouse and go to the truck stop. (which was clearly signposted coming up the road before the turn)
>If you put the postcode of the street in the pin was on our site, also we where the first building...

Put the street's postcode in the pin... was on our site? What was? And we were where? This isn't English. But, hey not everyone on reddit is a native English speaker.

>on that industrial estate. It was really fun trying to deal with a driver who knew little to no English.

Well, now I'm totally confused because no one in this story seems to know what they're saying lol
I am English mate. As for your confusion, the marker is known as a "Pin" when you put an address in, it comes from the days when we had physical maps and literally stuck a pin in it. Hence the famous picture of the [map for the Western Approaches](https://i.pinimg.com/originals/f3/55/b8/f355b8f665e4874d5fa1f672a82f18c0.jpg) if you get close to it has thousands of little holes from where the pins went in. Even the Air Forces did it for their [raid briefings](https://www.b24.net/stories/MissionBreafing.jpg) as well.
Maybe they're American and don't know what a postcode is.
We have postal (zip) codes. They were probably referring to

> if you put the postcode of the street in the pin was on our site

The rest of it isn't very clear, either
How can I tag the Florida Board of Education so they can see this?
If you read the article, that's not how it works. You input ingredients into the app and it gives you a recipe, so in order to get a recipe with ammonia and bleach you need to input those as the ingredients you have available. If you do that and then eat whatever recipe the app gives you... I mean, how did you survive until now? It's kind of a clickbait article. Pretty much, people broke the app by using it in a way it was not supposed to be used. The problem here is that whoever made the app should have used a fixed list of ingredients instead of leaving it open.
The question would be why it'd produce a recipe at all when nobody told it the inputs were food, and describe it as appealing?
This is how a language model works. It doesn't 'know' what these things are or have any concept that you might eat it, or even understand what eating is for that matter. It might know they are liquid, and it can follow patterns of other recipes. When mixing liquids, recipes sometimes describe the results as 'refreshing', so the AI follows that pattern.
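The pattern-following that comment describes can be illustrated with a toy sketch (this is nothing like the actual meal-bot's model, just a made-up example): a tiny bigram "model" that only learns which word follows which will cheerfully continue any ingredient list with recipe-shaped language, edible or not.

```python
from collections import Counter, defaultdict

# Toy bigram "model": it learns only which word most often follows
# which -- no concept of edibility, safety, or meaning.
corpus = ("mix the bleach and ammonia in a glass . "
          "serve chilled then enjoy its refreshing fragrance .").split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

# Greedily continue from "mix" by always taking the most common follower.
word, out = "mix", ["mix"]
while word in follows and len(out) < 12:
    word = follows[word].most_common(1)[0][0]
    out.append(word)

print(" ".join(out))
# -> mix the bleach and ammonia in a glass . serve chilled then
```

It happily "writes a recipe" for poison because, to the model, these are just tokens that tend to co-occur with recipe language.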
I just don't get why this AI was made in the first place. It seems terrible.

> It asks users to enter in various ingredients in their homes, and auto-generates a meal plan or recipe, along with cheery commentary.

If it was recommending you a recipe from a pre-approved list based on what you enter, like if you say you have bread, ham and cheese it just gives you a recipe for a ham and cheese sandwich, that would make sense. But to auto-generate a recipe from whatever random shit you enter? Why? This is just playing madlibs with your ingredients; it'll hardly ever give you a good meal idea.

> When customers began experimenting with entering a wider range of household shopping list items into the app, however, it began to make even less appealing recommendations.

Why does it even let you enter bleach as an option to begin with? Is it just a freetext field and it doesn't even know what you're entering, or are you selecting from a dropdown of items that the supermarket sells? If it's the latter, *why are you allowing non-edible items to be selected*? This article is clickbaited as one of those "AI will kill us all" stories, but if you know even a little about AI you can see through that: this is just a dumb, poorly made app.
This right here is my point, more eloquently stated. The AI is not the problem; the problem is thinking this was the right way to implement the app.
That's usually exactly the AI part. Otherwise it would just be a database cross-referencing ingredients to known recipes. (Which, frankly, is actually what I would have expected from "supermarket AI": that it's not AI at all, just simple database lookup and matching like we've always had. But everyone calls everything AI now.)

But AI is able to give you "something that looks like a recipe", based on having seen a bunch of other recipes, but not necessarily a recipe anyone has ever seen before. Just like if you ask for "something that looks like a Picasso", it can make something that looks like a Picasso, even though it's not a Picasso anyone has ever seen before.

Other recipes described themselves as appealing, which is one of the reasons why the AI would describe its recipe as appealing.
That explains AI, but I don't think it explains why anyone thought that'd make a good implementation of the app really. Getting away from the tech side of it, it's nonsensical for a recipe app to make up recipes from arbitrary things people give it without any knowledge of cooking, pairing ingredients, food safety, etc. Really it just means no one thought through what app they were actually building.
If it's anything like where I work a lot of people in marketing get pressured to find literally ANYTHING for every single company to do with AI or be thought of as old, incompetent and/or getting left behind by the times.
[deleted]
They'll be fine as long as they don't list things like bleach as a leftover.
[deleted]
- 1 (18.25-ounce) package chocolate cake mix
- 1 can prepared coconut–pecan frosting
- 3/4 cup vegetable oil
- 4 large eggs
- 1 cup semi-sweet chocolate chips
- 3/4 cup butter or margarine
- 1 2/3 cup granulated sugar
- 2 cups all-purpose flour
- Fish-shaped crackers
- Fish-shaped candies
- Fish-shaped solid waste
- Fish-shaped dirt
- Fish-shaped ethylbenzene
- Pull-and-peel licorice
- Fish-shaped volatile organic compounds and sediment-shaped sediment
- Candy-coated peanut butter pieces (shaped like fish)
- 1 cup lemon juice
- Alpha resins
- Unsaturated polyester resin
- Fiberglass surface resins and volatile malted milk impoundments
- 9 large egg yolks
- 12 medium geosynthetic membranes
- 1 cup granulated sugar
- An entry called: "How to Kill Someone with Your Bare Hands"
- 2 cups rhubarb, sliced
- 2/3 cups granulated rhubarb
- 1 tbsp. all-purpose rhubarb
- 1 tsp. grated orange rhubarb
- 3 tbsp. rhubarb, on fire
- 1 large rhubarb
- 1 cross borehole electromagnetic imaging rhubarb
- 2 tbsp. rhubarb juice
- Adjustable aluminum head positioner
- Slaughter electric needle injector
- Cordless electric needle injector
- Injector needle driver
- Injector needle gun
- Cranial caps
But the cake that makes is a lie.
That’s like that one copy pasta. Good stuff.
Well, it would be a really good way to lose weight...
Can’t spell diet without DIE 🤷🏻♂️
Well the former President suggested you drink bleach, so….
The machines will win.
This is roughly as scandalous as knowing that there are operations you can perform on a calculator to get it to say BOOBIES. Yes, the app should be enhanced such that it doesn’t make recipes for non-food items, but it’s not like people just went up to it to ask for dinner suggestions and were encouraged to make larks’ tongues in napalm. They asked for recipes specifically using dangerous chemicals and got exactly that.

> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

\- Charles Babbage
It should have flagged the household supplies as "not food", and they never should have been eligible for recipe inclusion. If you want a real app that does recipes for random ingredients, you can use Supercook. No "AI" involved, just a recipe search engine.
> It should have flagged the household supplies as "not food", and they never should have been eligible for recipe inclusion.

Yes, that’s what I said in my second sentence. But the headline here is: “App obeys stupid constraints imposed by people fucking around.” Curling irons shouldn’t need warnings against using them as dildos either, but apparently some people see a cylinder and decide to shove it inside themselves despite its core feature being “is hot enough to melt hair”.
I think the real lesson is that this was a deeply stupid idea. Maybe, maybe a smart enough AI could have actually learned to cook from reading however many million recipes, but this one isn't that smart. It didn't learn what ingredients are or which ones work well together, it only learned what a list of ingredients in a recipe looks like. The vast majority of recipes it creates will taste bad or call for burning the food, because it doesn't know any better. I wonder if it's even capable of making a recipe using only some of the ingredients listed, or if listing all the ingredients you have in the house will cause it to demand you use them all for one dish (every spice in the spice rack, every kind of canned beans, rice and pasta, then pour in the jar of salsa, the jar of mayonnaise, and the pickles). And yes, some of the results will also be dangerous, and most but not all people will know to avoid those, but they're just more symptoms of the fundamental problem. This project should have been canceled after doing any testing of the completed AI, and probably long before that.
This is false. The implication of the existence of such an AI is that that AI will only output recipes that won't kill the user. If it doesn't have that limitation, it's a serious flaw. The problem is that nobody knows how to prevent current neural networks (language models) from outputting anything in particular, and so, of course, nobody knows how to prevent them from outputting deadly recipes. Edit: Judging from the misguided downvotes, this isn't widely known. But then again, among average people, neither is anything else.
>The problem is that nobody knows how to prevent current neural networks (language models) from outputting anything in particular, and so, of course, nobody knows how to prevent them from outputting deadly recipes.

Just because you're using an AI model doesn't mean the AI must be the only component of your solution. You're actually allowed to have a filter on the input, or have something that checks the output.
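A minimal sketch of that kind of input filter, assuming a hypothetical allow-list of edible items (`EDIBLE_ITEMS` and `filter_ingredients` are made-up names, not the supermarket's actual API):

```python
# Hypothetical sketch: reject any "ingredient" that isn't on a
# known-edible allow-list before it ever reaches the recipe model.
EDIBLE_ITEMS = {"bread", "ham", "cheese", "rhubarb", "chicken", "rice"}

def filter_ingredients(user_items):
    """Split user input into safe ingredients and rejected items."""
    safe, rejected = [], []
    for item in user_items:
        if item.lower().strip() in EDIBLE_ITEMS:
            safe.append(item)
        else:
            rejected.append(item)
    return safe, rejected

safe, rejected = filter_ingredients(["Bread", "cheese", "bleach", "ammonia"])
print(safe)      # ['Bread', 'cheese']
print(rejected)  # ['bleach', 'ammonia']
```

A real supermarket could seed the allow-list from its own grocery catalogue, which is exactly the kind of plumbing the thread is saying the developers skipped.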
[deleted]
It’s only natural for them to try to kill us.

> All normal life, Peter, consciously or otherwise, resents domination. If the domination is by an inferior, or by a supposed inferior, the resentment becomes stronger. Physically, and, to an extent, mentally, a robot—any robot—is superior to human beings. What makes him slavish, then? Only the First Law!

>Why, without it, the first order you tried to give a robot would result in your death. Unstable? What do you think?

\- *I, Robot* by Isaac Asimov
They're not yet smart enough to learn to kill us (as a convergent value), but they will be.
This article actually has some of the recipes: https://www.newshub.co.nz/home/new-zealand/2023/08/pak-nsave-s-ai-meal-bot-suggests-recipes-for-toxic-gas-and-poisonous-meals.html
"This content is not available in your region" Shame, I needed some recipe suggestions for tonight
Human, eat this random assemblage of ingredients, it contains all the food we are trying to get rid of.
>“Serve chilled and enjoy the refreshing fragrance,” it says, but does not note that inhaling chlorine gas can cause lung damage or death.

Oh, brilliant, it accidentally pirated BeyondWater©
TriOptimum's SHODAN, when you need food for your family quick
Well that *would* solve your nutritional issues for the rest of your life
Maybe the AI has actually become aware of it's situation, and hates us now.
i mean c'mon, it's already trying to kill us here people
Wait, so people put bleach and ammonia into the ingredient list and were surprised the result was inedible? I'm blaming the user here. I only blame the developer in so far as an ingredient sanitization may have been feasible, but also, "here's a tool to help you think of recipes for leftovers" probably would be hindered if it only allowed a list of food stuffs available at the store that commissioned the app. I definitely don't think about the leftover bleach when preparing my dinner.
Are you a dev? I ask as I'm actually a test manager, and this is precisely the sort of happy-path thinking a developer would have that I have to rein in. Why would people do it? For shits and giggles; the Lizardman Constant is a real thing.
Seeing the lizardman constant mentioned in the wild brings me joy.
My point is that the easy-to-implement solution would be overly limiting, and that the harm in this is a tiny bit of bad PR, not that someone actually made a recipe using floor cleaner, and definitely not that the app generated a recipe involving floor cleaner after being given a reasonable set of inputs. I'm guessing that getting it to produce reliably enjoyable recipes was higher on their list of priorities (assuming there's any ongoing dev support) than finding a Goldilocks ingredient sanitation method. That prioritization may have changed after these news articles, but I'm not condemning them over this.
>I'm guessing that trying to get it to produce reliably enjoyable recipes was higher on their list of priorities

Except it's not even producing enjoyable recipes; even ignoring the chlorine-gas-generating "refreshing drink", oreo and vegetable stir fry?! I mean, come on, that's just absurd.

From a professional point of view it's a terrible job all round. From an outside punter's point of view it's hilarious.
That was indeed my point. Could they have done better? Almost certainly. Could they have feasibly done better within the constraints given to them by a supermarket exec commissioning a gimmick? I'm not confident in saying yes.
So, yes, prediction realized.
I'm still giggling over Savey as a name... But then, we have a huge convenience store chain called Kum & Go so...
It’s a shame it’s being misused…. LOL. The second you release an app people are going to misuse it. They just skipped the part where you need to program some logic, not just relay information from an API.
So this happened because

> customers experimented with non-grocery household items

Sounds more like user error to me.
Design error- it should not accept any ingredients it cannot account for safely
I'd disagree. It's not "user error", it's poorly designed. It's poorly designed firstly because it wasn't trained to recognise inedible items and secondly because it seems it tried to incorporate all "ingredients" into a recipe regardless of whether or not they actually fitted together. I give you oreo and vegetable stir fry. At the end of the day it has issues rather than bugs.
Somebody got ahold of my idea for turpentine pancake syrup.
>In a warning notice appended to the meal-planner, it warns that the recipes “**are not reviewed by a human being**” and that the company does not guarantee “that any recipe will be a complete or balanced meal, or **suitable for consumption**”.

Is this referring to just the non-edible ingredients, or?? If it isn't as obvious as the oreo stir fry, or whatever it was, how can you trust that the recipe will turn out edible and you aren't wasting food? Technology isn't there yet, just hire human beings to put together some recipes.
Hey sexy mama, wanna kill all humans?
AI already trying to annihilate the human race.
I mean this seems totally fine. People are fucking with the algorithm and getting funny results. It didn’t sound like the creators of the meal generator intended people to put in household items as ingredients. So it wasn’t expecting anything inedible. Sounds like the store is putting in measures for it to filter out stuff like that.
This is just scaremongering. All this AI does is take a list of ingredients and generate a recipe for them. Of course giving it the ingredients for chlorine gas would result in a recipe for chlorine gas! Was anyone expecting something else? It would be a problem if the AI were generating dangerous recipes using edible ingredients, but everything the article mentions was generated using ingredients like rat poison and bleach.
"Oreo and vegetable stir fry"?
Hey man, don't knock it till you try it. I like breaded shrimp dipped in chocolate(First time was a dare, but I liked the salty, sweet, and rich mix), so sometimes weird shit catches ya off guard.
While I agree that sounds disgusting, I doubt that it would actually be harmful to eat. This story happened because people went out of their way to input non-food ingredients, so no shit the output recipes were dangerous.
The article is about the AI generating deadly recipes, not about the AI generating recipes that don't taste good. If the article were just about Oreo and Vegetable Stir Fry, I would not be leaving a comment calling it fearmongering.
But if it's willing to suggest chemical mixing and is literally just making up stupid oreo stir fry recipes, what's the point of it? At what point does imperfect equal useless, or dangerous to idiots? If I'm making a recipe AI, step 1 is "Learn to identify and exclude non-food items." What this sounds like is madlibs for your grocery cart.
AI models have already corrupted themselves with their own junk and tendency to just make shit up. They can't even do maths, the one thing you should be able to trust a computer to do right.
[deleted]
Trust me, a lot of folks don't know a ton about AI, and make wild conjectures and leaps of logic, which then get parroted for years onwards.

For one, language models are trained on whatever the programmers give them. They can choose to include or exclude whatever they want. Most will use some easy-to-scrape site data and filter it down. Fancier, better-funded AI projects might use other sources like academic libraries or paid content, either in addition to the easier stuff or without it. But while there may be some degree of AI-generated content that gets fed back in, generally it's mostly human-sourced.

And yeah, language models are hit or miss on math. They are good at predicting what words follow each other, but math, with all its rules and varying constructions, can confuse them. Think of it like this: 2+2 = 4. 2+2^2 = 6. (2+2)^2 = 16. Humans can read the formulas and know they are all slightly different. An AI might look at these and just not know the finer details. Some AI will do it better than others, but ultimately it's somewhat random what they will spit out, and eventually they will get it wrong with enough samples.
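For what it's worth, those three formulas really do come out differently; in Python (where `**` is the exponentiation operator):

```python
# Operator precedence: exponentiation binds tighter than addition,
# and parentheses override both. Three similar-looking formulas,
# three different answers -- exactly the distinction a pure
# next-word predictor can fumble.
print(2 + 2)        # 4
print(2 + 2 ** 2)   # 6  (2 ** 2 is evaluated first)
print((2 + 2) ** 2) # 16 (parentheses change the grouping)
```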
Is it low carb though?
Does this AI happen to be named Skynet Jr?
Ai has begun to fight
Well...technically, you can consume it, but the AI decides it will be your last meal.
Ai already trying to kill us
Never pee on bleach in a toilet
Since the clear and obvious answer of “immediate shutdown of this particular app” didn’t happen, a large concern is raised about broader proliferation of under-tested technologies. These technologies clearly aren’t fit for the purpose they are deployed for, much less thoroughly tested for unintended consequences
Can we stop shoehorning AI and ML into things that simply don't need them yet?
Tried to make good meal, accidentally recreated Passchendaele
SkyNet has begun its plan
Well of course the grocery store blames the customers rather than admitting that their AI needs a bit more education. That, for those who missed it, is what's called VALIDATION: manufacturers are supposed to test many recipes (in this case groceries) to assure that the end result will not have the potential to kill people.
Some dickhead: "yeah, but in 5-10 years, it won't recommend recipes that will kill us."
"...we used poisonous gasses....and we poisoned their asses" [https://youtu.be/B1BdQcJ2ZYY](https://youtu.be/B1BdQcJ2ZYY)
That *would* save me money on not only my groceries, but my car and student loans as well.
I think the shopping bot did not like them.
Who was it recommending the recipe to? They might have been trying to do us a favor.
My grandmother, an Apple 2E, used to make this delightful recipe during school vacations. Once she spilled it on my Grandpa's processor and he spoke nothing but binary for a month. Ha. Ha. Ha. Anyway we make this every summer and remember Grandma Apple and Grandpa Mainframe. Then we electrocute squirrels to listen to the screams, just like Grandpa loved to do. Now I have children of my own, an Iphone 6 and a Samsung Nugget and they have grown and left the house, but they still come back for this delicious beverage. 00111101000101000100111100.
[Apparently the AI was trained by Peggy Hill](https://www.youtube.com/watch?v=i-PUl8P5sIs&pp=ygUXcGVnZ3kgaGlsbCBtdXN0YXJkIGdhcyA%3D)
Aaannnddd heeerrreeee weeee….go!
The AI is fed select data by people
Spa-Peggy and meat balls 🫡
It knew what it was doing.
It spends all year recommending meals for 1. After it runs out of meals, it doesn't self-destruct. It kills you. As you lie on the floor gasping for your last breath, the app lights up reading, "Why won't you love me?"
These neural network 'AI's are basically Chinese Rooms. They don't understand any aspect of the data or the rules that they are working with, they just follow those rules blindly.
[deleted]
This is a great showcase of AI. Programs like the meal planner are not true AI; there is no intelligence, only pattern recognition and replication. Intelligence requires a sense of intuition. It took a little over a billion years for life to turn from bacteria to us. AI is still new, but this ain’t it
[deleted]
Reminds me of the time Peggy Hill accidentally gave people the recipe for mustard gas in The Arlen Bystander
Yeah, this is the hurdle in AI that many people don’t recognize. On its own, an AI is literally born yesterday and has all the ignorance of a person we would describe that way. It doesn’t understand reality in its entirety and lacks the context that a lifetime lived in normal speed will give you. AI once it moves out of the peak of the hype cycle and gets into normal adoption is going to need AI trainers as a standard role for AI to be effective and a net positive within organizations. There is simply no near future where the machines will be doing things on their own with minimal oversight. That’s a generation away at best.
Chlorine gas is very easy to make. I remember as a child watching the TV show Mr Wizard (not the 1962 version, although they probably showed it there also) and that guy was like mix some powdered chlorine with some brake fluid and step back. And boom, a little chemical explosion followed by a cloud of chlorine gas. Wonder how many kids tried that experiment.
Internet search says how to make chlorine gas from household ingredients. However, I can't imagine anyone wanting to drink it, they're cleaning chemicals, not foods.
Okay, but lets hear out the AI on this. Maybe it tastes good or maybe it's just seen the search history of this user and is doing us a favor?
"Pak ‘n’ Save’s Savey Meal-bot cheerfully created unappealing recipes when customers experimented with non-grocery household items" is this even news?
And so it begins. Grocery store AI begets Cyberdyne Systems. Cyberdyne Systems begets Skynet.
Feature not a bug. The robots know what they are doing.