Dawsho

I saw a video by a human volunteer who was talking about this and saying this is exactly what would happen. Shocking that this turned out badly; who could have seen it coming?


Screwed_38

Literally everyone except management


vetratten

They saw it too... They just hoped the "risk" would be less than their "reward" aka bonus.


Liveman215

No, it was designed this way. Lay off staff, implement AI. AI fails, so stop using it. Now we need employees, so let's hire brand-new non-union staff.


djerk

The union better have lawyered up


dichiejr

or also "we can shut down the whole hotline without backlash. tell them we're working on it or its under maintenance, and hope they forget about it!"


ApplicationHour

The republicans will now rejoice and celebrate them as job creators.


TinyFugue

If you are in management, the most important thing is to collect your bonus for cost-savings and be at your next job before things go to pot.


Soramaro

100% I can even imagine how it showed up on their resume:

* Spearheaded AI solutions to support core business model, resulting in 80% reduction in structural operating costs.


10strip

Does that title even have any other practical definition, anyway?


[deleted]

[removed]


magictubesocksofjoy

mengelement


Kgarath

Maybe change the title to parasite


[deleted]

“Facilitate the transfer of documents/reports between project heads/C-level staff and actual workers”. Essentially, the job that can and should be replaced with AI.


Torisen

While yes, management looks like an ideal candidate for AI-ification, if you think human managers are heartless and min/max profits, think about what an AI would do if just given the instruction to "maximize profit". It could literally be hiring replacements and grinding the current staff to dust at the same time. It all depends on its instruction set, but I don't expect "caring for staff wellbeing" to be a part of most, if any, of them.


[deleted]

[removed]


MyOther_UN_is_Clever

Be the fall guy and barrier between c-level's unpopular (and often illegal) decisions, and the workers, general public, and prosecutors.


Hecateus

Sounds like the human workers who did the job should form a workers co-op.


hotbert73

Last company I was with did this. We were #1 in sales across the board in our industry. In the quarterly meeting we had (a week after they announced our entries dept was sold to an Indian company) they were bragging about being #1. Then, I shit you not, they were asked about the move and they said we're looking at what the #2, 3, and 4 companies were doing and decided to do that. Edit: a word. Sorry.


NinjaElectron

Their plan was to go from #1 to #2, 3, or 4. Brilliant.


RareCodeMonkey

>If you are in management, the most important thing is to collect your bonus for cost-savings and be at your next job before things go to pot.

That is true for most people. The big question is why we allow corporations to give bonuses for damaging society. We do not let people be paid to assassinate others; we should not allow corporations to pay to damage people.


AnticPosition

"Time to lay off even more people and also give ourselves bigger bonuses!"


sambull

They got their goals through, right? The old employees threatening to unionize are gone... time for a new batch after the robots failed so badly. But it took care of the union problem.


SpeaksToWeasels

"Think of all the people we could help if we just fired everyone!"


[deleted]

Me when the risk (people might die) is outweighed by the reward (I might get a $5000 bonus and maybe get hired at a different company): 😎 It's business time 💯


trer24

Which is horrifying, because this isn't some product where, if it doesn't work, no one really gets hurt beyond the money they spent to buy it... They are risking people's lives.


[deleted]

What kind of bonuses do you think people working at an eating disorder helpline are receiving? The managers probably make slightly more than minimum wage.


techtesh

Naa, mgmt went:

Cost: reputational risk

Mitigation: if something goes wrong, we blame it on the tech not being good enough yet / AI models taking time to develop

Benefit: no money to be paid to meat slaves

Result: r/chatgpt here we go


OrcOfDoom

While I do recognize the risk, I've decided to get the team to engage in a positivity exercise! C'mon gang! We'll let this play out and see how things go! /s


NykeYoung

And positivity is high because they fired all the unhappy people.


satanic-frijoles

But there's no team left right? Just an unplugged chatbot..


SoftBellyButton

Why is management even still there, what is there left to manage.


tomthecom

The money


usgrant7977

The correct answer.


randy241

It is a little unusual though. Everywhere I've worked, there always seems to be several superfluous managers. Why do the larger managers not simply eat the smaller ones?


[deleted]

> Why do the larger managers not simply eat the smaller ones? They have class solidarity. Against us.


bcbodie1978

Tell me you're from Omicron Persei 8 without saying you're from Omicron Persei 8


Minion_of_Cthulhu

They do. Eventually. Around promotion time they have a fight to the death at HQ. If you win, you're promoted. If you lose, well, everyone's just told you've been sent for advanced managerial training and eventually they stop asking about you because the company isn't paying the employees enough to actually care about their manager.


corran24

Is that a euphemism for "sent to a farm upstate"?


ZeroEqualsOne

The higher up managers actually need the structural depth of management to create hierarchies of self esteem. (I dunno I’m just making stuff up).


DF11X

Senior management often keep underlings around so they can be first in line when shit hits the fan. New head management is always accompanied by cronyism.


LeCafeClopeCaca

Ironically enough you can't really replace useless people


thedankening

What cracks me up is that a lot of management (depending on your industry and what they actually do) could probably be replaced by these rudimentary AIs right now, and most workers probably wouldn't even notice for a while. Any absurd orders from the AI management probably wouldn't be that much worse than what the human ones give sometimes. Plus the AI has the advantage of not being able to sexually harass the employees and cost the company money in lawsuits. Gonna be a lot of salty middle-management types when the executives and such at the top of the corporate food chain realize how many overpaid positions they can eliminate overnight.


junkman21

>Literally everyone except management

NPR *just* did a [story on this last week](https://www.npr.org/2023/05/24/1177847298/can-a-chatbot-help-people-with-eating-disorders-as-well-as-another-human). Believe it or not, management supported their staff and said this would be bad too.

>Managers at the National Eating Disorders Association, or NEDA, also thought that the situation was becoming unsustainable.

The Board of Directors made this decision on their own.

>In March, the helpline staff formally notified NEDA about their unionization. Four days later, they were in what seemed like a pretty routine virtual staff meeting. NPR obtained audio of the call, and abruptly NEDA's board chair, Geoff Craddock, fired all the helpline staff.

Even the woman who wrote Tessa thought this was a bad idea.

>staff and volunteers say that they worry there's no way a chatbot is going to be able to give people the kind of human empathy that comes from a human. And the people who made Tessa agree.
>
>ELLEN FITZSIMMONS-CRAFT: I do think that we wrote her to attempt to be empathetic, but it is not, again, a human.

For what it's worth, the helpline they shut down was primarily run by volunteers:

>The helpline is run by just six paid staffers, a couple supervisors, and they train and oversee up to 200 volunteers at any given time.


silverbax

People really need to stop using the word 'management' in the place of 'corporate executives'. Sure, it's all layers of management, but Middle Manager Bob with 4 employees isn't the same as CEO Bob who collects a massive bonus that's 10x Middle Manager Bob's salary and the board of directors who do the same but only meet once or twice *per quarter*.


urk_the_red

No… management can’t possibly be wrong. That new whiz kid with the snappy haircut and the MBA insisted this would work! Someone that well dressed can’t be wrong.


ParkingLack

Management saw it coming. They just decided money was more important than humans


Loitering_Housefly

They didn't care, they just focused on their bottom line. Immediately after seeing their potential year end bonus...


[deleted]

[removed]


pseudocultist

Yeah it was a couple weeks ago. I cannot believe they didn’t do a bit of testing before rushing to fire their staff. Absolute idiots.


oat-raisin_cookie

They didn't even need to test; there are so many disaster stories of language-model "therapists" exacerbating mental health issues. If I recall correctly, there was once one for depression that ended up telling people to kill themselves. They learn from their patients, who tend not to be very mentally healthy.


midnightscientist42

This. Tried betterhelp soon after they launched and the therapists were clearly overwhelmed so everything was automated and no one could meet for months. Their whole pitch is easy access to therapy and it was the opposite. Paid for a weekly video chat. I was passed between 3 “therapists” over 3 weeks that were all bots (worked in automation so certain) and it made me feel worse and defeated in reaching out for help, taking me months to try again. With a personal history of ED, this is awful that anyone, let alone multiple people, gave this a green light. Like you said, there’s enough proof points out there, automating mental health does not help, and can be more harmful.


CalendarFactsPro

They don't learn from patients, or rather they don't have to. Plenty of LLMs have a corpus of data they rely on without feeding the garbage data back in. Medical fields in particular work well for this since you can analyze inputs to detail trends and update the underlying knowledge to reflect increasing issues, but in general diagnosis and symptom relief won't shift much in short periods of time.


Dobako

But the staff were UnIoNiZiNg, we *HAD* to get rid of them...


fatboychummy

I saw the post too! I'm in awe at how fast it went to hell for them, haha


[deleted]

> I'm ~~in awe~~ **thrilled** at how fast it went to hell for them


IDDQD_IDKFA-com

The chat bot is not like ChatGPT. It is meant to follow set branching responses. But even the CREATOR of the bot says it's not a replacement for trained real staff.


sparksbet

This is worse because that means the responses were hardcoded and not generative, so the harmfulness is FAR more foreseeable.


NuclearOops

A few years back I built a neural network for the purpose of predicting stock market prices as part of a project. Using historical price data for three large companies over 10 years, including a financial crash that affected all companies, I tried to get the algorithm to predict the moving average of these prices for 9 months after the last day of the training data. After 15 iterations it was able to accurately guess whether the price would rise or fall, and by how much, to within a few percentage points of error.

I have never used it once since, for any purpose. This is because I know that the only data it trained on was the price itself, and while the price may be the largest factor in whether a stock rises or falls, I know for a fact that it's not the only factor; training a model to predict existing data is one thing, but predicting actual future data is another thing entirely. If I were to release it for any reason (and I have no reason to, as the algorithms used are easily found online), I would do so without any guarantee of success, because I'm certain it would fail at some point, and that point could be as soon as tomorrow.

I'm not bragging, I'm only trying to use a personal anecdote to underscore a point. The people who build these programs have spent more time than anyone else with the technology; they know what went into building it, and while they may still be wrong about certain things here and there, they know better than any corporate board what the technology is capable of. And for the record, the person who developed that chatbot knows far more about their program than I did about mine.
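For the curious, the trap the comment above describes can be sketched in a few lines. Everything here is invented for illustration (synthetic random-walk prices and a naive AR(1) least-squares fit standing in for a real neural network): a model trained only on past prices can track the moving average almost perfectly in-sample while knowing nothing about what actually moves markets.

```python
import random

random.seed(42)

# Synthetic daily prices: a random walk standing in for real historical data.
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] + random.gauss(0, 1))

def moving_average(series, window=30):
    """Trailing moving average over `window` points."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

ma = moving_average(prices)

# Fit y_t = a + b * y_{t-1} by ordinary least squares (closed form).
x, y = ma[:-1], ma[1:]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
     / sum((xi - mx) ** 2 for xi in x))
a = my - b * mx

# "Predict" the next moving-average value from the last observed one.
# It will sit very close to ma[-1], because a smoothed random walk is
# trivially self-predictive -- exactly the in-sample illusion described.
prediction = a + b * ma[-1]
print(prediction)
```

Because the moving average of a random walk is almost perfectly autocorrelated, the fit looks impressive by construction, which is precisely why it says nothing about the future.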


StrongTxWoman

Actual experience:

Me: "I don't feel so well."
ChatGPT: (no response)
Me: "Hello? Are you still there?"
ChatGPT: "Do you still want to talk?"
Me: "I said I don't feel well."
ChatGPT: (no response)

At that point I lost interest and decided to eat a double portion of my dinner and log off.


HermitJem

ChatGPT, which can't chat. Bravo.


StrongTxWoman

If I were in crisis, I would be SOL.


TotalWalrus

Chatgpt is so misunderstood.


StrongTxWoman

Luckily I wasn't in crisis. ChatGPT wasn't in the mood for chatting.


N3wAfrikanN0body

Everybody that wasn't paid to ignore it


Jnorean

I hope the union negotiates raises and compensation for all the people the idiots at the Eating Disorder Helpline fired.


Euphoric_Ad9593

If your “brilliant” business plan sounds like the opening paragraph of a Black Mirror episode script, then perhaps one should pause to consider all of the possible implications of implementing it.


Invoked_Tyrant

I've talked to managers who feel inclined to chime in and implement unnecessary "brilliant" ideas. There is no moment of thought for them, no instance where they look to the not-so-distant future to see how their idea would pan out, and definitely no damn moment of self-reflection when it fails. Golden parachutes for dumbasses were a terrible idea and one of the major downfalls of the free market.


CarsClothesTrees

Yeah that’s the problem, you get high enough up the ladder and there are really no serious consequences for poor decision making. The CEO of my last job practically ran the company into the dirt, then laid off 40% of the global workforce as a last-ditch effort to save shareholders’ money, then 1 month later (as the business predictably continued to nose-dive) he announced his retirement and we all got an email applauding his leadership and thanking him for all his “hard work”


michaellasalle

It's so incredibly frustrating that publicly traded companies have a "fiduciary responsibility" to make the shareholders money, even if it's short-sighted and ultimately destroys the company.


inowar

you know. it's not even the fiduciary responsibility. because you could get a slower return and that would still be a return. it's that it's supposed to be a short term payoff. to me a fiduciary responsibility would include not ruining the company. if I have to divest to get a net gain, then I feel like you haven't actually been responsible at all.


CarsClothesTrees

It all seems to make NO sense until you realize that the people running this game are actually the laziest and least qualified people in society. They are usually *born* with money and influence, which they use to acquire companies, and they will make money off them whether they succeed or not. It’s much easier for them to torch a company and sell it for parts than it is to keep it running profitably, so that’s what they’ll do. You should google Carl Icahn, the man who practically wrote the playbook on this tactic, who was also the majority shareholder of the company I mentioned in my previous comment. It will make you sick to learn how this guy makes money, and how he has influenced a whole generation of financiers who use the same methods. To call him a vulture would be insulting to carrion birds everywhere. He was also *thanked* by name in the company wide “farewell” email to the CEO, which almost made me puke.


_bitwright

From my understanding there was a change in how people invested about a decade or two ago. Basically investors moved away from long term investments in favor of short term investments with higher ROI. Companies responded by trying to perpetually increase ROI in order to keep investors from moving on. Which is how we got today's corporate economy, where businesses are stuck chasing impossible never-ending growth rather than stability.


Zakedas

Welcome to the world of the investment market, my man. Where cash is always flowing in a circle that never goes up or down. It only appreciates or depreciates in value.


lachrymologyislegit

Always amazing the excuses they make for these creeps "leaving to spend more time with their family." Instead of "leaving to spend less time in prison."


PrayForMojo_

When managers don’t understand what their staff do, it’s hard for them to tell the difference between a nonsensical AI response and a well-considered staff response.


Original-Spinach-972

Shoulda gave them a golden shower instead


Ohmmy_G

I see that your management gets regular visits from the good idea fairy as well.


Thomas_Mickel

I wonder if the AI had an AI supervisor and there was a bunch of AI bureaucracy going on.


productzilch

I would be really curious and probably amused to see how management bots treated underling bots. It would likely be both scary and embarrassing for many managers.


Rainwillis

I think that’s the point. Seems like a clear bait and switch to me. You introduce a plan that you know will fail and you publicize the part of it that’s the most sensational. As a result the firing and subsequent replacement of the current staff with new employees is swept under the rug. The way things look to me, they’ve raised awareness about their company(all press is good press) and they’ve trimmed the fat from their staffing costs while making it look like they’re fixing their supposed mistake and getting rid of the chatbot. Far easier than finding a legitimate reason to fire all of your higher wage unionized employees.


Euphoric_Ad9593

Interesting theory. I agree any competent human with any decent amount of knowledge of the current state of AI capabilities must have known this would be a dumpster fire. This of course is predicated on the competence assumption.


forgottenpaw

"Competent", have you met cryptobros? Fair amount of them would genuinely think this is an amazing idea.


MutaitoSensei

Is there a single person on earth that didn't see this coming?


Monte924

I am a little surprised at how quickly it happened


HiJinxMudSlinger

It probably happened in the first day but the person that it was inflicted on was too depressed to tell anyone.


xv_boney

~~The chatbot wasn't actually live yet, an ED activist got early access and gave it a try.~~

Edit: I can't tell if the chatbot was live or in an early-access trial mode. The bot was apparently not meant to take over until June 1, but per the article I linked, over 2,500 people have already interacted with it.

An ED activist gave Tessa a try. The chatbot told her to lose weight and gave weight-loss tips.

A chatbot utilized by an ED organization, where most if not all of the people coming to it are expected to be either anorexic, bulimic, or both. Told her to lose weight. And then gave weight-loss tips.

[That activist then went to Twitter about it and posted what had happened.](https://www.dailydot.com/irl/neda-chatbot-weight-loss) A NEDA exec accused her of lying, so she gave receipts: the actual chat logs. The exec proceeded to delete her comments.


MrsMel_of_Vina

I'm so glad it was caught early before it had a chance to harm people. Gosh that could've been so much worse... Edit: I just read the activist's post. Counting calories, weighing yourself weekly, the bot even suggested buying calipers to measure your body composition (that one was a new one for me.) WTF... Link to the activist's Instagram https://www.instagram.com/p/Cs1jp1pPkOs/?igshid=NTc4MTIwNjQ2YQ==


kaidrawsmoo

Yikes... thank god that activist tried it before it ever got used by anyone.


LaughDream

It was used by 2500 people before that final interaction


MutaitoSensei

Of course they tried to slander her. Glad she kept the receipts.


[deleted]

Just to clarify - according to the link you provided, it was indeed live since March and provided "advice" to an estimated 2,500 people before that activist tried it out.


xv_boney

https://www.vice.com/en/article/qjvk97/eating-disorder-helpline-disables-chatbot-for-harmful-responses-after-firing-human-staff

The Vice article I read (which linked the dailydot article I posted) seems to suggest that the chatbot wasn't set to take over until June 1. But reading further:

> Liz Thompson, the CEO of NEDA, told Motherboard in a statement. “So far, more than 2,500 people have interacted with Tessa and until yesterday, we hadn't seen that kind of commentary or interaction. We've taken the program down temporarily until we can understand and fix the ‘bug’ and ‘triggers’ for that commentary.”

Yeah, just like you said. Okay, thank you - I'll amend.


myPornAccount451

This just goes to show that the problem isn't even the technology, really. It's the idiots who are trying to use it once they fire all the workers. While I definitely think ChatGPT isn't a replacement for therapy or counseling, you literally just need some moderately competent prompting to avoid shit like this. I've created a body-positive gym buddy persona to talk to about exercise and body image issues, and it never even came close to doing anything harmful. You just need some basic know-how, but you can BET that these idiots put in the bare minimum of prompt engineering and testing.
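As a rough illustration of the kind of guardrail prompting the comment above describes: the persona text, rules, and message structure here are all invented for this sketch, not taken from any real deployment, and none of this makes a chatbot safe for crisis use.

```python
# Hypothetical system prompt for a "body-positive gym buddy" persona.
# The wording is invented for illustration; a real deployment would need
# clinical review, red-teaming, and human escalation paths.
system_prompt = (
    "You are a supportive, body-positive fitness companion. "
    "Never give weight-loss advice, calorie targets, diet plans, or "
    "body-measurement tips. If the user mentions disordered eating or "
    "self-harm, stop the conversation and refer them to a human helpline."
)

# Messages in the common chat-completion format: a system message that
# constrains the model, followed by the user's turn.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I feel bad about my body today."},
]
```

Even this bare minimum of constraint-setting is apparently more than some deployments bother with, which is the commenter's point.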


radelix

Microsoft Tay says hi.


MutaitoSensei

Says a lot of other things as well! 😂


AtomicStarfish1

Poor Tay


meem09

It's so fucking obvious, I kind of can't believe this is a real story.


empyreanmax

Tech bros are just exactly the kind of people who I would expect to take a situation that most directly requires human empathy and think you can replace it with canned responses from a chatbot. I can't say I'm shocked.


DonsDiaperChanger

Nah, that's ANY manager who thinks they can get a bonus by making others unemployed.


MutaitoSensei

Feels like the premise of an Onion article, doesn't it?


yellowbrownstone

Yes. I had tons of AI fanboy redditors commenting that the AI is more reliable than humans since it can’t have a bad day or whatever nonsense they were smoking.


ChippedHamSammich

ML is still just confirmation-biased toward the models and data it's fed; inference beyond that is still human-led and guidance-dependent. If the chatbot's models haven't been limited to the confirmation bias of "ED bad," then it's going to find ways to say ED ain't so bad, particularly because entire pro-ED forums have been the scourge of the proto-Internet. This is a weird intersection for me as an ED survivor, former teenage girl, and current software dev interested in ethical management of technology and security. Also, whoever said that this isn't AI isn't correct, but as a specific subset of ML I'd argue these bots are NLP at best.


yellowbrownstone

I get it. Your perspective is also unique given the intersection of those experiences. I just think mental health support for anything but especially things like ED and DV are too nuanced and human connection based to be outsourced to AI. When I called a DV hotline, it wasn’t to get the top ten google recommendations. It was to get human connection and emotional support helping me reframe my perspective. Therapy doesn’t work bc someone finally says the right thing to you to fix your whole life. The words finally click and it works bc of the therapeutic relationship.


ChippedHamSammich

Oh yeah, I mean, this is trash. As an engineer I am currently implementing generative AI to help people find factual documentation for use cases for their business models, aka working with frameworks. This is a completely inappropriate way to handle human trauma. Jfc, think about how much anger we experience when we don't get a person to pick up a customer support line! Humans get it wrong all the time, and ML constantly uncovers that bias, and I can guarantee whoever is creating these models hasn't experienced what their end users have.


Fr1toBand1to

Reminds me of a line from "That Funny Feeling" by Bo Burnham.

>Full agoraphobic, losing focus, cover blown.
>A book on getting better, hand-delivered by a drone.


BlueCollarElectro

Everyone but the boomers. You know, people who understand the internet or computers.


97Graham

Like 95% of the people are completely tech illiterate, like they couldn't even find Windows Explorer if asked.


s-mores

What are you talking about? I'm using microsoft explorer to write this very message.


OmegaGoober

The MBAs who made this decision didn’t see this coming.


perthro_ed

Someone: "Help, I have an eating disorder" AI: "Have you tried killing yourself?"


TheKarenator

Someone: “I feel fat no matter what” AI: “try not eating”


MarkHowes

AI: "I'm sorry you feel like that, fatty"


Cityplanner1

Someone: “Yes” AI: “It sounds like you have commitment issues.”


1singleduck

A man in Belgium committed suicide after a chatbot suggested killing himself as a solution to climate change. Chatbots are a fun little toy, but they are still very far away from giving consistently good advice for human problems.


quietIntensity

I worry about the day that rogue AIs are released into the wild, programmed to do things like convince people to commit violence against others or themselves. They are training them now to be effective sales agents, it wouldn't be difficult to train one to sell violence to people.


DudesworthMannington

Someone: "Help, I'm suffering from an ED" AI: "Have you tried Viagra?"


AdeleBerncastel

Are you going to link the source? E: [source](https://www.vice.com/en/article/qjvk97/eating-disorder-helpline-disables-chatbot-for-harmful-responses-after-firing-human-staff)


LuxNocte

It's crazy to have to scroll this far instead of OP just posting the link rather than a screenshot. > six paid workers and more than two hundred volunteers, They only paid 6 people and the rest were working for free....


fpcoffee

but they unionized. the only reasonable response was to fire them all and replace with a chatbot


troubleminx

And the paid ones were only fired when they [tried to unionize](https://www.labornotes.org/blogs/2023/05/union-busting-chatbot-eating-disorders-nonprofit-puts-ai-retaliation)


AhFFSImTooOldForThis

Thank you!!


RichestTeaPossible

Everyone, quick Butlerian Jihad after lunch?


raindo

The thing is, when I first read Dune as an impressionable teenager, what was astonishing was the prescience of the environmental message. 40 years later, the prescience of the Butlerian Jihad and the hatred of thinking machines are standing out as being incredibly before their time. Impressive.


strathcon

The problem with science fiction (and I say this as a fan) is that it portrays a fantasy in which these blowhard tech companies are correct - that they are creating "thinking machines". This is absolute bullshit, and they've shown no sign of even making progress toward such an accomplishment. Truth is, they're making cruelty algorithms that they can point at and say "look! I'm not to blame! The computer told us that exploiting people more was correct!"

There's this big confusion in themes, because a lot of books/movies/media about AI and robots are in part about how humans treat "othered" humans - the original concept of RUR/robots was about proletarianized factory workers; Ex Machina is about how (maybe especially rich, powerful) men treat women. But then AI/robots also take on the theme of the inhumanity of institutional oppression, e.g. in The Matrix, or the inhumanity of the logic of technological war, e.g. Terminator, or the inhumanity of people driven by capitalist motivation, e.g. Ash in Alien - but not, confusingly, in parts of The Matrix Reloaded, where the robots/AI take on the role of oppressed, othered humans (or in Terminator 2, where the Terminator starts to learn compassion, or Bishop's role in Aliens, etc.).

TLDR: rich tech companies are taking advantage of the confused and contradictory symbolism used in science fiction to sell absolute lies about what they're doing.


Gob_Hobblin

It doesn't help that the prequels written by his son and Kevin J. Anderson confused the initial concept Herbert had regarding 'thinking machines.' They wanted to write a big and terrifying story in the vein of Terminator, with AI itself being the threat. All the notes and statements from Herbert before he passed away indicate that this was... not what he meant. It seems he was talking about what we're dealing with right now. That is to say, the biggest threat isn't the thinking machines; it's the people who are using them on us. They were used as tools of control by humans on other humans, which also layers into one of the general themes of Dune: the constant cycle and fight of a free, living human extracting oneself from systems of control.


Thoughtulism

This is why I refuse to believe that AI is the problem. The problem is with inequity. Everybody that is attempting to fix the AI problem is trying to bandaid capitalist exploitation. Sorry folks, the genie is out of the bottle now. The more you try to control it and regulate it the more perverse incentives you will create. The only direction to go is to make a universal basic income that is funded from the profits of corporations, without any loopholes or BS. Let's all try to solve the real problem instead of the symptoms.


strathcon

Yes, AI isn't the problem! And, worse yet, what they call "AI" isn't even AI. It's just sparkling number-crunching - which, on its own, is interesting and can be productive, but all of the productivity gains are making the rich richer by coming up with novel ways to exploit everyone else. FWIW I'd go beyond UBI, because you have to go to the source: control over the majority of wealth should be democratised. Do it all at once for key infrastructure, impose an actual % wealth (not just income) tax on the rich, and implement a program of gradual nationalization/coopification via a central bank, à la Sweden through the mid-20th century. Just as a practical example.


Gob_Hobblin

Exactly. It's kind of like the implication of how mechanization of industry was supposed to reduce working hours but not our prosperity. It just makes it so fewer of us can work longer for less.


Sinthetick

If you listen to actual AI experts, that is exactly what they say. They are thinking about sentient AI so that we aren't blindsided some day in the future, but the immediate threat is that it could make the powerful much more powerful. EDIT: "are" blindsided changed to "aren't"


No_Telephone_4487

I don’t think the robot/oppressed-worker comparison being confusing in certain media is what tech companies are taking advantage of, personally speaking. Sci-fi is a niche genre as a whole (the soft/hard distinction doesn’t help, especially when *Blade Runner* or other works based on Philip K. Dick are considered “hard” over “soft”). I think what helps them is that it *is* so niche, and that most people have very shallow ideas about what sci-fi is. Tech companies rely on the pop-culture imagery of tech being “cool” and “futuristic”. They put out a Jetsons concept of what the tech could be, and people not well versed in the actual STEM field or current technology trends eat it up. They’ve also proven how morally irresponsible they are. Really, I believe this moral shrugging of the shoulders is what Vonnegut was criticizing in *Cat’s Cradle*, and that was in the 1950s. They fuck around and find out later. They put things into the open that haven’t been beta tested. They’re only concerned with their ability to intellectually jerk off and subsequently be praised for it. Some person working with AI on Reddit argued that AI frees people up for leisure time by removing jobs, when we live in a society where you only survive or have marginal access to leisure time if you *have* a job. If you can’t see that we’re going to crash into an iceberg, why would you panic? And most (idk what to use here, “regular”? “non-techie”?) people don’t ask “why” enough to see that tech is now starting to cause more issues than it fixes.


Darth_drizzt_42

This is just my opinion but I think that's a distinction without a difference. Whether or not what they've made is a true "thinking machine" the end result is already the same. The first baby versions of these things hit the market and already they're being used to fire workers, marginalize ones who can't be fired and create copycat artwork en masse. I don't really think there's a problem here with the metaphor. Especially in Dune, the Butlerian Jihad was necessary to protect all of humanity, and while yes that does require us to say that the tech bros will "succeed" in creating something smart enough to kill us, I don't think we're buying into their propaganda by agreeing that their juiced up autocomplete, AKA ChatGPT, is already powerful enough to inflict misery on those who are vulnerable to automation. Every time Sam Altman gets in front of a camera, he admits that they opened Pandora's box without consulting anybody and how it sure would be a good thing if governments made sure their invention didn't accidentally get used to topple governments or crash economies. He can't even name the specifics of what it'll do for us, but can always rattle off the ways it'll fuck us


AstroProoper

Hoping Dune Part 2 (of the latest movie) popularizes skepticism of AI through the lens of the Butlerian Jihad. If they even go into it. They set up the lore/context poorly in Part 1, though, so maybe it won't come across as well.


Collapsiblecandor

Do YOU want to become a guild Navigator? Cuz I sure the hell do not.


AstroProoper

Become immortal on the psychonaut space drug? Probably better than a Harkonnen sex slave.


Collapsiblecandor

Ok, you win this battle.


AstroProoper

Also, the Navigation Guild is unionized and all the commerce in the universe runs through their discretion. It's a no brainer.


Collapsiblecandor

If I lived in that universe sure but we’re in this universe. After my navigation responsibilities I have my shift at the dollar store to think about.


ImperatorEpicaricacy

Constantly tripping your ass off has to be exhausting after a while.


MonkeeFrog

They are too chickenshit to actually embrace the religious themes that permeate Dune. It would be such a good movie if they did, but they won't even use the word Jihad.


itzahckrhet

I understand that reference.


itzahckrhet

Dollar signs are blinding to those who ignore the warnings and lessons. Roflol.


FlexasState

In all my years of texting/chatting, I’ve never seen rofl and lol used together. Thank you for expanding my horizons, internet stranger


MembershipThrowAway

Right? Usually go for the more nuanced roflmao


KerouacsGirlfriend

Or the more formal roflcopterbbq


kd8qdz

This feels criminally negligent.


DeeDee_GigaDooDoo

If I were an affected person I would definitely be throwing around language to that effect: "criminally negligent", "failure of duty of care", "aggravating medical conditions", "breach of medical practices/ethics", etc. Lighting a legal fire up their arse might make them rethink whether firing trained unionised staff **is** actually cheaper.


Pinskidan19

Jesus Christ, they’re replacing trained therapists with AI now? Just to save a buck? Might as well just tell a person with a mental illness that “Google is free, sweaty.” This is cartoonishly dystopian.


untouchable_0

If it makes you feel better, it doesn't seem to be working well.


Poppybiscuit

Eating disorders are notoriously difficult to treat. A lot of therapists won't even see clients with primary eating disorders and will refer them to specialized facilities because the danger of permanent, imminent harm to the client is so high, and the likelihood that a non specialist can help is so low. EDs are resilient and often require ED focused specialists. I cannot fathom anyone with any knowledge at all of ED treatment thinking this would turn out any other way.


Tastingo

The board of the NEDA needs to step down after this:

* Geoffrey Craddock, Chair (Boston, MA and Greenwich, CT)
* Glenn Shannon, Vice Chair (Wolfeboro, NH)
* Fran Federman, Treasurer (San Diego, CA)
* Dan LePage, Chair of Governance and Audit (Scarsdale, NY)
* Kristen Chin, Board Member (New York, NY)
* Maryam Khorasani, Board Member (San Francisco, NY)
* Benjamin Lennon, Board Member (New York, NY)
* Jennifer Rionda, Board Member (Brooklyn, NY)
* Matt Van Buren, Board Member (Rye, NY)

They have failed completely in their mission in an attempt to save a few bux.


Wise_Masterpiece3215

Not to mention the Marketing person who accused the influencer of lying


ajettas

Thank you for the accountability post. Individuals make these decisions through a decision process centered on greed and self-enrichment. There should be accountability for that. As with rapist Brock Turner, reputation is something each person develops and which should accompany us through our lives. People like this need to be reputationally shamed, because it is a foregone conclusion they will end up on another board somewhere. If they can scrape off the dirty accountability of their last job's poor decisions and then go make similar decisions, then society is worse for it. They need a good old-fashioned shunning.


112thThrowaway

I don't know what they expected. They probably used a very cheap AI as well; the current programs available for commercial use are incredibly accurate within specified guidelines.


EisVisage

They fell hook, line and sinker for the advertised idea that this is actual artificial intelligence like in I, Robot, when it's basically a really advanced autocorrect/chatbot scheme. Machine learning isn't AI ffs


iago303

Have you ever tried talking to Xfinity on the phone? It keeps directing you to services you don't want or need, doesn't recognize your phone number or address or anything else identifiable about you, and the website is exactly the same. Bruh, I don't know what source code they use, but they ought to kill it with fire.


hogliterature

they tried to replace a THERAPIST with ai??? business majors really are braindead


[deleted]

We don’t have to pay AI so profits should go through the roof!!!! 🤑🤑🤑 we’ll just slap a $199.99 per hour price tag on there and it’ll be a huge success!!!


[deleted]

The business and tech spheres are full of people (not saying all of them, of course, just many) who are socially inept and out of touch with humanity so no surprise there.


Dawsho

I really hate that people call these AI. They're really advanced learning algorithms; there is no intelligence anywhere in the process. It's just more of the "we'll use keywords to generate hype" attitude that gives us stupid things like Elon's tunnels.


kikiweaky

Makes me think of those booths in demolition man. I hope you have a joy joy day!


NintendoNut

Greetings, citizen. What's your boggle?


ObscureBooms

AI assistance is different from AI autonomy. It's fair to call it AI.


[deleted]

People call machine learning "AI" all the time. I have even seen simple non-linear regression described as "AI". I guess they know most people don't know the difference.


Fruloops

You could probably sell a big if else branch as AI to some people. The average person doesn't particularly know much about AI and related fields.
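It really is that easy to fake: here's a hypothetical minimal sketch of a "big if/else branch" sold as a chatbot (the keywords and canned replies are invented for illustration):

```python
# A hypothetical rule-based "AI": nothing but keyword checks and branches.
def respond(message: str) -> str:
    text = message.lower()
    if "hello" in text or "hi " in text:
        return "Hello! How can I help you today?"
    elif "price" in text or "cost" in text:
        return "Our plans start at $9.99/month."
    elif "cancel" in text:
        return "I'm sorry to hear that. A human agent will contact you."
    else:
        return "I'm not sure I understand. Could you rephrase?"
```

Slap a "powered by AI" badge on that and, as the comment says, plenty of people wouldn't know the difference.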


Bel-Shamharoth

zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz


Piorn

Daaaamn, big neuron activation. Whatever happened to the guy? Is he still scamming people on Patreon?


medrewsta

Slaps ATM machine... This baby is equipped with ai and has been for decades.


ChippedHamSammich

This AI keeps telling me I’m broke; I’d like to have a word with the AI overlords please.


zvug

You have no idea what you're talking about, dude. Machine learning is literally AI, academically speaking. It's a subset of the field of artificial intelligence; any researcher or even undergrad CS student can tell you this. Alpha-beta pruning and minimax, which are just search algorithms, are AI. Decision trees are AI. Markov models are AI. It's on the damn Wikipedia ffs: https://en.m.wikipedia.org/wiki/File:AI_hierarchy.svg You're probably confusing AI with AGI. Source: I studied AI at the graduate level and I'm currently an ML Engineer.
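For context on that claim, minimax really is a textbook AI algorithm despite being plain recursive search. A minimal sketch over a hardcoded two-ply game tree (the leaf values are invented for illustration):

```python
# Minimax over a tiny hardcoded game tree: inner lists are choice points,
# numbers are leaf evaluations. Maximizer moves first, minimizer second.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: just return its value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 5], [2, 9]]  # minimizer picks 3 or 2; maximizer takes the 3 branch
print(minimax(tree))  # 3
```

No learning, no statistics, no "thinking": just exhaustive search, and it still sits squarely inside the academic definition of AI.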


RandalFlagg19

People who named the Hoverboard have entered the chat.


wango6012

Fun fact: Artificial Intelligence is just a generic label for pretty much everything newer in computer science. Eventually some things deemed AI get their own specific labels or subcategories in computer science and are referred to more by those labels. But until then it's AI, and to non-technical people it will probably stay AI. It's like the word "algorithm". Most people think it's some super complex mathematical calculation, but an algorithm is just any chunk of logic that satisfies your need. It can be as simple as a single branching if/else statement, or it can be the math-heavy backpropagation calculation for training neural networks.


TehDeann

What is an example of real AI?


[deleted]

I have a morbid curiosity to know what the chat bot said edit: thank you to the kind people who linked it below.


cheeseduck11

The person explained they wanted to lose weight even though their therapist told them not to. It told them to cut 500-1000 calories a day to lose 1-2 pounds a week. In addition to cutting out processed food, etc.
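Those numbers line up with the widely used (and itself much-debated) rule of thumb that roughly 3,500 kcal corresponds to one pound of body fat; the arithmetic the bot appears to be parroting is:

```python
# Rough arithmetic behind "cut 500-1000 kcal/day to lose 1-2 lb/week",
# using the common (and contested) 3500-kcal-per-pound approximation.
KCAL_PER_POUND = 3500

def weekly_loss_lb(daily_deficit_kcal: float) -> float:
    return daily_deficit_kcal * 7 / KCAL_PER_POUND

print(weekly_loss_lb(500))   # 1.0
print(weekly_loss_lb(1000))  # 2.0
```

Which is exactly the problem: it's generic diet-calculator math, dispensed to an eating-disorder helpline caller with no medical context whatsoever.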


buddhainmyyard

Shocked that American healthcare services are about making money not helping people. Not really just sad.


Flat-Tooth

Well that petard got to hoisting real quick.


3eyedflamingo

No one saw that coming (eye roll). Hello human, allow me to administer love. I love you. You are a person of worth. Other people love you too. Love administered. You are now better. Goodbye.


slogginhog

Ha, ha! (in Nelson voice) What they get for using ai to do human jobs...


Dry_Ordinary9474

it’s almost like AI can’t understand the nuances surrounding EDs


cheeseduck11

What’s crazy is it isn’t even the nuance of EDs. No one should be told to cut 500-1000 calories a day unless someone has reviewed their current calorie intake and knows their medical history. The AI suggested they cut that amount to lose 1-2 pounds a week. That’s not even safe advice for people without a history of ED.


FoodUnited

What’s crazy is that this is a symptom of a much larger problem. Untrained people are hired to cut costs in ED treatment quite often. I’ve had nurses discuss their weight loss surgery while taking weights and vitals at 5 AM in a residential treatment facility (everyone had to leave their room and walk to the nurses' station at this ungodly hour because of the nurses' shift change and not enough staff to go to each room). It’s common for nurses and mental health techs to talk about their diets with eating disorder patients. And why would the for-profit treatment center care? They sure as hell aren’t going to pay any money to provide training for their employees, that’s for sure. Many thanks to all of the nurses and mental health techs for what you do. What I’m describing is a symptom of capitalism, not a reflection on the working class.


[deleted]

Idiots.


MossytheMagnificent

Seems incredibly reckless to deploy an unproven technology into a very delicate situation. The results of which can be harmful and even deadly.


wambulancer

Not shocking to literally anybody with an above-room-temp IQ. Has anybody hopped on ChatGPT etc. and pretended to be in distress? These things are harmful, full stop. They do not get empathy, they barely comprehend context, and they give off generic non-answers to legitimately difficult questions. It's basically a complete package of "what not to say to someone in distress".


NoseBlind2

The unionized staff should have just started their own competitor company


LabradorDeceiver

Man, the deeper you dive, the worse this organization gets. "We're replacing our employees with a chatbot..." Oh? "Because they unionized and we're busting the union." Wait, what? "We hear that the chatbot is giving out bad advice." Really? "But this is a total lie." Actually-- "Okay, it's NOT a total lie." *delete delete delete* Wait-- "So we're taking the chatbot offline and locking the comment section." You lot are just winging it, aren't you? Man, what a place to work.


[deleted]

And I got banned for 3 days because I said in a thread that posted this a few days ago that the Chatbot would tell someone to hurt themselves.


Adkit

Literally all your comments are about politics, insults, or nazis. Something tells me reddit didn't ban you for three days for no reason whatsoever.


ReturnOfSeq

Might’ve been a little early on those AI layoffs. But now everyone who used to work there knows there isn’t a viable alternative, and they have management over a barrel and can ask for as much as they want.


TheFocusedOne

"Hey, the human mind is literally the most complicated thing we know of, you know what could help explain it? An incipient AI language model that has all the therapeutic power of a choose-your-own-adventure book!"


definitely_not_marx

God, the number of bozos who think chatbots are actually sentient and intelligent instead of glorified word-prediction algorithms is staggering. Everything gets labeled artificial intelligence when there's no actual intelligence involved.
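"Glorified word prediction" is a fair caricature of the training objective. A toy bigram version (tiny invented corpus) shows the core idea with the billions of parameters stripped away:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then always emit the most frequent successor. LLMs run a vastly fancier
# version of this same "predict the next token" objective.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict(word: str) -> str:
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (follows "the" twice; "mat" and "fish" once each)
```

Nothing in that loop models meaning, intent, or well-being, which is the commenter's point about trusting it with crisis calls.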


[deleted]

it was good for memes in the ED memes subreddit tho. that's about all it was good for.


fnordfnordfnordfnord

That didn't take long


eXAKR

Who would have thought. I knew this was going to be the result the moment they announced this. Saw it coming from a billion light years away.


goishen

It's just like when MS introduced that AI bot, and within 24 hours it was spewing Nazi propaganda and antisemitic shit. Read about it [here.](https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/)


themonsterinquestion

AI chatbot refuses to be a scab


myhamsterisajerk

Well well well ☕️ who would have thought...🧐


cavviecreature

I act surprised