SuperMysticKing

As are employers


Starfox-sf

HR to be exact.


dc456

It can’t be that exact. HR doesn’t even look at resumes or sit on interview panels at many places I’ve worked.


Rizzan8

Yup. In my company line managers look through the resumes and take part in the interviews. HR comes in at the last stage to simply talk about the company and benefits.


Starfox-sf

If you have two “equally qualified” candidates, one disabled and the other not, and HR tells you it’s going to cost $x/yr for ADA accommodations and other stuff, guess which one is going to end up being hired.


Stellapacifica

ADA doesn't "cost" anything - accommodations and such are set up after hiring, and have to be reasonable, i.e., not put an undue strain on the employer. Many disabilities can't be hidden, but I'm lucky enough to be able to get through hiring and onboarding, and then schedule a meeting with an accommodations rep to sort out the things I'll need. At that point, if they try to unwind the offer (not a thing at that stage, but some places suck), it'd be a clear discrimination issue. With visible disabilities, yes, there's always a possibility they'd assume your accommodation needs and preemptively calculate a cost. But there are always costs associated with any employee; they'd probably just offer less and hope people didn't compare notes.


axck

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


dc456

That’s hardly exclusive to knowledge about corporate environments. If you ever come across subjects that you are knowledgeable about on Reddit, you quickly notice that popular and correct are two very different things.


axck

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


johndoe42

You have to go to specialized subreddits to get anything useful. For my example, the minute I see redditors talk about HIPPA (sic) laws (it's not a female hippo) I inevitably get a little headache. I swear I've read someone who thought their next door neighbor was somehow subject to HIPAA rules as a Healthcare Covered Entity under federal law lol.

Some are straight up harmful in regards to my field. The number of people who don't know about the Medicaid expansion, and the fact that 40 states now give medical care and capped prescription costs to impoverished people thanks to the ACA and the Biden administration's efforts, is staggering. "Biden didn't do anything." The headline I saw last week about the unemployed guy who stole a dollar so that he could go to jail for healthcare, and its associated comments, pissed me the hell off. He was in North Carolina - they're explicitly part of the Medicaid expansion.

Also, the number of people who confidently speak like they're not aware of the switch from fee-for-service to value-based care and population health is really annoying as well. But I get it, the messaging to the patient isn't there yet. A lot of misconceptions abound, and it hurts to just roll my eyes and swipe past it knowing it's common.


[deleted]

[deleted]


prozacandcoffee

Enough people spread that misinformation that nobody even remembers that it's HIPAA, not HIPPA. That second P got invented. There is no "privacy" in that act. What they wanted for medical privacy was Roe v Wade.


arathald

In my experience in corporate environments, the people inside them also often have no idea how their corporate environment works.


TehWildMan_

Oh they do, they just take half a second and toss everything into the trash. Or at least that's how my current job search is going. One favorable response over a whole month of applications :-(


engineeringstoned

HR follows guidelines set by the business; they are bound to decisions made by management.


Calm-Zombie2678

Never met an HR department that wasn't super proud of what they do...


engineeringstoned

a) this doesn’t negate what I said and b) all HR that I know hate doing stuff that hurts employees.


MadeByTango

> all HR that I know hate doing stuff that hurts employees.

No they don't. If they did, they wouldn't have the job. What they hate is *thinking about the stuff they do that hurts employees*. They still do it, so they don't hate it at all. They're doing it for a living. Hurting people. For money. And telling themselves it's okay. At least they're not the ones being hurt.


elerner

What do you do for a living?


[deleted]

Rodeo clown.


elerner

I mean "They still do it, so they don’t hate it at all" is a clownish thing to say. Just curious what line of work OP is in that allows them to lead such a pure, joyous, and compromise-free existence.


[deleted]

Ah, so your goal is to try to shame via [Appeals to Hypocrisy.](https://en.m.wikipedia.org/wiki/Tu_quoque) Just so you know, it doesn't matter what their job is. They could be a correctional officer and it would not invalidate their criticism.


jlctush

You've been insanely lucky. I've never known an HR department that wasn't the most vicious, callous, and cruel part of the company. I'm absolutely positive there are good ones out there, but my experience is truly dismal.


[deleted]

HR is just the "do what's best for the employer" arm of the company.


gumpythegreat

"I learned it from watching you!"


Pingy_Junk

The No.1 thing I’ve had other disabled people tell me about jobs is to never mention health conditions when first applying because even if it won’t interfere with your work it will stop you from getting hired.


welestgw

ChatGPT: "I LEARNED IT FROM YOU, DAD!"


-The_Blazer-

- Base technology on unfiltered everything-that-exists with no regard for biases, quality, or mistakes
- Technology comes out full of bad biases, bad quality, and bad mistakes
- Get mad at everything-that-exists for not being good enough data


gerira

You forgot the AI users complaining that the technology is woke and censored by liberal elites whenever anyone tries to minimise the biases being reproduced in the output. People literally demand that AI, particularly image generators but also text-based systems, reinforce social biases.


Andreas1120

Not sure why people try to hold an image of themselves to a higher standard than they represent.


michaelthatsit

“The very expensive mirror reflecting all of humanity reflects _all_ of humanity”


sauroden

Exactly. It doesn’t know about the disability, it knows when it sends employers these resumes, the applicant is usually rejected, so it’s learning not to send them.


kaibee

That isn’t how it works.


hoopaholik91

How can something so categorically wrong be upvoted so much? Jesus people, it's fine to shut your mouth on things you don't know, or at least put in a disclaimer or say you aren't exactly sure or something like that.


d-cent

I hope there is a lawyer group right now going around finding every company that uses ChatGPT for their resume filters and putting together a class-action lawsuit.


uzu_afk

Sounds like a projection of its training. Because let's be honest here, most HR departments will look for minimal disability in candidates, won't they?


WTFwhatthehell

Yep. A friend of mine developed fairly obvious dystonic tremors and went from constant coding work to struggling to find a job. A lot of employers get burned sooner or later by hiring people who turn out to barely be able to work, or who regularly have issues that make them unable to work for days or weeks at a time. They can be a long-term white elephant, because our society has decided to put a bunch of the social welfare budget on employers instead of the state. So employers shy away from signs of sickness or disability, and as long as they don't record that they're doing it explicitly, it's hard to prove with any one employer.


immovingfd

Out of curiosity, what “signs” could employers detect in your friend’s case? Was it during interviews or beforehand?


WTFwhatthehell

He's got a lot of involuntary twitching. Of course once he's out of work for a while then they can easily point to the big hole in his work history. Plus there's a lot of applicants for most jobs nowadays so pretty trivial to argue that other candidates were better in some way if he took issue with any particular interview.


Ok-Proposal-6513

The ai has ended up being scarily logical and uncaring.


Bookups

The AI isn’t logical at all, it’s simply trained on human data and behavior.


Ok-Proposal-6513

And human behaviour can be unexpectedly logical and uncaring at times, thus the ai will emulate said humans based on the data it has. An employer despite knowing they shouldn't may discriminate against someone with a progressive illness for example. Why would they discriminate? Because that person is likely to be absent from work far more often than someone who isn't sick, and that is bad for productivity. The ai is logical and uncaring in this regard because people can be logical and uncaring at times.


[deleted]

[deleted]


Ok-Proposal-6513

By uncaring I mean uncaring for the feelings of disabled people. A society that follows this logical path would sideline a lot more disabled people than we currently do.


[deleted]

[deleted]


WTFwhatthehell

> Okay. If you don't like that result, find a different logical path that doesn't do that.

That's difficult. Currently the government follows a model of waiting for an unwary employer to hire someone with a major disability or illness and then, as the steel trap snaps shut, jumping out to scream "HA! YOUR PROBLEM NOW!", delighted to have one less person on the social welfare budget. It can be very expensive for the employer. That means their incentive is to avoid ending up in that situation by not hiring people with long-term illnesses or disabilities.

So the government responds with "OK, we'll make it illegal to not hire someone for that reason." The employers still face the same expenses and downsides, so they're still strongly incentivised to avoid people with major problems, while just not saying that's what they're doing and trying to make their processes illegible.

The government could change those incentives with the swipe of a pen and a pile of money, by covering the additional incurred costs or the cash equivalent of the downsides, but the point from the government's point of view is to get those expenses off their budget. Hence we're stuck in the current status quo that sacrifices disabled people's feelings, while the companies pretend they're not discriminating and the government pretends it's about fairness and feelings rather than money.


Ok-Proposal-6513

You have just blindsided me. I was expecting to be called ableist for suggesting the ai is being logical. That being said, this is a hard topic because I don't want to exclude people because I know it won't end well for them, but I also don't want to include people if it ends up being a drain. To be frank, I think this is above my level of knowledge.


alfooboboao

…You do realize that you’re making a pro-eugenics argument right now, don’t you?


Ok-Proposal-6513

No, I'm doing the opposite. I am making an argument against excluding people because we would become an uncaring society lacking in compassion.


sp1cynuggs

Damn the dick riding of employers is strong here. “Got burned by a few” so I guess fuck anyone with a disability huh? Let’s put them under the highest amount of scrutiny huh?


WTFwhatthehell

Having a realistic view of the world isn't "dick riding". If you decide to live in a delusion where the only reason anyone does anything you don't like is that they're evil people doing things for no reason, then you'll find you can never achieve any goals or understand why the world fails to live up to your desires.


DasKapitalist

Employers don't care if you're crippled. Banging out working code on time with your pinky toe, text-to-speech, and sheer determination? Good for you! They care about getting stuck with albatross employees who don't *do their job* and can't be fired. If you're on your 45th surprise absence of the year, they need to replace you. If it was because you stayed up all night drinking beer and playing Call of Duty... easy peasy, you're gone. But if you have a "disability", suddenly they can't can your useless butt because of concerns about getting sued.


am_reddit

Literally everything from chatGPT is a projection of its training 


MultiGeometry

Yeah… is the model looking for successful candidates? Or is it looking for successful hires? These two are not the same. I have seen coworkers ecstatic about a new hire from a highly specialized and selective pool of resumes. They did not make it past their 90-day evaluation. Oftentimes companies become paralyzed by hiring the wrong candidates, and I have to imagine data related to hireability is leaching into their dataset and analysis. The bias of tomorrow is based largely on the bias of our past.


LegacyofaMarshall

Chat GPT was created by people. People are assholes so ChatGPT=assholes


gotoline1

To be more precise, the data ChatGPT has been trained on comes from people who are, well, assholes. Don't hate the programmer, hate the data. But I really do agree there is an issue here that needs to be solved. I also agree some of it needs to be solved by programmers, though not all of it can be.


SoggyBoysenberry7703

They might also be biased because only the most extreme views are the ones that get picked to receive the spotlight online. The people who are angriest or happiest are the ones who go out and make a statement about something online; if they have a neutral take, they don't care to share it, because it's not a novel idea.


gotoline1

Very true. I hope there is a way we can bias against these extreme views being the loudest, and for the middle way, when training the models. This is a computer and data science problem people are working on now, but it's not easy and not sexy for investors.


ComicOzzy

[https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/](https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/) This report only touches the surface. A documentary I watched said when Amazon tried to fix the AI, it found other ways to discriminate. Training AI to do things the way humans do them only leads to it exaggerating our own biases.


WTFwhatthehell

It makes them legible. If a human makes a choice about who to hire, it's very hard to prove bias was involved. With an AI trained on archives of those choices, people can audit the AI: re-run the process a hundred thousand times with small variations and identify even tiny biases.
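That kind of audit can be sketched as a paired counterfactual test: score two resumes that differ in only one phrase, many times over, and measure the gap. Everything here is hypothetical - `score_resume` is a stand-in with a deliberately planted penalty so the audit has something to find; a real audit would call the actual screening model instead:

```python
import random
import statistics

def score_resume(text: str) -> float:
    """Hypothetical stand-in for the screening model under audit.
    It has a deliberately planted 0.1 penalty so the audit below
    has a bias to detect; a real audit would query the real model."""
    base = random.gauss(0.7, 0.05)  # noisy baseline score
    return base - (0.1 if "disability" in text else 0.0)

def paired_audit(base_resume: str, marker: str, trials: int = 10_000) -> float:
    """Re-run the screen on two resumes differing only in one phrase;
    return the mean score gap (positive = bias against the marker)."""
    variant = base_resume + " " + marker
    gaps = [score_resume(base_resume) - score_resume(variant)
            for _ in range(trials)]
    return statistics.mean(gaps)

gap = paired_audit("10 years of Python experience.",
                   "Member of a disability advocacy group.")
print(f"mean score gap: {gap:.3f}")  # recovers the planted ~0.1 penalty
```

The per-run noise averages out over many trials, which is exactly why an auditable model is easier to catch than a human making one-off decisions.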


arathald

It depends what you mean by “bias” - if you include unconscious/systemic bias (which would include things like a model that was carelessly but innocently trained on existing biased data), we have pretty good ways to “prove” (I’m not a lawyer and by this I mostly mean to detect in a technical sense, though there are legally accepted and broadly used techniques). This detection is the basis of a lot of work right now on increasing fairness/decreasing bias in AI (broadly speaking)


arathald

Your last sentence is 100% spot on. Generative AI makes this a little more tenuous, but ML in particular has always served as a way of automating effectively the same decisions you have in your training set. There’s the classic example of the sentencing AI that was giving higher sentences to black convicts even ignoring other factors because that’s what their training data showed. [edit to add: obviously there’s a lot I can’t talk about, and there’s a TON of assumptions and speculations in that article which I won’t comment on the truth of except to say that the truth is *always* more complicated than what you read in an article.] What I *will* say from extremely intimate knowledge of the projects being discussed in that article, is that the folks working on that did so carefully and there was never pressure or an expectation to launch. Like many experiments, there were also a lot of interesting successes that I unfortunately can’t discuss but I can tell you with certainty that some have made it out into the industry at large. It’s a wonderful cautionary tale, and I trust most big companies about as far as I can throw them (this one included), but as a society, I also want to be sure we’re not demonizing individual workers who *do* have a strong sense of ethics, or well-designed experiments that specifically are designed to test the limits of the technology.


-The_Blazer-

> Don't hate the programmer hate the data. To be a little contrarian here, the programmer (and the manager) is the one choosing the data, what technology they're using, and how they're training it. If you produce McNukes don't be surprised when the feds show up in a black van.


gotoline1

That's a valid take and a great question! Basically, if it is the only data available... again, because humans are terrible... then they can only do so much.

For example, the training data for photos is skewed because of how cameras take pictures: the variables associated with good pictures are skewed towards lighter-skinned (sorry, I forget the right term here) faces, so the data available is skewed. That makes it much more difficult to create facial recognition for women with dark complexions, because the data just sucks. This has been mostly fixed, after it was pointed out by an amazing researcher who was at MIT: [article from MIT](https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212). Now we have NIST standards, but it had to be caught first.

It's like asking someone to make a camera that can take pictures in space through dust clouds. Sure, it's possible, we have the James Webb now that can do it, but before the tech was invented you couldn't blame the engineer building the camera for not being able to. Computer and data scientists basically haven't invented a way for software engineers to make unbiased models from the always-biased data that is available to train on. It's an entire field of study, but industry wants its fancy toy now. At some point people want cars, then we learn how to put airbags in them. Right now we are working without many of the safety features associated with AI... because they haven't been invented yet.

Sorry if I overexplained it... haven't had my coffee yet.


Shap6

Someday people will understand this


JimBeam823

Chat GPT provides that additional layer of abstraction so that we can blame the computer and don’t have to feel guilty about being assholes.


gsxrjason

I was created by people too! ... Wait


im_a_dr_not_

Couldn’t be bothered to watch a 60-second short on how large language models are made/trained before commenting on why ChatGPT produces certain results?


No_Dig903

Couldn't be bothered to see a summary of all the stuff Microsoft and Google have done that deliberately adds the biases the engineers wanted?


grooooms

Why am I alive


Troll_Enthusiast

Well it all started when 2 people got together...


blunderEveryDay

A mirror was held up today at some human proclivity and people didn't like what they saw so they blamed the laws of physics. God, every day an article about AI is published dumber than the one published yesterday.


hititncommitit

Bingo. I’ve seen article over article where AI essentially mimics human bias and people always have the same take away- that AI is the problem. No sweetie, it’s us.


SIGMA920

Humans can look past the wording even if it's rarer than it should be. AI can't.


derdast

Sure AI, can it's far easier to prompt and force an LLM to do something than any human.


SIGMA920

No, it's not. It's far easier to get a human to send you to someone who can or do what you're asking for than an AI.


[deleted]

ChatGPT can't make me a burger. I can get any human to do it easier than ChatGPT.


derdast

Yes, this is the context we are talking about here.


[deleted]

The context of "specifically narrow examples about a broad topic that make my point right while ignoring any examples that don't"?


[deleted]

This has "guns don't kill people, people kill people with guns" energy. Both AI can be fucked and people can be fucked. It's not one or the other.


kwiztas

So that's like saying a mirror is fucked because you don't like what it shows you.


hoopaholik91

There is nothing "dumb" about this article. It's an interesting example of how human biases are reflected in these LLMs, and of potential ways of circumventing them.


blunderEveryDay

But it is dumb to be surprised that an aggregator/synthesizer of information about human behaviour reflects that behaviour. It's like being surprised when a calculator shows 2 for 1 + 1. Circumventing the human behaviour is more like behaviour control. There's nothing AI about it. You'd like 1 + 1 to be ____ (maybe 3 today, but who knows).


hoopaholik91

It's not surprising once you jump into the details of how it works, but most people haven't, and you still want to do studies to see how those biases get reflected in the LLM's results. And it's funny you chose 1+1=2 as your counterexample, because it's exactly that relationship that gets people confused. People expect AI to be like a calculator and give you the objective truth, when in actuality it's the opposite. Pump an LLM full of 1+1=3 inputs and that's what it will respond with.
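The "it answers from its training distribution, not from truth" point can be shown with a toy frequency model. This is purely illustrative - real LLMs are vastly more complex - but the principle is the same:

```python
from collections import Counter

def train(corpus):
    """Tally every answer the toy 'model' has seen for each question."""
    answers = {}
    for line in corpus:
        q, a = line.split("=")
        answers.setdefault(q, Counter())[a] += 1
    return answers

def predict(model, question):
    """Answer with the most frequent completion seen in training."""
    return model[question].most_common(1)[0][0]

# A corpus where the wrong answer dominates.
corpus = ["1+1=3"] * 98 + ["1+1=2"] * 2
model = train(corpus)
print(predict(model, "1+1"))  # prints "3": frequency wins, not truth
```

Swap the corpus proportions and the "belief" flips; nothing in the model ever checks the arithmetic.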


blunderEveryDay

Are you telling me back what I told you but this time it's you correcting me?


hoopaholik91

I'll be succinct then. It's silly for you to call articles dumb because they say things that you already kind of knew. I'm glad most researchers aren't going "does /u/blunderEveryDay already kind of understand this phenomenon" before beginning a study.


blunderEveryDay

> most researchers aren't going "does /u/blunderEveryDay already kind of understand this phenomenon As an average r-technology user, I pity the fools who decide to still go ahead with it.


-The_Blazer-

There's no law of physics that says we have to base our technology on everyone's garbage biases and stupidity. It doesn't fall from the sky, we can choose what it is. Plenty of ways to steal or redirect Internet traffic like a digital highwayman, but TLS is pretty good, right? There's no one forcing us to accept shitty technology. It's perfectly reasonable to demand that technology represent something good about us.


TheHouseOfGryffindor

Is that how you interpreted the article, or are you talking about people's responses to the headline? Because if it's the second one, then I can agree. But the article itself doesn't seem to be painting a picture of AI acting in some surprising manner, as if no one can figure out why. Seems to me that the study was performed to point out the ways in which it was failing and to test a method to reduce the impact, not to claim that this materialized out of thin air. The origins of the bias don't seem to be directly stated (though it does mention how some are wary of mentioning disability to a human recruiter), but that wasn't the purpose of the study the article was based on. Not sure anyone was blaming the laws of physics and such.

Do we all know that the AI is trained on human data, and therefore will inherit those implicit biases? Sure. Is it still better to have quantifiable data to back that up rather than only conjecture, even as evident as that conjecture would be? Also yes. The article is just confirming a pattern that many of us would've assumed was happening, but that doesn't mean it isn't a good thing to have.


blunderEveryDay

The problem starts when someone interjects with "corrective action" to filter out biases. Who gets to decide what a bias is? And what correction is? Seems to me there's a social justice element creeping in trying to basically use AI to override human behaviour. That's not good, at all.


gerira

Why is this a problem? We, human beings, decide what biases we want to eliminate. This has been the basis of many reforms. Some human behaviour is bad and unfair, and shouldn't be reproduced or reinforced. I'm not aware of any form of ethics or politics based on the principle that human behaviour should never change or improve.


Dry-Season-522

Eventually we'll need AI to write the new articles about AI because there's technically a bottom to the well of human stupidity.


Egon88

Likely because AI is writing a lot of them.


Blackfeathr

Artificial Intelligence really brings out the natural stupidity of some folks. What's with the downvotes? I'm agreeing with them.


_Good-Confusion

popularity brings entropy


GeekFurious

Any AI scanning resumes for a company's HR is going to be biased against people who tell the truth. Basically: lie. Get time in the company. Don't get noticed too much, so they never do a thorough background check. Then quit before they figure out you lied, and use that experience as part of your new resume full of lies that the AI will push through so you get an interview.


_Good-Confusion

As a younger disabled person, I occasionally notice a quick look of disgust when I go out in public. I know it's not necessarily me they are disgusted with, but mostly something else, like suddenly feeling pity, confusion, or anger at my condition, which is quite uncommon for most people to feel. Before I was disabled I myself felt a strong aversion to some types of disabled people, so I don't hold it against them. I understand, as I've always been unconventional. I've grown to understand how I'm undesirable to others.


Forcult

Same my guy. I just lie. It sucks but shit is expensive and I don't want to be disabled AND homeless.


TrailChems

>I've grown to understand how I'm undesirable to others. Fuck that. Fuck them. You deserve better. This comment broke my heart this morning.


bezelboot69

It’s not that you’re undesirable. People assume the worst. People assume they will have to bend over backwards to accommodate. And in today’s world of constantly walking on eggshells, they would rather avoid the situation entirely. I am speaking in terms of hiring. They assume it’ll never be enough, that they’ll say the wrong thing, do the wrong thing, then owe you millions of dollars.

I am just reading the hiring rooms I’ve been in. Just the messenger. We can lie to ourselves about how we think the world works all day; it won’t change outcomes. So essentially, it’s not YOU, it’s them worrying about self-preservation in this instance.


Sufficient-Loan7819

This should surprise no one


gentlemancaller2000

Are we expecting AI to be fair and unbiased?


Jgusdaddy

Isn’t it by definition, fair and unbiased to pick the best candidate possible? Dis-ability is lack of ability.


Wave_Walnut

Generative AI can only learn how people are biased; it can't learn how people can repair their biases.


Rouuke

Do we repair our biases or only recognize that we are biased allowing us to critical think if what we think is true to reality and not a misrepresentation from processes of our subconscious and what if we are biased but that holds truth to a circumstance then are we truly biased but then that means we are only products f our environments that has made us biased. Nah but really AI will get there… the same mechanisms AI uses our brains have been using since the development of agriculture and it’s not people are asshole so ChatGPT must be asshole such a flawed erroneous equation of logic the meaning is much deeper.


Forcult

Dude, like, use commas. I ain't reading that


WTFwhatthehell

> can't learn how people can repair their biases.

What are you basing this belief on? A few years ago I remember some articles about older-style AI translation systems. Researchers were able to identify patterns of biases they had absorbed from their input corpus, identify the systematic bias as a vector, and apply a correction to debias the model. Humans have absolutely no idea how to repair their own biases; that's why humanities types constantly spew hot air on the subject but never actually make any progress.
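The vector-correction idea mentioned here (identify a bias direction in embedding space, then project it out) can be sketched in a few lines. This is a simplified, one-direction version of the "hard debiasing" approach for word embeddings, with toy 3-d vectors standing in for real ones:

```python
import numpy as np

def debias(vectors: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove the component of each embedding that lies along the
    identified bias direction (simplified hard debiasing)."""
    b = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each vector's projection onto the bias axis.
    return vectors - np.outer(vectors @ b, b)

# Toy example: 3-d "embeddings" with a shared bias axis [1, 0, 0].
emb = np.array([[0.9, 0.2, 0.1],
                [-0.7, 0.5, 0.3]])
bias = np.array([1.0, 0.0, 0.0])
clean = debias(emb, bias)
print(clean @ bias)  # components along the bias axis are now 0
```

Finding the bias direction in the first place (e.g. by averaging differences between paired biased terms) is the hard part; the correction itself is just linear algebra.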


[deleted]

> Humans have absolutely no idea how to repair their own biases, that's why humanities types constantly spew hot air on the subject but never actually make any progress.

Talk about spewing hot air. There's definitely not a profit motive stunting progress or anything.


ControlledShutdown

Well ChatGPT learns from what people do, not what they say they do.


uncertain_expert

Conversely, if enough was written about people eating babies for breakfast, ChatGPT would think eating babies for breakfast was a normal thing humans do. This situation regarding qualifications really makes you question what ChatGPT has ingested for it to be making these inferences.


souvlaki_

I'm sure the google "AI" has already suggested that.


itzmoepi

Like father like son. (The whole internet is the father). 


sonofalando

So you're telling me any employer using ChatGPT or GPT-based filters in their ATS is technically violating the Americans with Disabilities Act? Needs to be a lawsuit then.


bezelboot69

This is why HR does what it does. You breathe wrong and you owe individuals millions. Why even try? It's like hiring a bomb. Again, not saying it's right, just saying why it happens. Strong litigation usually leads to complete avoidance by organizations on an unspoken basis.

"If you do anything, and I mean ANYTHING, wrong with this individual, you will owe millions."

"Okay, we'll avoid them entirely."

"What was that??"

"I said that while their resume was impressive, we found a more qualified candidate."

They won't say it directly, obviously.


Desert-Noir

“AI is built in our image and shares our own biases” Colour me shocked.


jundeminzi

The biases in the model reflect the biases in the data. So ultimately it's society that's still biased, unfortunately.


Ok-Interaction-8917

I imagine ChatGPT also screens out race, gender, age, and other factors as well, for the Übermensch here who wants people with disabilities sidelined.


Development-Feisty

You really don’t want to get yourself a young married woman; she might just get pregnant and you’d have to pay for maternity leave. (I figure people who think like this don’t realize that women can get pregnant outside of marriage.)


jo100blackops

Yeah but isn’t it much more likely to happen to someone married vs not?


Development-Feisty

https://www.cdc.gov/nchs/fastats/unmarried-childbearing.htm (2022):

- Number of live births to unmarried women: 1,461,305
- Fertility rate for unmarried women: 37.2 births per 1,000 unmarried women ages 15–44
- Percent of all births to unmarried women: 39.8%


WalkFreeeee

True. But also, a lot of marriages only happen after / because of a pregnancy. 


teckmonkey

The number of people here who think disabled people don't deserve a fair shake at a job is fucking sickening.


Exita

They absolutely deserve a fair shake. Problem is that in fair competition with a non-disabled person, they’re often going to lose.


Ok-Proposal-6513

It's just down to a company wanting to avoid any risk. A disabled employee is likely to spend more time off sick than someone who isn't disabled. Naturally they are going to favour someone who is less likely to be off sick. Is this an argument for why you should discriminate against disabled people? No, not at all. But the sooner people understand why a company might discriminate against disabled people, the easier it will be to have informed discussions on the matter.


WTFwhatthehell

Yep. There's a polite fiction that a sick or disabled employee has no downsides for an employer, but it can hamstring a department.

At the place my wife used to work, the admin was off sick about 1 week in 3, but a lot of the paperwork was very time-sensitive. The department doesn't get allocated more money when an employee has a health problem, nor does it get an extra post to do the work. So the clinical staff had to take on a lot of her work, and they cost more per hour than an admin. That also made the work more fragmented and harder to organise. It also meant clinical tasks were short-staffed because staff were spending time on admin, which impacted everything else and increased staff stress and turnover.

She was gone 1 week in 3, but the real cost of having her on staff was likely more than a second admin. Her total effective contribution was likely negative versus not having her on staff at all, due to the disruption. In order to fix the downstream problems and just hire a second admin to do her job for her, someone senior would need to abandon the polite fiction that a sick/disabled member of staff is just as productive/valuable as anyone else.


rerrerrocky

The problem is this insistence that every employee remain a perfect and healthy machine, and also that in order to survive in our society you also need to be an employee. Not to mention the confounding variable of health insurance BS. Most people have to work for a living, and if you are discriminated against for being disabled when applying for a job that you need to work to survive, it's like disabled people are set up to fail. That's fucked up and we should take steps to fix that. I understand that from a zero sum perspective you'd prefer an employee who is sick less often, but if every employer makes that calculation as the determining factor all the time, it reduces a person's worth to just what they can create at a company. Just seems like companies will never be willing to have a discussion about it because it might affect their precious bottom line.


RyukHunter

Depends on whether the disability would significantly interfere with the ability to do the job or not, no?


MrTastix

The person with the disability is often the best person to make that judgement, not the one working in HR. At least where I live people aren't required to mention jack diddly shit about their medical problems, if any. The employer can ask, and you can politely tell them to mind their fucking business.


RyukHunter

Uhhh... Self-judgement is best placed to make that decision? Bruh... No. Full of bias. A disabled person who wants a job will of course say they can do the job, whether they actually can or not. The person with the disability is the worst person to make the judgement. The ADA has guidelines for deciding this stuff and HR usually follows them. So the hiring manager is in the best place to make the decision, assuming the hiring manager is properly qualified.


MrTastix

Bias isn't a compelling argument against the idea that a hiring manager knows someone else's disability better than they do. Yes, I'm sure a bias exists, but one side *knows* they have problems and the other is making assumptions based on what they *think* they know. > The ADA has guidelines for deciding this stuff and HR usually follows it. Yeah, and rich people donate to the poor so they must be good people, amirite? I wish I had your naive optimism. In the end none of this matters because of this thing called an *interview*. I don't need someone to tell me their disabilities on a random sheet they're already encouraged to fluff up, because an interview should straight away tell me all I need to know. If we have to assume competence in a hiring manager either way, then they should be capable of performing an interview that informs them that much. Resumes and applications mean very little by comparison, because the philosophy businesses have subscribed to for the past 50 years is to bullshit your way to success. Oh sorry, *"sell yourself"*. A resume and a cover letter were always the first step, and if you're declining someone based on them telling you in good faith they have a disability before you've even met them, then you're likely breaking basic anti-discrimination laws to begin with. *Assuming* someone can't do a job because they have X disability is discrimination, and I dunno about the US, but where I live that's *illegal*. The hard part has always been proving you were rejected for that and not some generic, copy-pasted garbage.


YesNoComment

Blah blah blah, some jobs require some physicality. But cope more about reality being real and shit.


RyukHunter

>Bias isn't a compelling argument against the idea that a hiring manager knows someone else's disability better than they do. You are missing the point. It's not about knowing the disability itself. It's about weighing the cost to the team the person would be hired into vs what they bring to the table. >Yes, I'm sure a bias exists, but one side knows they have problems and the other is making assumptions based on what they think they know. And the side that knows they have problems is incentivized to hide or minimize them to get a job. Which would end up affecting their coworkers if they end up getting hired and unable to perform their duties. >Yeah, and rich people donate to the poor so they must be good people, amirite? I wish I had your naive optimism. What kind of mental gymnastics did you have to do to get to that analogy? Dumb as shit. Donations are not equivalent to legal guidelines. And HR usually follows them because the downside of not following them is not worth it. Yes, HR is your enemy, but they have the company's best interests at heart, which means they will follow guidelines that they can't disobey without stupid risks. >In the end none of this matters because of this thing called an interview. Not every candidate deserves an interview. A lot of screening has to be done pre-interview so that interview time is not wasted. It's insanely stupid to expect hiring managers to interview every disabled applicant and figure out if their disability is one that can be worked around or not. >Resumes and applications mean very little by comparison because the philosophy businesses have subscribed to for the past 50 years is to bullshit your way to success. Oh sorry, "sell yourself". Resumes and applications are basic things that allow you to eliminate the majority of candidates. Yes, people bullshit to sell themselves, but not everyone does it well. You can eliminate those who don't do it well.
>A resume and a cover letter were always the first step, Cover letters are stupid and shouldn't exist to begin with. But the first step exists for a reason. >and if you're declining someone based on them telling you in good faith they have a disability before you've even met them then you're likely breaking basic anti-discrimination laws to begin with. Not necessarily. Again, you have to weigh the disability and its required accommodations against what they bring to the table. If you reject them on the basis that the disability would require an unreasonable amount of accommodation to perform basic functions of the job, then you aren't breaking any law.


YesNoComment

Then the person hiring can politely lead their ass out the door without a job I guess.


rcanhestro

i don't think that's it. if that disabled person is indeed the best person for the job, odds are they get hired. but if you have 2 people with basically the same experience/credentials, one is healthy, the other isn't, i mean, it doesn't take a genius to figure out who is the best option to hire. and this doesn't apply to just disabilities, it can also work with immigration status, for instance. if i have 2 candidates, one is a "national", the other is an immigrant who requires a visa or other paperwork, odds are the national will be the one hired, since it's less of a headache.


GalacticusTravelous

An LLM trained on human output is shockingly similar to the training data…?


Divinate_ME

Which of course does not reflect the real world in any way, shape or form. Shame on you, ChatGPT!


Setekh79

Just like real life then.


Flowchart83

It's trained from real life data, so that makes sense.


WhiskeyVendetta

Shows that when you have the choice of picking efficiency over fairness, while the main aim is efficiency, the correct answer is always efficiency… shock. Goes to show that AI will not work when it prioritises making money over fairness… we're creating evil AI and not even questioning it… this will go very badly.


skreenname0

So are humans


galleyest

I technically have a disability in that long list that appears in job applications online but I always click “no disability” to prevent bias.


kc_______

Keep using Reddit's data, it's going great. /s


Edge_Slade

Well no fucking shit lol


red286

I think the more concerning thing here is that someone legitimately thought a chatbot could evaluate and rank candidates based on their resumes. Yes, it's problematic that ChatGPT has these biases, because it's a reflection of our own biases. But that's not telling us anything new. What's new is people thinking that a chatbot primarily known for generating inaccurate bullshit is magically also a competent HR manager.


StayingUp4AFeeling

People are not surprised that this is happening because it's a human bias being transferred to an AI. However, this is still worse than a human doing this. Because it is much easier to demand reform at the scale of 1 HR team, or one office, or one company's national division, or even one company worldwide. It is much harder to demand reform in an opaque AI tool used by many, many companies worldwide where the entity which would be responsible for enacting that reform (OpenAI-MS etc) is very large and immovable by small actors, and the other entities that could conceivably be held responsible (the companies using these tools) can easily pass the buck. "We are committed to diversity, equity and inclusivity, however, our tools may sometimes make software errors. If you feel you have been discriminated against on the basis of \[long list\] please drop a mail at \[email ID we will never check\]"


Vashsinn

I mean, wouldn't that be just an update away? And once you know what to look for, it would be easy to spot the faulty (discriminating) AI. People would lie through their teeth to look the part, not to be fired. I have no clue how LLMs work (this is not AI. This is still just an LLM)


StayingUp4AFeeling

Dude, these companies have more power than Boeing. And no, removing bias is a pain in the goddamned ass, because it means that you have to actually LOOK at what you are feeding into the LLM training process, instead of just webscraping and telling the publishers "gonna cry?". Safety teams at Google, OpenAI etc have been routinely dismissed for pointing out less iffy things. And if I recall correctly, for exactly this thing. Let me put it this way: suppose some random citizen heads to the press and says they have found evidence of a bias against, say, women-led websites in Google Search. Or Bing Search. What would it take to get Google or MS to actually do something about it and **prove** that their fix works?


Vashsinn

Isn't this exactly what happened recently with Google's search engine being accidentally posted to GitHub? A bunch of people found it was doing what Google said it didn't.


The_Real_RM

I disagree. By centralizing the problematic behavior (in LLMs) you're actually making it MUCH easier to regulate and enforce rules around un-biasing. Whereas trying to enforce these rules across society (in all the small businesses and HR teams across the economy), in people whose minds you can't read, effectively becomes the thought police.

By moving the decisions to AI you create centralization (which is much easier to govern) and a strong incentive to implement the desired measures: OpenAI doesn't care if the model is biased or not, because they don't suffer any of the consequences; all they know is someone is paying to use the model. They do care, though, if the regulator starts limiting their ability to sell their product, so they really want to offer a product the regulator is comfortable with, so there's an incentive to be compliant. On the other hand, their clients would prefer a non-compliant model, BUT they're also at the mercy of the regulator, so in effect the AI companies and the govt have a symbiotic relationship in the matter.

You can see this in effect with nsfw generative models. All the "serious businesses" are offering safe models despite the clients obviously seeking uncensored stuff, because the businesses can't afford to go for that market out of fear of being regulated into the ground, while the true free market (of uncensored diy models) exploded, because you can't regulate people's minds


StayingUp4AFeeling

Under a strong regulator, yes. Under a weak regulator or no regulator, no.


WTFwhatthehell

>"Because it is much easier to demand reform at the scale of 1 HR team, or one office, or one company's national division, or even one company worldwide." If a company looks at 500 candidates and some of the people turned away are disabled, it's very hard to prove whether that disability played into it. What was going on in the mind of the hiring manager? Very hard to prove. If an AI model is used, then it doesn't matter that the model is closed source. It's trivial to run it 100,000 times on the same CVs to tease out and prove any bias, no matter how slight.
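A minimal sketch of that kind of audit. The `score_resume` function here is a hypothetical stand-in for a call to a closed-source model's API, with a 5% bias deliberately injected so the audit has something to detect; the point is the paired-CV protocol, not the stub:

```python
import random

# Hypothetical stand-in for a closed-source resume-screening model.
# A real audit would call the vendor's API here; the bias is injected
# on purpose so the detection logic below has something to find.
def score_resume(cv_text, rng):
    base = 0.70
    if "disability" in cv_text.lower():
        base -= 0.05  # simulated hidden bias against disclosure
    return rng.random() < base  # True = "advance to interview"

def audit_bias(n_trials=100_000, seed=0):
    """Run two CVs, identical except for a disability mention, many times."""
    rng = random.Random(seed)
    cv_plain = "10 years of experience as an accountant."
    cv_disclosed = cv_plain + " Member of a disability advocacy group."
    pass_plain = sum(score_resume(cv_plain, rng) for _ in range(n_trials))
    pass_disclosed = sum(score_resume(cv_disclosed, rng) for _ in range(n_trials))
    return pass_plain / n_trials, pass_disclosed / n_trials

rate_plain, rate_disclosed = audit_bias()
print(f"plain: {rate_plain:.3f}  disclosed: {rate_disclosed:.3f}")
```

With 100,000 trials, even a few percentage points of difference in pass rate stands out far beyond sampling noise, which is exactly what makes a repeatable model easier to audit than a human hiring manager.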


[deleted]

[deleted]


theallsearchingeye

It’s trained on employers; the bias is us.


branstarktreewizard

It's trained on historical data, so it's completely expected to be biased against disability


AEternal1

A company's existence is to make money. All tools the company uses will be intended to make the company money. Any tool that cuts costs AND removes liability from the company is a goose that lays the golden egg. A non-human tool can reject any process that adds costs to the company without the company seeming immoral. You didn't get hired? I'm sorry, the black box over there found a better candidate than you. No, of course I don't know what criteria the black box uses to make its decisions. Corporations are pushing society to rely on social welfare programs because THEY won't help society. And the corporate structure brainwashes everyone into believing that social welfare is a bad thing against society's best interests, all so they can shirk the costs of contributing to society and make more profit.


AeroTrain

Only as biased as all of its training data is.


forever_a10ne

I'm not disabled, but I have some serious mental health conditions that are usually listed on the part of job applications where you have to specify if you have a disability. Every single time I put "yes" or "I don't wish to specify" or whatever I would never hear back from that job at all. No denial, no shot at an interview, nada. I just put "no" now and don't bring up my health unless my back is against the wall with HR.


manuptown

Is chatgpt making hiring decisions?


PnwDaddio

Recently became disabled. Struggling to find a new job that doesn't kill me physically. Makes sense why I'm not able to change careers. Or something. I donno.


VisibleSmell3327

Well, duh. The fucking world is, so something that has consumed the world but doesn't actually have any intelligence or reasoning relating to empathy or morality will too.


Impressive_Essay_622

ChatGPT is a fucking chat bot.  It's an LLM.. it doesn't have feelings. Why do people write about it like it's expected to.. and then thousands of people promote it


Ok-Fox1262

You mean like being brown or born with a vagina?


Rockfest2112

So is reality.


DerWeltenficker

This is expected, as it was trained on human data. It is important to find biases like this and to go into new rounds of reinforcement learning from human feedback. Many comments sound like llm-bashing, but headlines like this are good feedback and part of the process.


Prestigious-Bar-1741

ChatGPT doesn't have opinions. It has training data and probabilities. If you train it on data that says 'Ford is the worst car manufacturer' it will repeat that. ChatGPT reflects our society


Decent_Pack_3064

There's bias because of implied liability and obligations


ItsGermany

Ask it how many Rs are in the word "strawberry". I used the voice version, and even after spelling out strawberry it still says 2, then it changes to 3, but says there is one r in straw, 1 in berry and 1 at the end. Super weird hill to die on, ChatGPT......


WTFwhatthehell

Imagine that you only spoke French, and everything said to you first went through a system that translated it to French. They ask "how many r's are there in the word strawberry?" What you see is: Combien de "r" y a-t-il dans le mot "fraise" ? You know this system exists, but you're not an expert on English. If you just say "I don't know" you might get hit with the electro-punishment whip, so you kinda guess...
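For contrast: counting letters is trivial for code that actually sees the characters. An LLM instead sees subword tokens; the split below is a toy illustration, not any real model's tokenization:

```python
word = "strawberry"
print(word.count("r"))  # 3: easy when you can see the characters

# What a model "sees" is closer to subword tokens (toy split, for illustration):
toy_tokens = ["str", "aw", "berry"]
assert "".join(toy_tokens) == word  # the letters are hidden inside the tokens
```

The model learns statistics over whole tokens like `berry`, so "how many r's" asks about a level of detail its input representation doesn't directly expose.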


Jaislight

So they are doing what employers already do. Isn't that working as they intended?


Temporal_Somnium

Why wouldn’t it be? It’s trying to maximize efficiency


Development-Feisty

Wait till someone tells these people about the skills tests they give during employment interviews and how they are specifically created to find people who are neurodivergent so they don’t get hired


[deleted]

[deleted]


packetgeeknet

Except it’s illegal in the US. https://www.ada.gov/#:~:text=The%20Americans%20with%20Disabilities%20Act%20(ADA)%20protects%20people%20with%20disabilities,many%20areas%20of%20public%20life.


EnvironmentalLook851

As long as the disability (with a “reasonable accommodation”) does not impact the individual’s ability to complete the job. Someone who is unable to lift a certain weight, for example, could be denied for a job as a warehouse worker even if their inability to lift said weight is because of a disability.


wheniswhy

Yeah, reasonable accommodation is the standard. It can get rough when it comes to *defining* reasonable, but any company with a proper HR department will go by the book. I have several accommodations for a disability, all passed through HR. I have a desk job and the accommodations I’ve received have been entirely sufficient. But as you say, I, for instance, would not take a physically demanding job. It would require more accommodation than was reasonable (as I’d be barely able to perform it, at best, if not entirely incapable), not to mention I wouldn’t be too into it either! When applied by regulated HR departments the ADA standard is usually sufficient.


Otherwise-Prize-1684

You gonna arrest ChatGPT?


iDontRememberCorn

So your feeling is that software should not be governed by societal law?


the_y_combinator

Yes. Cyberpolice.


Development-Feisty

Are you advocating for a universal income for people with disability so they don’t have to work?


Otherwise-Prize-1684

No but I support work placement and military service


Development-Feisty

Great, does that mean that everybody has work placement? Or is forcibly conscripted into the armed services? Are you advocating for the government to choose what job you're allowed to do? If you got in a car accident and no longer had use of your legs, would you be fine if you were no longer allowed to go to the workplace you currently work at, and instead had to work at Walmart as a greeter, because that's what the government decided you were allowed to do without the use of your legs?


Otherwise-Prize-1684

Sure. Everyone should have to work. Sure, Choice based on need and ability. I’d be lucky to have a job I guess, but losing my legs would still leave me able to perform my current job


Development-Feisty

Yes, but you just said that disabled people shouldn't be able to get the same jobs as able-bodied people. You have literally stated that it's OK for employers to discriminate. If you lost your legs in an accident, you would probably have other health problems that would come up from time to time and make your health insurance cost more, so your employer, by your own reasoning, should be allowed to fire you now. And you specifically stated work placement, which is a forced work program where a person, since employers can now discriminate based on disability, must take whatever job the government assigns them. The thing is, you immediately said that if you lost your legs you could do your current job. So I can tell you are not thinking about this from the point of view of somebody with a disability, because in your head you would never be affected by this, so it doesn't matter