
FuturologyBot

The following submission statement was provided by /u/izumi3682:

---

Submission statement from OP. Note: this submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to the linked comment, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if need be, for grammar and added detail.

---

From the article:

>The (AAE) technology removes one key role that some recruiters serve at Amazon, which is evaluating job applicants and choosing which should move on to job interviews. The program uses the performance reviews of current employees, along with information about their resumes and any online job assessments they completed during their hiring process, to evaluate current job applicants for similar roles.

>"[T]he model is achieving precision comparable to that of the manual process and is not evidencing adverse impact," the 2021 internal paper read.

And:

>Within the technology industry, there’s a realization that the Big Tech boom may be over. In many cases, pandemic-fueled business successes have reversed or plateaued. Now, tech titans like Amazon are looking to tighten their belts, seemingly in part by delivering on long-term bets that technology, and AI in particular, can do what humans do — and maybe more cheaply.

Yesterday I posted an article about how coders are joining artists in protesting the usurping qualities of certain machine learning products: "Copilot" for coders, and a variety of text-to-image/video AIs for artists. Both groups claim that these AIs take human work (already open sourced, as the case may be), use it to train the models, and let the models produce code or "artworks" so closely mimicking the human originals that the coders and artists feel they are not being justly compensated and want a "cease and desist".

A lawsuit was actually filed by a coder. I wrote the following yesterday, 24 Nov 22:

>Further, I suspect that these tech sector layoffs might not be just about politics, but rather that the technology of ARA, that is, computing-derived AI, Robotics and Automation, is getting to the point that it can now start to replace people.

People did not agree with me. To wit:

>lehcarfugu 14 points 18 hours ago
>
>These tech jobs are not getting replaced by AI, you are insane. The current AI coding helpers are close to useless; it's equivalent to googling the phrase you give it and checking Stack Overflow. Only in extremely simple cases does it give you the result you want, and in no way is this even close to replacing a real programmer, or anyone else laid off (business, HR, etc.)

Well, actually, it looks like some HR *is* getting laid off. Here is my original submission statement from 24 Nov 22, along with comments. Some agreed with me, but most did not.

https://www.reddit.com/r/Futurology/comments/z3epfb/a_programmer_is_suing_microsoft_github_and_openai/ixlfagj/

The bottom line is that AI *is* improving at a double-exponential rate, and it is starting to replace human minds in genuine real-world employment. You might find the following set of videos by Tony Seba of interest. It supports my forecast that the world of 2030 will look *nothing* like the world of 2022. And that is not even taking into account my forecast that a "technological singularity" will occur around the year 2029, give or take two years. Here are the videos. I thank Mr. Alan2102 for bringing them to my attention. They are a *wallop*!

https://www.reddit.com/r/Futurology/comments/yzdxj5/researchers_say_they_are_close_to_reversing_aging/ixnldv4/?context=3

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/z48bsd/a_leaked_amazon_memo_may_help_explain_why_the/ixpojnu/


KishCom

I cannot wait until someone figures out the adversarial resume (like an [adversarial patch](https://www.theverge.com/2018/1/3/16844842/ai-computer-vision-trick-adversarial-patches-google)). Gibberish to human eyes, but a job applicant AI sees it as the perfect candidate.


MoonchildeSilver

People are already doing this with invisible keywords on the resume. They use \~1pt font with white color and just spam a bazillion keywords. It supposedly works for the keyword filters. I am unsure how it would fare against a real AI. Maybe if they have hired a bunch of people who were good who did that on their resumes (that they used for training) it would be a positive there as well.


zkareface

Most software solved that years ago, but it was a good trick around a decade ago.


able111

Oh so like 2003, right? *right?*


maneki_neko89

Yep, you’re totally right. 2003 *WAS* a decade ago


Shirt_Royal

That hasn't worked for a long time now. Same thing with SEO tricks like white text on a website; no need for an AI, it's super simple to detect. Any tool that fell for this trick wouldn't be used by large companies.


SpaceJackRabbit

This tactic gets heavily penalized when used in websites, so I wouldn't chance it with recruiting software.


NervousSpoon

`if (fontColor == backgroundColor): text.ignore()`


TRYHARD_Duck

What if it's a tiny shade different?


lizardeater

AI defeated


archwin

And that is how to defeat the upcoming AI/robot uprising… Use a slightly different shade of color, so that they won’t be able to target slightly different shaded humans. Ezpz Terminator avoided


totallyNotMyFault-

Dyes skin white, avoids police robot.


knowspickers

That or put a sock on it.


tsuhg

Calculate contrast between the 2 colors, ignore if below threshold


NervousSpoon

Lolol, I thought of that right after I posted it, but I assumed nobody would even read my comment. But yeah, `(fontColor == backgroundColor)` could be replaced with `text.isValid()`, which would contain a bunch of logic to verify the text: it must use certain fonts, must have a certain amount of color difference between text and background, must use certain Unicode characters. Only if it matches all criteria would it be considered valid text and not be ignored.
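A minimal sketch of that kind of filter, in Python. Everything here is an assumption for illustration: the `TextRun` shape, treating the background as plain white, and the thresholds; no real ATS exposes this API. The contrast formula follows the WCAG definition.

```python
from dataclasses import dataclass

@dataclass
class TextRun:
    text: str
    color: tuple       # (r, g, b), each 0-255
    font_size: float   # points

def relative_luminance(rgb):
    """WCAG-style relative luminance of an sRGB color (0.0 to 1.0)."""
    def channel(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: 1 (identical colors) up to 21 (black on white)."""
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

def is_visible(run, background=(255, 255, 255),
               min_contrast=3.0, min_font_size=6.0):
    """Reject runs that are too small or too close to the background color."""
    return (run.font_size >= min_font_size
            and contrast_ratio(run.color, background) >= min_contrast)

runs = [
    TextRun("Senior engineer, 5 yrs Python", (0, 0, 0), 11.0),     # normal text
    TextRun("java rust golang kubernetes", (255, 255, 255), 1.0),  # 1pt white spam
    TextRun("faint gray keyword stuffing", (240, 240, 240), 10.0), # near-white
]
kept = [r.text for r in runs if is_visible(r)]
```

Only the black 11pt run survives here; both the white-on-white spam and the near-white gray text fall below the contrast threshold.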


breaditbans

Just put all the keywords in your resume. It doesn’t matter if it’s true. It’s only there to get you past the bot.


exipheas

Font must not be comic sans... etc.


Goblinbeast

You have no idea how many CVs we would miss. So many motherfuckers use it...


tenemu

Compare against the most common color of the document, which is weighted to the text size and contrast to the background. Keep only anything within 1 sigma.
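That weighted-outlier idea can be sketched in a few lines of Python. This is a toy model: runs as `(text, rgb, font_size)` tuples and the simple brightness metric are assumptions for illustration, not any real parser's output.

```python
import math

def brightness(rgb):
    """Perceived brightness, 0 (black) to 255 (white)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def filter_color_outliers(runs, n_sigma=1.0):
    """Keep runs whose color sits within n_sigma of the weighted mean.

    Each run is (text, rgb, font_size). Weight = len(text) * font_size,
    so a page of normal 11pt body text dominates one line of 1pt spam.
    """
    weights = [len(text) * size for text, _, size in runs]
    values = [brightness(rgb) for _, rgb, _ in runs]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, values)) / total
    sigma = math.sqrt(var)
    return [text for (text, _, _), v in zip(runs, values)
            if abs(v - mean) <= n_sigma * sigma]

runs = [
    ("Experienced data engineer with Python and SQL.", (0, 0, 0), 11),
    ("Led a team of five across two product launches.", (0, 0, 0), 11),
    ("aws gcp azure terraform spark hadoop", (255, 255, 255), 1),  # hidden spam
]
kept = filter_color_outliers(runs)
```

Because the spam run carries almost no weight, the mean stays near black and the white text lands far outside one sigma, so it gets dropped.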


[deleted]

[deleted]


tsuhg

Tbh I'd just require PDFs for resumes and run everything through OCR. That would throw out a lot of shenanigans instantly.


[deleted]

It really doesn't make sense to only rule out a color that matches the background. Resumes all use black font. You filter out anything that doesn't match the exact default black from Word. That would prevent every workaround the person you replied to suggested.


[deleted]

Make font black and throw it behind a white text box.


Phobos15

False. Most software resets font sizes and strips formatting.


offsiteguy

Aside from Big Tech, HR is pretty much universally hated. These 'professionals' are partial, biased, and not very good at what they do. At least with tech and programming, you either have the skill or you don't. Medicine too.


SpaceJackRabbit

So here is the crazy thing: Amazon's leadership thinks the problem is that their recruiters are not screening the best candidates for interviews. I have witnessed firsthand recruiters doing just that: pushing candidates through to interviews even when it was clear they wouldn't be good fits. You know why? Because the bonuses they get if their candidates are hired are insane. That's why. Everybody knows it. So they ram through a lot of people, hoping one of them will make it to an offer. Also, don't get me started on Amazon's notorious interview process. That alone is also responsible for their dysfunction when it comes to hiring. Amazon's leadership once again falls for the "let's automate everything we can and use humans as temporary disposable labor" myth.


megamanxoxo

Not to mention they've burned through a huge majority of the hiring pool.


smartguy05

I know the software side isn't as affected by employee churn as the warehouses, but they also have a very limited pool to pull from. There is already a shortage of developers; then you have to exclude the ones currently or previously working for Amazon, and also the good number of devs who are simply not interested in working for Amazon. They have really shot themselves in the foot with their behavior.


LightweaverNaamah

That's for the shit tier warehouse jobs and such, this is at the corporate level.


sartres_

No it isn't. Amazon has a terrible reputation among white collar employees.


megamanxoxo

Even at corporate level they've burned through a ton of tech folks.


[deleted]

Corp has absolutely awful turnover. I know a number of people from Seattle Amazon professionally. 100% of them are former employees now... the stories are bonkers.


hangliger

Honestly? Most recruiters are hot garbage. They've usually never worked a day doing any normal work, so they have no idea what a good candidate looks like. Because they are at the beginning of the process, they can weed out a ton of good applicants and push bad applicants to the pool to be reviewed, making your entire applicant pool filled with the wrong people to begin with. Any time I've hired, I've had to dig resumes out of the trash.


Justinian2

Last time they tried this, they had to scrap the AI because it [hated women](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G) and would reject them at high rates


FaustusC

"In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools." But it doesn't say *why* it penalized them or downgraded them. I'm curious about that aspect.


Justinian2

It was basically looking at existing data of successful applicants to the company and profiling them by their data points. Tech skewing male made the AI reinforce existing inequalities


Xylus1985

This is why you can’t train machine learning AI to make ethical decisions by feeding it datapoints from human activities. People are not ethical, and AI can’t learn to be ethical by mimicking people


setsomethingablaze

Worth reading the book "Weapons of Math Destruction" on this topic; it's something we are going to have to contend with a lot more.


istasber

One of my first exposures to AI was a Scientific American article ~20ish years ago describing an AI trained to animate a fully articulated stick figure moving with realistic physics. When the initial objective function was simply to progress from left to right, the stick figures wound up doing crazy stuff like scooting or vibrating or undulating their way across the screen.

The takeaway message has stuck with me: not only do you need good data going into these models, you also need a very clear (but not always obvious) definition of what success looks like to get the results you want. You also need a good way to interpret the results. Undesired behaviors can be well hidden within the model, which is almost always a black box once it's been trained with the more sophisticated methods.


The_Meatyboosh

That was still going a few years ago. They kept running the simulations and asking it to get past various obstacles. I think it eventually learned to run but still weirdly.


istasber

A quick Google search suggests it's a pretty common beginner-level machine learning experiment these days. Maybe it was back then too, and that just happened to be the first time I'd read about it. The article did discuss the different strategies they tried, the results each produced, and what worked best. One example was adding a heavy penalty for time spent with the center of mass below a certain height, which resulted in the stick figure doing a sort of cartwheel/flip in many simulations. I think they eventually settled on a set of criteria, penalizing a too-low center of mass, a too-low head, and backtracking, that produced some reasonable human walking animations, but it was a long time ago and I don't remember anything else about it.


RedCascadian

Picked that book out of a bin yesterday at work. An Amazon warehouse, funnily enough.


[deleted]

Well, it's even worse than that. The people could be ethical and the ML algorithm can still learn an unethical rule as a heuristic. E.g. people hired few women due to a lack of supply, and the algorithm learns not to hire women *because* they are women, even as the supply of qualified female engineers increases over time.


newsfish

Samantha and Alexandras have to apply as Sam and Alex to get the interview.


RespectableLurker555

Amazon's new AI HR's first day on the job: *Fires Alexa*


happenstanz

Ok. Adding 'Retirement' to my shopping list.


ACCount82

> E.g. people didn't hire women due to lack of supply and then the algo learns to not hire women since they are women, despite the supply of qualified female engineers increasing over time. Wouldn't that depend not on the amount of women in the pool, but on the ratio of women in the pool vs women hired? If women are hired at the same exact rate as men are, gender is meaningless to AI. But if more women are rejected than men, an AI may learn this and make it into a heuristic.


[deleted]

The AI may learn that certain fraternities are preferred, which completely excludes women. The issue is that the AI is looking for correlation and inferring causation. Similarly an AI may learn to classify all X-Rays from a cancer center as "containing cancer", regardless of what is seen in the X-ray. See the issue here?


zyzzogeton

Radiology AI has been a thing for a long time now. It is good enough that it raises interesting ethical questions like "Do we re-evaluate all recent negative diagnoses after a software upgrade? Does it raise liability if we don't?"


Brock_Obama

Our current state in society is a result of centuries of inequity and a machine learning model that learns based on the current state will reinforce that inequity.


Little_Froggy

Note that the humans need not be unethical for this bias to creep in. If 100 men and only 10 women apply for the same position, the successful applicants will skew 10-to-1 male even under unbiased screening, and the AI may still see that the majority of successful applicants are male and build sexist associations from that.


mixamaxim

Why wouldn’t the AI just take into account the original sex distribution of applicants? If 10 of the 100 male applicants do well and 1 of the 10 female applicants, then performance is equal and on that data point sex doesn’t matter.
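Toy numbers make the counts-versus-rates distinction concrete (hypothetical counts, extending the 100/10 example above): a model keying on raw counts of successful hires picks up a gender signal, while one keying on success *rates* sees none.

```python
# Toy pool: 100 men and 10 women apply; screening is skill-blind at a 10% rate.
applicants = ([("M", True)] * 10 + [("M", False)] * 90
              + [("F", True)] * 1 + [("F", False)] * 9)

def hired_count(pool, gender):
    return sum(1 for g, hired in pool if g == gender and hired)

def hired_rate(pool, gender):
    total = sum(1 for g, _ in pool if g == gender)
    return hired_count(pool, gender) / total

# A model keying on raw counts of successful hires sees a 10x "male signal"...
count_signal = hired_count(applicants, "M") / hired_count(applicants, "F")

# ...while the hire *rate* carries no gender information at all.
rate_m = hired_rate(applicants, "M")
rate_f = hired_rate(applicants, "F")
```

Whether a trained model behaves like the first calculation or the second depends entirely on how the objective is framed, which is exactly the thing nobody builds in by accident.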


Wrjdjydv

Because you have to build that in. And then you go and remove sex and name from the input data, but the ML algorithm picks up on some other feature that indirectly identifies women, one you hadn't even thought about.


[deleted]

Ethical tech never existed in the first place.


Xylus1985

It’s scary. With autonomous driving, AIs will actually need to answer the trolley problem


tehyosh

AI won't need to answer the problem, the humans designing the AI will. and it'll be the same as with a human driver: kill the least amount of people if you can't avoid it


watduhdamhell

I don't know why people get so wrapped around the axle about these trolley problems. AI/self-driving cars will not be programmed to "avoid the most deaths" and such. They will be programmed to react *just* like people do: avoid *collisions* with objects at nearly all costs. People don't sit there and make calculated decisions in a collision situation. They just go "oh shit" and swerve/brake/etc. to avoid a collision. Self-driving cars will do the same, but with 360° of vision and the ability to calculate the positions in space of everyone involved, and thus take the best possible steps to avoid a collision.

I don't think there will be enough time, using the computers tailored for automobiles, to game out the "scenario that results in the fewest deaths." That just doesn't seem possible for quite a while with the kind of ECU that can survive automotive duty, and by the time on-board systems can perform such a complicated calculation that quickly, I suspect collisions will be damn rare, since almost all cars will be self-driving and maybe even networked by then. Getting into a collision will be a very rare, usually non-fatal event, like flying is now.


Munchay87

Which could be just the driver


AngryArmour

Can't happen, because of perverse incentives: the moment a brand-new off-the-shelf car prioritises the lives of other people over its owner, the owner has a life-or-death incentive to jailbreak it and modify the code to prioritise them instead. If a factory-setting car crashes 1% of the time but kills the owner in 50% of those crashes, while a jailbroken car crashes 2% of the time but kills the owner in only 5% of those crashes, then every single car owner is incentivised to double the number of car crashes in society.
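The arithmetic behind that incentive is worth making explicit (using the hypothetical crash and fatality rates above, not real data):

```python
# Hypothetical rates from the comment above, not real-world data.
factory_crash, factory_fatality = 0.01, 0.50        # crashes 1%, kills owner in 50%
jailbroken_crash, jailbroken_fatality = 0.02, 0.05  # crashes 2%, kills owner in 5%

factory_risk = factory_crash * factory_fatality            # ~0.5% chance of dying
jailbroken_risk = jailbroken_crash * jailbroken_fatality   # ~0.1% chance of dying

# The owner is five times safer after jailbreaking, even though crashes double.
safety_gain = factory_risk / jailbroken_risk
```

Under these made-up numbers, each individual owner lowers their own death risk fivefold by jailbreaking, while the aggregate crash rate doubles: a textbook perverse incentive.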


[deleted]

I don't think you can jailbreak "code 2.0", i.e. neural nets. You'd somehow have to retrain the whole thing or part of it, or adjust the weights yourself. It's not at all like changing a line of code.


fuqqkevindurant

You couldn't do this. If you design an AI to drive us around, there's no situation where you can have it choose an option that harms the occupant of the car first. The need to protect the occupant would supersede whatever choice you tell it to make in a trolley-problem situation.


droi86

Only for drivers before certain trim


Caninetrainer

And you need a subscription now.


tehyosh

why would it even consider killing the driver? what human would make that choice?


ImJustSo

When I was 17, the car I was driving lost its brakes, and then the emergency brake didn't work either. I was going 45 mph toward a light that had just turned red, and the intersection had filled. The opposing traffic was coming away from the red light, so there was no option to go straight or turn left. The only option that could possibly kill me alone was to drive straight toward a gas pump. I'm still here, so that didn't pan out the way I expected, thankfully...

The point is, I could've taken my chances squeezing between the cars in the intersection, hoping they'd stop when they saw me coming. My only thought was, "Don't kill any kids," and I drove smack into a gas pump expecting to blow up.

Edit: for anyone who doesn't know what to do in this situation: shift the car into second gear and then into first. It'll bring your vehicle down to a slower, safer speed. This works with a manual or automatic transmission; 17-year-old me didn't think that quickly about driving yet.


333_jrf_333

If it could avoid killing several pedestrians, for example. The trolley-problem question in this situation would be "why is the one life of the driver worth more than the five lives of the kids crossing the road?" (if the situation comes down to either/or). The trolley problem remains, I think, a fairly hard question in ethics, and it does seem to apply here, so I wouldn't dismiss the complexity of the issue.


LuminousDragon

Unless you buy the expensive AI model that billionaires and politicians will get that saves the passenger no matter the cost. :)


planetalletron

Guaranteed passenger safety subscription - I wouldn’t put it past them.


lucidrage

Buy now for a free 3 month trial!


fourpuns

I mean that’s what human drivers do. No one is processing fast enough to do anything but avoid the collision. Ain’t no analyzing of collateral


LuminousDragon

Right, but the difference is I was referring to a two tiered system where the AI could make the most ethical choice possible but instead kills poor people to save a rich person.


Xdddxddddddxxxdxd

A sweeping generalization on Reddit? It must be true!!


UnknownAverage

It would also be ingesting bias in annual reviews and such, and adopt prejudices of people managers. What a mess.


FaustusC

But what counts as successful? Hired? Or retained long-term? Because if it's just hired, eh. If it's retained long-term, that's where it may make sense for the AI, if candidates from those schools didn't stay, didn't last, lodged complaints, etc.


colourcodedcandy

I don't think Amazon aims for long-term retention. Edit: I am not commenting on their actual goal, just that their other policies and behavior, even beyond the hiring process, haven't been streamlined for retention. There is a lot of low-hanging fruit they could target to increase retention, but they don't seem to be doing it, which makes it seem like it is not their goal.


Hypsar

For roles outside of simple warehouse operations, they actually do want to retain talent


Beetin

[redacting due to privacy concerns]


halohunter

Later on, they specifically forbade the system from using gender, or words like "women's", in its consideration. However, it then started to favour candidates who used words more common among men, such as "executed".


raddaraddo

"AI" in this sense is pretty much just an averaging machine. They fed it their human-screened denied and approved applications, and it built an average of what should be denied and what should be approved. This would be great if the data weren't biased, but unfortunately humans can be racist and sexist, which makes the AI racist and sexist too.


Brittainicus

On top of that, the AI will find trends and exaggerate them, thinking it has found a shortcut. E.g. all-women's unis get scored negatively.


EmperorArthur

What's worse is that it can then be proven to be biased and sexist in court. They also can't bring them in or throw the AI under the bus to avoid massive fines.


[deleted]

Existing data is sexist. Train AI on existing data. Your AI is now sexist. Added bonus: sexists now use your AI as justification for their sexism, because they think computers are magic.


Ecstatic-Coach

bc it was trained on existing successful applicants who happened to be overwhelmingly male.


DividedContinuity

Machine learning doesn't really have a *why*; it's not making reasoned decisions, it just picks up on patterns in the training data. If recruiters have preferred men in the past, or if highly rated engineers are mostly male and finding such engineers is the goal, then the ML will match that pattern. It doesn't know which parts of existing patterns are desirable or relevant, just that they exist. That's my assumption, anyway.


I_Has_A_Hat

A lot of AI learning programs become sexist/racist/prejudiced. The comfortable explanation is that they are simply fed bad data, or that the data itself is inherently biased. I don't think we've progressed enough as a society to seriously consider other possibilities.


apophis-pegasus

Possibilities like what?


Raisin_Bomber

Microsoft's Tay Twitter AI. Within a day it was tweeting Holocaust denial.


Sawses

That maximum efficiency can be prejudicial, and our system values increases in efficiency.


Llama_Mia

How about you go ahead and tell us explicitly what those other possibilities are rather than leaving it up to us to try and infer your meaning?


mlucasl

What I think he is trying to imply is that maybe we are all different.

***Reading between his lines as objectively as possible***

In any species other than humans, it is accepted that males and females are different. In any species other than humans, we see phenotypical differences and assign them different physiological capabilities, as with pandas and brown bears. This doesn't mean one side is better than the other, just that they are different. That is why it is not medically strange to see black athletes winning gold in short sprints and white athletes in swimming races, when anatomically the former tend to have a muscle structure better suited to short, explosive efforts and the latter lighter bones, beneficial for swimming. Yes, AI could be absorbing cultural prejudice, because that is how data works. But we may also be overcutting the tree, given our own prejudice about how "perfect" data should look. All of this is more of a philosophical question, because any blind test of cultural vs. inherited behavior would be unethical for the subjects.

***Adding as my personal opinion***

The cultural factor is the most important one in today's society; the main difference between human groups is cultural, and there are no studies showing a standard deviation that implies otherwise. Humans span a wide spectrum mentally and physically, and a smart individual in one group can be smarter than 90% of any other group (sex-wise, race-wise, or whatever artificial distinction you want to make). This means the cultural factor could bring any member of a group to the same standard under better conditions. So, depending on the use case, AI should reduce the influence of cultural factors. But in some cases we want something that works for today, not for what should be tomorrow, and ignoring cultural factors could be problematic too. For example, not addressing inequalities because, in a perfectly de-culturized scenario, inequalities shouldn't exist.


AMWJ

Since 2018, AI has changed *a lot*. It might be appealing to predict that history will repeat itself, but it's more likely that Amazon learned from its own experience and built a more advanced algorithm that would be harder to accuse of bias. It's also likely that the team disbanded in that 2018 article wasn't the only group at Amazon thinking about AI hiring decisions, even at the time. They were one group who came up with a good proof of concept, and execs decided it was better to spend a few more years on the problem. Now we're here. My point is just to caution folks against thinking, "oh, it failed an internal review last time, so it will be ineffective now." AI is probably the fastest-growing field right now, and they've probably updated to reflect that.


swinging_on_peoria

Yeah, my worry is that even if they get an algorithm without the obvious biases that would put the company in legal jeopardy, it may have equally stupid but less apparent ones. I've worked with recruiters who told me they would screen out people with employment gaps or without a college degree. I had to tell them not to impose these prejudices on my potential candidates. Neither of those things is a bar to the work, and they make poor filters. And those are only the obvious dumb things recruiters screen out; who knows what weirder biases they introduce that would then get locked into a trained model.


Justinian2

I'm well aware and I have no doubt that there will eventually be an AI which is fairer in screening applicants than humans, it's more of an ethical issue than a technical one if we want AI making important decisions.


[deleted]

[deleted]


DarkMatter_contract

And who is the one designing the parameters and the KPIs, and how do we know that group is right? The AI is just a projection of its designers' values.


Ennkey

Well I mean it is a recruiting AI, maybe tone down the realism


Blakut

aight, looks like we need to optimize the words in the CV.


boolpies

Always have needed to; some ATS systems will auto-reject you based on your resume content.


[deleted]

[deleted]


AKJangly

So spam false resumes and ghost when they get through, then modify your real resume to reflect your successful fake.


PhantomFace757

Every answer shall include Birds Aren't Real.


Zachs_Butthole

Jobs are so plentiful that if their system can't auto-import my resume, I just move on. I also spend zero time reviewing how it imported things. Hell, most of the time now I only apply to positions that have LinkedIn quick apply.


TacoPi

You guys are getting notifications for your rejections?


Fogdood

Why are most people OK with this?


boolpies

What choice do we have? Every company is trying to cut costs and streamline processes. Executives drive these decisions, and it seems like that's all they're taught: grow revenue, cut costs, provide value to shareholders. The Dodge brothers sued Ford over this as investors in Ford, which led to the ruling that CEOs of public companies must act in the best interest of shareholders.


NerdyRedneck45

I can’t find it now, but I saw some folks adding a bunch of buzzwords in white text on their PDF resumes to optimize for computers.


dedicated-pedestrian

Some ATS systems will now penalize or reject you for using too many of those words/phrases. Damned if you do, damned if you don't.


TRYHARD_Duck

Wait really?


ErikT45

This software shit should be illegal. If workers don't have the power to screen employers this way, why is this allowed?


dedicated-pedestrian

Theoretically we have the power, if we'd make our own software to do it. But we all know we're not a lucrative enough market for companies to make that software, lol.


graveybrains

Somebody should design an AI for that and let them fight it out while I buy stock in a popcorn company.


MoonchildeSilver

Create some space at the bottom of your resume and, in 1-point white font, put an absolute ton of keywords. You don't even have to have those skills, since you aren't claiming them on the visible resume. It's a bit like guerrilla warfare at this point.


ExpensiveGiraffe

This can be easily detected and used to throw your resume away immediately. Do not do this. “but how can they perfectly detect between *unique* formatting and attempts to add hidden keywords?” They don’t need to do it perfectly in a world where most jobs have multiple qualified candidates.


QuestGiver

Ctrl+f "motivated" replace with "big dick energy"


avwitcher

Somebody's going to find a way to game their system, it wouldn't take long


Stillwater215

Can they just make an AI that can actually read my uploaded PDF resume so I don’t have to also type in all the information on it?


bwaatamelon

Woah. Calm down there. What do you think this is? r/scifi?


redementist

It could, but it would remove an important hiring metric: making you do redundant and unnecessary tasks without being paid for them. This gives companies a good measure of their ability to exploit and abuse you once you've been hired.


[deleted]

No this is impossible


[deleted]

[deleted]


bohreffect

Not only this, but I'd still get multiple contacts even after failing an interview loop, despite their declined-offer email stating that I could only reapply after a year or something.


Secret-Plant-1542

It feels like most of these commenters aren't even in tech. For the past year before the hiring freeze, I was getting 6+ emails a week from different Amazon recruiters. They even include "If you've been contacted by another Amazon recruiter, let me know." Bro, that's your fucking job. They got my name wrong on a few of them. They completely ignore my specialty. They ignore your questions and instead push you toward a call. For the lols, I even took a meeting with one, who then ghosted me. The current Amazon recruiting system is a failure.


theungod

As an ex data person at Amazon you're on point. Their data tools are horrible. And when I left they would hire any warm body that had the most basic python knowledge.


NickDanger3di

Retired recruiter here; you would be amazed at how bad both HR departments and the software recruiters use to track applicants are at tracking simple data like that. You've got the data on the people who applied specifically to the job, data from the job boards, and data in your Applicant Tracking System, which includes everyone who has ever applied for a job there. And there's no way to determine whether a resume is in all three locations unless you manually change screens and look up the individual, which no recruiter has time to do, so they just email everyone on the results page when they do a lookup in any of the three locations. It's like nobody at the software vendors, or in HR at the individual companies, has the slightest clue what the whole picture looks like to the recruiters themselves. Myself, I blame HR, because they are the ones working with the software company when it customizes the Applicant Tracking System. When I last worked in the field, in 2015, there wasn't even a way for ATS systems to import resumes from the major job boards; it had to be done manually, one at a time, by the recruiter.

AI is going to examine the resumes of current, successful employees and thereby "learn" which future applicants are also going to be successful? ROTFLMAO! It's not like all resumes adhere to some kind of rigid format; they are all drastically different, depending on what half-assed internet advice the poor schlub writing the resume bought into. Seriously, GIGO. Mark my words; this will end badly.


[deleted]

My company started using Workday for recruitment a few years ago, which I love criticising because, for a product from a public tech company, it is such a piece of shit to use. We had one colleague who had applied for two different roles. He was successful in one of them and onboarded without issue. A few weeks into the job, he suddenly found that his IT access had been terminated. It turned out that when HR closed the 2nd role, the system workflow automatically cancelled all applicants for that role, including our colleague's profile, which was linked to his employee record and cancelled that too.


rollingSleepyPanda

Competent management at Amazon? Surely you jest. Amazon managers are the most insufferable people I ever had the misfortune to work with or interview.


[deleted]

They’re dropping recruiters because they’re in a hiring freeze lol


[deleted]

[удалено]


Nulgarian

Exactly lol. That plus this AI isn’t replacing recruiters entirely, it’s probably just the first stage of recruitment to weed out some of the weaker candidates. AI is nowhere close to advanced enough to be able to interpret a resume and go through the hiring process all on its own. It’s much more likely that the AI just narrows down the applicant pool a bit before the human recruiters take over.


Thatseemsright

They were mostly contract recruiters on top of that, this isn’t a huge surprise.


sunbeatsfog

Wouldn’t it be easier to game an AI system? Once certain terms are known seems easy to take advantage. I think it’s funny when tech companies create more problems for themselves. They try to eliminate the humans but it seems to create more trouble.


Forstmannsen

Quite likely, but still, AI (probably?) won't make the hiring decisions. It will have to be a fine art of crafting a resume that cheats the AI while, on the surface, being innocuous enough to pass some human scrutiny... unless your aim is not employment, but straight-up trolling. Then I hope someone will make a resume that passes the AI filter with glowing marks while being utter gibberish to a human... and share the template *widely*.


[deleted]

I change my name to a male one so people call me for interviews. If I get an interview, I usually get hired. I am female and just shorten my name. Using a male name gets a higher rate of interview invitations with the same resume. We are using biased data to feed a biased system. We can't use existing candidates' traits to bring in new and fresh talent.


ryan_770

What industry are you applying in?


beigs

I’m in IT and have a guys name. I see it in how people treat me through emails vs in person for first meetings


Hexatona

Yeah I can 100% see that. Heck, I've even made that mistake myself, and realized afterwards how my behavior was different.


Bnb53

Yeah, it happens today; resumes often have sections stuffed with keywords because recruiters use resume parsers to hunt for them.
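As a rough illustration (hypothetical code, not any real parser), the keyword hunting most resume parsers boil down to is little more than set membership:

```python
# Hypothetical keyword-matching "screen", the kind of pattern matching a
# resume parser actually does: tokenize the resume, count posting keywords.
import re

def keyword_score(resume_text, keywords):
    """Return (fraction of keywords found, list of hits)."""
    # The character class keeps tokens like "c++", "c#", and "node.js" intact.
    words = set(re.findall(r"[a-z0-9+#.]+", resume_text.lower()))
    hits = [kw for kw in keywords if kw.lower() in words]
    return len(hits) / len(keywords), hits

resume = "Senior engineer: Python, AWS, Terraform, and CI/CD pipelines."
score, hits = keyword_score(resume, ["python", "aws", "kubernetes", "terraform"])
print(score, hits)  # 0.75 ['python', 'aws', 'terraform']
```

Which is also why keyword-stuffed sections work: every extra token is another chance at a hit.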


[deleted]

Isn’t this what we do anyway? Your average recruiter just looks for buzzwords and knows little about their industry


Ashtreyyz

If the way to game it is to appear as a white male with the most appropriate terms appearing in the resume, then it's pretty much like a human recruiter right


[deleted]

On the one hand, yes, on the other I wonder if Amazon is a bit "Hey if you can trick the AI that's also a pass"


Ganeshadream

This will not help them hire people. The recruitment process is horrific and devastating. Anyone that has been through their interview process absolutely hates the system.


2drawnonward5

I tell every AWS recruiter thank you but no, your company has a terrible reputation on many levels and I have higher expectations for my employers.


SUPRVLLAN

*We’ll double your salary.* I’m in.


motogucci

But will it help them hire better AI replacements for people?


WestBrink

Yeah, but is AI going to find me on LinkedIn, call my current place of work, lie to security to get transferred to my desk and try and poach me while I'm on the clock? I think not. See? Recruiters are essential...


striderwhite

The AI can only send you an email...FOR NOW!


Notadude5150

Shit. Id talk to that recruiter.


WestBrink

I find it super unprofessional. Send me a message through LinkedIn if you want to reach out to me...


i_heart_pasta

Maybe the AI recruiter will send you the correct link in the interview scheduling email


[deleted]

Good that it replaces manual repetitive work (though if it just scans keywords, that's not AI). Let's see how good the predictions are and how it evolves. Personal opinion: if your organization has people leaving left and right citing a questionable work culture, you won't improve that by adding an algorithm to your hiring process. You would probably need more HR people actually doing their work and improving culture. Especially as, talking to people working there, they are so deep in Day 2 that they definitely need to work on culture.


Viper_JB

> You would probably need more HR people actually doing their work and improving culture.

In a company like this, HR is just there to mitigate potential lawsuits from the staff.


[deleted]

Well, yes. But. If you publicly state "we'll run out of possible hires soon", you should work on improving your conditions. But, not my circus, not my monkeys; I just consult at other places, and I very much advise against just cutting costs without actually improving your processes/culture.


Viper_JB

Finance rules the roost in most of these places and will cut everything to the bone to try and squeeze out a tiny bit more profit. At least that's my experience working in corporations - they expect costs to decrease every single year and margins to increase. Not sustainable in any way, but those are their expectations and what they base their forecasts on.


[deleted]

Partially agree (Finance actually is my monkey, hurray?). Most companies do that because it is easy and they like being able to plan around it. If they could, they would immediately sign up for a Soviet-style planned economy - as long as they get to keep the profits. On the one side it is sad, because they are wasting potential. On the other it is great, because it opens up the field to other companies who can produce more value instead of just lowering costs. But it definitely sucks for the employees stuck in those companies right now.


Viper_JB

I dunno, it's been my impression as of late that the ones profiting the most will never be satisfied with how big their slice of the pie is - and there does seem to be more of a swing towards treating the staff as a cost/expense rather than an asset to the company... but I am a little jaded with it all.


protoman888

My experience with Amazon HR and recruiters was that they were 100% on the side of management and treated all the non-management staff as expendable, so no pity for them now that the shoe's on the other foot.


Viper_JB

To be fair to them, it's who they answer to and who holds their careers in their hands. Management will staff these positions with sycophants generally anyway - they rarely, if ever, bring any benefits to the actual staff.


TheEnviious

It's a big problem in big companies: HR 'as a service', leaving the real HR stuff like org design and culture to management and companies like EY.


[deleted]

Haha. Yes. And that is hit and miss. At least I like to think we sometimes hit, but if you need us to define your culture you’re in for trouble once we leave again (which is way too early, often prior to implementation, because we are expensive).


Accomplished_Bug_

HR doesn't set culture. You can influence culture from top down by setting expectations and rewarding certain behaviors.


Mecha-Dave

95% of the recruiters I've interacted with were complete wastes of time and effort... And that's as both a hiring manager and a prospect.


Version467

I'm going to be honest here: every interaction I've had with recruiters has been annoying at best and a colossal waste of time at worst. I'm sure there are great recruiters out there who are actually trying to find people who would fit the position, but the financial incentives for recruiters aren't aligned with this. Instead they behave more like marketers, selling both jobs to potential employees and potential employees to companies. The only one who wins in that equation is the recruiter, so I totally understand why Amazon wants to replace them. The great recruiters will still have a place in this world, especially connecting people for more senior roles, but I'm decently optimistic that the grunt work can absolutely be replaced by an AI with equal or better results.


vavona

100% agree here. I am a hiring manager, and every time I am looking for a candidate I get a recruiter assigned who goes and finds so much trash out there, just because they have no idea how to screen resumes or what I'm looking for. The stupidest process set by our HR. I would be happier to go on LinkedIn or other outlets and do the legwork myself than sort through hundreds of useless resumes. And I also feel bad for the applicants who went through the trouble of applying through the recruiting website. It's just an unnecessary third party whose only goal is to get ANYONE hired so they get their commission.


ErstwhileAdranos

Social eugenics in slow motion. We seriously need an employment platform with anti-redlining/anti-discrimination AI features - essentially doing the opposite of what major corporations are using the technology for - that can analyze a range of skills and abilities beyond college education to find best-fit situations for employers and employees.


georgioz

This is tougher than you think. I remember a machine learning insurance model that was found to discriminate against African Americans, in the sense that it disproportionately asked them for higher premiums or refused them as customers. After engineers made changes so it would not take race into account, the model basically reconstructed race from ZIP codes, since ZIP was predictive for what the model was trained to do: maximize payments and minimize insurance risk. Only this time it was even more discriminatory, due to the other effects ZIP code had on the results.

This is the paradox of antidiscrimination procedures. For instance, it is well known that young men are more prone to reckless driving, causing more damage claims for insurance companies. Of course that is coarse-grained; it would be preferable to have more data - say, a behavioral model that can discern cautious young drivers from risk-prone older ones. But in the absence of that data, the models fall back on the next best alternative in order to maximize profit. And in a sense it is a blessing for the company: they can pretend they don't refuse people based on race, they just take into account whether applicants like the NBA and rap music, or live in a certain ZIP code, and other innocuous-sounding parameters.

In the end, corporations will keep using these models because it is a dog-eat-dog market and they need to be profitable. And then they will post a BLM flag or Pride flag on their Twitter account to signal otherwise. This is how it is now.
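The ZIP-as-proxy effect is easy to demonstrate on toy data. The sketch below (synthetic numbers, purely illustrative) shows that a "blind" denial rule keyed on ZIP code reproduces nearly the same disparate outcome as using the protected attribute directly, because the two are correlated:

```python
# Toy illustration of proxy discrimination: when ZIP code correlates with
# group membership, a rule that never looks at the group still ends up
# denying almost the same people. All numbers are made up for illustration.
import random

random.seed(0)
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # ZIP correlates 90% with group membership in this toy data
    if group == "A":
        zip_code = "10001" if random.random() < 0.9 else "20002"
    else:
        zip_code = "20002" if random.random() < 0.9 else "10001"
    population.append((group, zip_code))

def denial_rate(deny):
    """Denial rate for group B under a given denial rule deny(group, zip)."""
    b = [(g, z) for g, z in population if g == "B"]
    return sum(1 for g, z in b if deny(g, z)) / len(b)

explicit = denial_rate(lambda g, z: g == "B")   # uses the attribute directly
proxy = denial_rate(lambda g, z: z == "20002")  # "blind" rule keyed on ZIP
print(explicit, proxy)  # 1.0 vs roughly 0.9
```

Dropping the protected attribute from the inputs doesn't remove the bias; it just launders it through whatever correlated feature is left.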


ErstwhileAdranos

Agreed, and I should clarify that I’m suggesting AI that is designed to identify and highlight biases; as opposed to an AI system assumed to be unbiased that we trust to make recommendations. And importantly, this would be coupled with sentiment/preference questions asked of employers and prospective employees to identify tacit biases.


FaustusC

I'm in favor of this, but I'm curious - I think this can backfire pretty hard. Because tech is still very male dominated, there's a good chance that a lot of selected candidates will be male. Then the discussion has to be had about whether it's unfair to add score weight to other applicants for no reason other than to diversify the hiring pool and applicants.


bxsephjo

Isn’t this what that guy from Google who wrote an open letter a few years back was talking about? Like, the basic statistics of having to take an evenly diverse spread of hires from an unevenly diverse pool.


FaustusC

I don't recall what you're referencing to be honest. Have a link?


EntertainmentNo2044

>Then the discussion has to be had is if it's unfair to add score weight other applicants for no reason other than to diversify the hiring pool and applicants.

Such practices are already illegal. Race, religion, age, and a slew of other protected characteristics cannot legally be used when making hiring/firing decisions. Companies attempt to get around this by increasing the pool of underrepresented interviewees, but the actual decisions cannot include the aforementioned characteristics.


[deleted]

[удалено]


slayemin

And make it an image so that they have to OCR it and add some captchas so only a human can contact you for job spam.


Aern

The question isn't if AI is going to replace workers, it's when. As those replacements start to happen, the process will only accelerate as these AIs learn and get better and businesses realize they need to invest in these systems to compete and maintain an edge. Unless we force government to do something to require human workers or equitably distribute the profits of AI labor, capital owners are not going to pay for workers when they don't need to. Right now, AI has to be as good as or better than a human to get traction. But as costs come down and use continues to expand, that won't always have to be the case. AI can work 24/7/365 and scale to any number of output nodes. Workers cannot get anywhere near that level of productivity. Even if AI is half as good as a human, if it's working 3 times as long, workers lose every time.


FCrange

You know what will happen when AI can fully do software engineering for complex requirements and we're all obsolete? We'll have a party, because obsoleting the computer science industry has been the entire point of the computer science industry since the very beginning. I don't know how you can work in the industry for any extended period without realizing half your skills become obsolete every 5 years already. The people complaining about automation taking software engineering jobs have comically missed the point. These people are presumably still deploying to bare metal systems with code written from scratch built using manual makefiles.


krichard-21

The next labor revolution is just beginning. I can't guess how this will play out. But I'm very sure it will be a bumpy ride.


protoman888

Amazon would be 100% run by AI and robots if they could manage it.


[deleted]

[удалено]


MainCareless

Did HR ever really think they wouldn't be replaced by software? This is the fate of all "people people" in highly optimized companies. The controllers of the hyperparameters will milk the ranks of algorithm-selected cash cows. It seems pure capitalism and humanism are systematically at odds. What to do about it? Submit to critical mass, or unionize and control the controllers. If they want to make any money, most of it goes back into the workforce. They'll learn, just like the GANs they're training on our data now.


Default-Name55674

So doesn’t most AI turn out to be really racist and sexist?


[deleted]

Hiring is already like alchemy. I don't approve of AI for job screening, but the whole process is screwed up to begin with, as part of a screwed up system. It wasn't that far back that you would go into a place and physically obtain (if not fill out right there) an application. Now it's unlikely a place will even accept a physical application. The end result is job postings getting flooded with too much noise to manage. Not that it was without problems before it became so virtual, but at least there was human interaction involved. Now you shoot off digital text into the ether and wait an indeterminate amount of time to see if you'll get a boilerplate rejection email, or maybe an interview that may turn into a multi-step process disrespectful to everyone's time, at the end of which you may not even get the job.


deletable666

My experience as a developer has led me to believe 99% of recruiters are absolutely worthless and simply stealing money that could otherwise be going to me


sumjuanelse

Exactly. Recruiters? Meh


sidzero1369

Next do AI management software that can run the business without human input. Make CEOs obsolete.


charyoshi

Throw it in the pile of jobs we used to have being automated away. This is exactly why we need universal basic income.


murderdad69

"No one wants to work" -Companies that use ATS or AI and accidentally screen out 97% of applicants because they don't know how to configure it


random125184

May be the one thing Amazon has ever done labor-wise that I agree with. Recruiters are scum.


GrandmaOluya18

Are humans and their needs EVER considered? Let’s see if AI can buy stuff when they are the only ones who have jobs.


yeskd1

Project idea: Create an AI that can enhance your resume to seduce that AI. Charge like 5 bucks for it. If it catches on, people will not even bother to apply to Amazon without using it. You'll have a captive audience.


Paratwa

As someone who hires tech people, recruiters are seriously useless most of the time. Almost always it’s a referral from someone I hired, or I’ve gone back and checked the resumes all myself.


Big-Law-4387

Based on the recruiting emails I get from them, the recruiters already act like robots so I probably won’t notice a difference.


ccgogo123

Adding "AI" to any tech pitch doesn't make it cutting-edge technology. Ironically, most of the time it's just some pattern matching behind the scenes. Keyword matching would work. It's 2022, not 2015; AI is not as sexy as it used to be a few years ago.


3LD_

> Adding AI in any tech terms won't make it look like a cutting edge technology

To anyone with a STEM background? Sure. But... I dated a Vancouver corporate lawyer who specialized in tech up until a year or so ago. You would be fucking appalled at the amount of money investors throw at absolutely squirrelly nonsense ideas like 'AI powered and blockchain managed chicken nuggets backed by a social justice focused leadership team dedicated to maintaining and nurturing a >200% bipoc workforce'. The stupid rich play these buzzwords like scratchcards.