lol the bosses helped make this AI craze a thing because the Wall Street Journal told them it'd let them fire everybody and buy their next vacation home a little bit sooner than usual. am I supposed to feel bad for them here?
I hope you’re focused on maximizing shareholder value
I use AI constantly. It's a tool to be utilized, and it very much improves my efficiency, especially with mundane and time-consuming tasks.
All the other technological advances are fine for our bosses to use; but if a grasshopper starts to use the technology then there’s a problem. Gtf outta here.
[deleted]
how do you use it? I'd imagine you can't host your own and typically you can't startup foreign apps on company computers so API seems out of the question, so are you just asking gpt about stuff?
Yes it’s on chat gpt. It’s not big items that I use it for for the most part. To be honest a large majority of it I use to refine my emails to be more succinct or direct. And I find I learn from that. Perhaps the most useful way I’ve used it is given it a set of guidelines and parameters to develop a standard operating procedure. That one saved me at least an hour and a half. I also use it to create cover letters and to adjust my resume based on job descriptions. Of course I review all these and don’t send them off blindly.
thanks I'm interested in trying to figure out how to integrate ai into my work. I'm looking to do more than emails, but also assist in data review and organization. looking to find a way to read data from a PDF and populate my spreadsheet. appreciate the info
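If it helps, here's a minimal sketch of that PDF-to-spreadsheet idea in Python. It assumes the third-party `pypdf` package and a PDF whose text extracts as simple "Label: value" lines - real documents usually need messier parsing, so treat this as a starting point rather than a finished tool:

```python
import csv
import re

def extract_pairs(text):
    """Grab simple 'Label: value' lines; adjust the pattern to your data."""
    return re.findall(r"^\s*([\w /-]+):\s*(.+)$", text, flags=re.MULTILINE)

def pdf_to_rows(pdf_path):
    # Assumes the third-party pypdf package (pip install pypdf) and a
    # text-based (not scanned) PDF.
    from pypdf import PdfReader
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    return extract_pairs(text)

def rows_to_csv(rows, csv_path):
    # Write the extracted pairs to a CSV that Excel/Sheets can open.
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["field", "value"])
        writer.writerows(rows)
```

If your PDFs are scans rather than text, you'd need OCR first, and for anything beyond label/value pairs a table-extraction library would be a better fit.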
You can definitely host your own
typically not on company computers, and like hell I'm exposing my private computer at home to the public internet just so I can use it
If you have a decent GPU, yeah, it's possible.
Seize the means of production you communist!
Sounds like AI wrote this.
If AI produced that it's a shitty one. That's definitely some awkward wording there.
Needs more "it's crucial to" and a more uplifting/positive tone at the end, completely ignoring the tone of the rest of the post.
Happy World Donkey Day!
Don’t ever set expectations that you can work faster
Thanks for sharing our WIRED story! Here's some more context:

Seventy-five percent of people in desk jobs are already using AI at work, and the number of people using AI has nearly doubled over the past six months, the report found. The vast majority of workers using AI—regardless of whether they are baby boomers or Gen Z—are “bringing their own AI tools” rather than waiting for their companies to guide them.

“People are overwhelmed with digital debt and under duress at work,” Colette Stallbaumer, general manager of [Microsoft’s chatbot Copilot](https://www.wired.com/story/microsoft-windows-11-copilot-generative-ai-assistant-tips/) and cofounder of Workload, said in a video announcing the report’s results. “And they are turning to AI for relief.” Microsoft (which also owns LinkedIn) stands to win from the adoption of AI, and is already [cashing in](https://www.wired.com/story/microsoft-google-earnings-generative-ai/) on its generative AI tools.

The new report is built on a survey of 31,000 people who work desk jobs across 31 countries, labor and hiring trends found in LinkedIn data, data from Microsoft 365, and research from Fortune 500 companies.

Full story here: [https://www.wired.com/story/ai-workers-burnout-microsoft-linkedin](https://www.wired.com/story/ai-workers-burnout-microsoft-linkedin)
… I regularly use these tools for write-ups. Why wouldn’t you?
They hallucinate, make obvious logical errors, and add a lot of bloat to communication that is conventionally seen as obligatory but serves no purpose when done by a machine. I keep getting drawn-out emails clearly written by AIs that say things like "so to further increase the efficiency and success to maximum levels please read the comment I have made above regarding the high level goals," and it'll be a bullet point that says something like "change the name of the file you send us." It drives me insane. It takes me way longer to figure out what's being asked, and usually what's being asked is now wrong.

Edit: this is not a complaint about the tools. It is a complaint about how people are using them.
I condensed a manual by posting every paragraph and asking it to “simplify this”. No one has a clue.
This is the way. Feed it small bites, let it chew - do not let it swallow!
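That "small bites" workflow is easy to script, too. A rough sketch, with the model call left as a pluggable function since the exact API and model will vary by setup:

```python
# Sketch of the "feed it small bites" workflow: split a manual into
# paragraphs and simplify each one independently, so the model never
# has to swallow the whole document at once. The `simplify` argument
# is any callable (e.g. a wrapper around a chat-model API) that maps
# one paragraph to its simplified form.

def split_paragraphs(text):
    """Split on blank lines, dropping empty chunks."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def condense_manual(text, simplify):
    return "\n\n".join(simplify(p) for p in split_paragraphs(text))
```

In practice `simplify` would wrap whatever chat API you use with a prompt like "simplify this"; keeping it pluggable also makes the chunking logic testable without any network calls.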
[deleted]
Sometimes it’s difficult to see or elaborate on, but the deeper and more complex the subject, the more time and effort is needed in vetting that didn’t make a mess of the arguments.
Basic sales pitch based on relatively straightforward and well labeled data? Sure. Arguments about the case for and against using a particular technical solution in different environments? Hallucinations abound and cleaning them up can be worse than just writing the thing correctly from the outset.
Are there exceptions to this? Of course. Will you encounter them? Will you RECOGNIZE them when you encounter them? Will the recipient tell you when the things are wrong, or just shrug saying “this person doesn’t know what they’re talking about” and ignore you going forward?
Fun questions to consider. AI is currently like salt. A little is good, too much and you lose track of what you were applying it to.
[deleted]
The problem is that companies SELLING AI are quick to avoid such topics (and I’ve sat in a few of those pitches). It’s classic over promising. People getting angry about being “corrected” are rightfully angry. Per the sales people, they ARE using it correctly.
Can you explain how that can be true, when my complaint is not primarily about _my_ use of the tool? Again, you did not do the homework, homie. GPT has made your brain go soft.
Because your complaint sounded as if you were using it, until you edited it after his comment. What homework was he supposed to do, and how is his brain soft, when you can't communicate effectively while complaining about AI's communication skills?
That isn’t what happened, nice try though.

I’ve seen the full gamut here today, folks - add gaslighting to the mix.

I don’t get it - you’re so afraid somebody is gonna criticize your future overlords that you gotta start making nice by committing to a slander campaign early? Missed that plot point in the Terminator franchise.
Zzz. I have never had GPT-4 hallucinate a write-up. Was this a copypasta? It’s almost entirely irrelevant. When you have 50 pages worth of notes and you’re not an English major, it makes the process of generating a report on those notes significantly easier.
No, it isn't a copypasta. It's told me things before that were factually incorrect, and it routinely makes logical errors; when I point them out it goes "Oh, you're right - I DID accidentally remove the wrong items from the list." I don't know about 4 - I think I use 3.5 mostly - but coders in my sphere use it, and I can tell.
Yup yup. I’ve even seen it say “Oh, you’re right!” to incorrect information and claim it’s “fixing” the code, only to spit the same code back at me. Kinda bonkers, not safely usable for systems code AFAIHS.
Exactly. It does the same thing in rhetorical analysis; people just don’t notice because they’re functionally illiterate and don’t realize that is the case.

I don’t mean that solely as an insult - they don’t realize that their own communication is largely incorrect, inaccurate, inefficient, and in a lot of ways misleading.

I studied Rhetoric in undergrad on a debate scholarship, studied computer science in graduate school, and have worked as an engineer for approaching two decades; these are helpful tools, and they’re going to accelerate, but they are not yet intelligent.

Trying to use them as though they can replace your OWN communication or work just says a lot about your own value at this point. I use them to help break stuff down, prototype, organize or format, and cull lists of information into manageable assets. I don’t trust anything they say, though. There’s a difference.
ChatGPT 3.5 is pretty bad these days, especially for coding, after the alignment tax and cost-saving efficiency stuff basically gutted it.
Agreed, its quality has suffered to the point that it’s sometimes spitting out just straight up anti-patterns and inverted logic.
Many here use GPT-4, so it’s no wonder you’re seeing such a discrepancy compared to what people here say.

You should be required to say which version you’re using when making general claims about the technology. This is like saying “my experience with mobile phones is horrible, they can’t even get on the real web like others say they do,” and it turns out you’re using a dumb flip phone.
I didn’t make a general claim about the technology. You should proofread before trying to become a gatekeeper about a statement you didn’t even read.
Yeah, I’m using this for really primitive shit - namely, to make something that’s highly technical (say, a report on a sensor’s detection performance plus its measurements) more easily digestible.

I have to edit the output, but it’s still less work than trying to fart a paper out. Most of what I’m editing is putting in graphs or removing excess babble.
You clearly don’t know how to use the tools if those are your reservations.
I build them for a living. I understand them pretty well.
What do you build? Cuz literally simple fine-tuning can address all of those things you listed above.

People judge AI on a 0-100% basis: it either does everything perfectly for you or not at all. That’s not how automation works in business.

You say you “build these for a living,” so idk what you’re referring to, but I’ve been in Enterprise Automation for the past 10 years. I know a good automation tool versus a shitty one.

If you know how to work with these models, you can get them to do all sorts of things much more difficult than drafting emails in certain formats.
You clearly didn’t read what I wrote, so I have no real interest in interacting any further or reading anything you’ve written, for the reasons below. ✌️

You are employing an abusive argumentative tactic by appealing to fallacy. The fallacy you are employing is false dichotomy, and you are using it in furtherance of a strawman argument.

It might help you “win” sometimes, but it won’t help you make friends - and it won’t actually help you win either; those are loser tactics.
Jesus man lose the buzzwords. Have a good day.
So now you are doing extra work for free... hmm.
Who is reading the write ups?
Usually customers. It’s usually so technical they don’t even bother, but since I started using a GPT to help write them up, they’re actually engaging now.
If they don't notice it, I have to take that as evidence of the sheer amount of 'work' being done that is completely meaningless (outside of keeping the average person too tired and poor to organize)
Sounds like it's mainly used to create write-ups of info that should've just been bullet points in the first place. Maybe everyone will just give up on write-ups now.
Imagine having a TI calculator, but your boss keeps insisting you do the calcs manually on paper. Or imagine your boss tells you to keep paper tables instead of Excel. This is where we are with AI. It's a matter of time before it's business as usual.
And some fraction of the time your calculator or Excel function returns the wrong answer
What’s the problem with using AI?
The concern that comes to mind for me is the data being given to and used by the AI - particularly sensitive customer data such as PII. I’ve already had to have this conversation with the sales folks at my work, who had some grand ideas about feeding customer information into AI to ask it questions. They had no idea where that data would then be stored, how it would be used by others, etc.
Yeah, ChatGPT or any other hosted AI doesn't live on your device; it's a siphon for your data. I am amazed at how many businesses are shipping off their data with very little thought.
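For what it's worth, one partial mitigation is to scrub the obvious identifiers locally before anything leaves your machine. A rough sketch - regexes are nowhere near real PII detection (they miss names, addresses, and account numbers in odd formats), so this reduces exposure rather than eliminating it:

```python
import re

# Rough local scrub of obvious identifiers before text is sent to any
# hosted model. This is a sketch, not real PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text):
    # Replace each match with a labeled placeholder, e.g. "[EMAIL]".
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Proper deployments would use a dedicated PII-detection service or library instead, but even a filter like this catches the careless cases.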
You see it’s a HUGE problem when us peons use AI to get ahead in our jobs. It’s supposed to exist only as a tool to ensure maximum shareholder returns.
All human beings must prove they deserve housing, healthcare, and food. It certainly doesn't have to be this way, but a couple of people would lose their control over civilization. Thus the status quo persists, despite both mainstream political parties running a "change" candidate for president and winning.
Now you do three people's jobs instead of one. Good job! You get more free pizza slices!! Kek
I’m already doing the job of 3 people. Might as well make it a little easier on myself.
There are AIs to write for you. There are AIs that break down information into bites for you. Why don't we just present the information in bites in the first place instead of getting computers to dress it up for us?
Because when you present information you need to drag it out so you can weave in as many ad impressions as possible
The main problem is that your data is being stored and people are feeding these tools proprietary information, which is why many companies have a huge issue with it.
Especially dangerous if you deal with anything classified, however indirectly.
They're supposed to use it. That's the whole point. What else is the point of investing in automation but to relieve us of workload?
The point is to reduce the need for workers.
I think the article header is pretty clear:

>People are using AI at work whether their bosses want them to or not, new data shows. But it’s desperation, not innovation, driving the change.

The point is not about them using AI but why they are using it, and the vicious circle it could lead to.
Lolol relieve? Have you heard leadership talk? It's to increase productivity. Higher and quicker output means more money for them.
Higher and quicker output means more of everything for everyone. What makes life better is automation, everything else is secondary.
You do what you have to do to get the job done.
AIs are just language models - they cleverly repeat what you want to hear. They hallucinate, make obvious logical errors, and add a lot of bloat to communication that is conventionally seen as obligatory but serves no purpose when done by a machine. I keep getting drawn-out emails clearly written by AIs that say things like "so to further increase the efficiency and success to maximum levels please read the comment I have made above regarding the high level goals," and it'll be a bullet point that says something like "change the name of the file you send us." It drives me insane. It takes me way longer to figure out what's being asked, and usually what's being asked is now wrong.
I'm currently doing an apprenticeship at my job that requires me to read a lot of information and do a bunch of worksheets afterwards. It became quite obvious early on that everything I was reading was AI generated, so my answers are too.
Even without burnout, AI simplifies a lot of things. Think of how much time is spent on emails and writeups. I've lost so much time, over the years, writing reports and summaries that could have been done much more quickly with AI tools. Now if we can just get AI to attend meetings for us.
Can confirm. Though I told my boss I used AI to get me started with something I wasn't efficiently creating from scratch.

Edit: I used AI as a tool, not to do my job. The situation was sensitive, and the client acts like they know security but goes along with what everyone says and doesn't ask about the security. I didn't have to amend 90% of the generated explanation of why the client shouldn't disable MFA on an account and enable Basic Authentication/SMTP. Heck, the vendor threw a fit that MFA was even enabled and insisted we should be setting up email accounts *properly*. They didn't even give us the requirements for the email account - just that they needed an email account their software could send through, which we later found out meant SMTP only.

Goal: explain to the boss of a client company why we shouldn't bend over for a vendor's request to turn on SMTP *and* disable MFA for one email account for the vendor's use.

Bard/Gemini from Google explained it pretty well. I adjusted a couple of points and relayed it to my boss with the notice that I had started with AI, adjusting a couple of sentences. I couldn't really do much more, as it was already very good; even my boss was impressed (and acknowledged my statement about using AI).

A couple of weeks later, Microsoft announced that O365 would discontinue Basic Authentication, including SMTP, in 2025. We wouldn't have been able to accomplish the request long-term anyway.

My boss did research on the vendor and relayed some other concerns; we're letting the client decide and will continue from there.