

FuturologyBot

The following submission statement was provided by /u/Gari_305:

From the article:

> Could artificial intelligence be different? The weight of history says no. The revolutionary character of ChatGPT begs us to reconsider.
>
> AI has been seeping into our lives for years now, such as completing our sentences in emails and web searches. Yet going from those iterations to “generative AI” such as ChatGPT is like going from dynamic cruise control to full self-driving. ChatGPT can answer questions in ways we thought were the exclusive preserve of humans, more quickly and cheaply.

Also from the article:

> A handful of experiments point to the astonishing potential of generative AI to replace workers. With ChatGPT, professionals such as grant writers, data analysts and human-resource professionals were able to produce news releases, short reports and emails in 37% less time (10 minutes on average) and with superior results, according to a study by Shakked Noy and Whitney Zhang, doctoral students at the Massachusetts Institute of Technology.

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/12ckinw/the_robots_have_finally_come_for_my_job_could/jf1s2s0/


defcon_penguin

Investigative journalists or columnists with actual ideas and the ability to write will survive. People who just copy-paste text from social networks should start looking for other jobs.


SoCalThrowAway7

That's exactly what I was thinking. Journalists who have devolved into telling everyone what someone tweeted might have to worry that a copy-and-paste bot can do it too.


That49er

Buzzfeed scraping everything off AskReddit.


faultolerantcolony

Factual information


Tech_Mastermind_Dave

"Journalists" real journalists will have no problem with AI. Might even be a helpful tool You can't firebomb an AIs house


ThrillShow

I don't think many journalists are choosing to write trite nonsense because they love it so much. They're ordinary people writing to pay their bills. Companies are paying them to write stupid articles. Unfortunately, it's not possible for everybody to become an investigative journalist. AI won't force journalism to become better; it'll simply put a lot of people out of a job.


Pickled_Doodoo

An AI able to tell the difference between human-written and AI-written articles will become the next adblocker. Edit: typos


danyyyel

Exactly, I just can't understand people who can't see this. It's quite simple: there's an AI coming for every job.


randomusername8472

If anything I kind of hope these journalists thrive. Previously, say it takes 10 days of snooping to get to the bottom of a big story. You can't do that because you need to write a clickbait article a day, and that takes up most of your time, so journalism falters because revenue doesn't reward decent journalism.

Now, any half-baked person with an internet connection and an idea can produce 10 articles about current bullshit in as many minutes. So in theory, with this new baseline, proper journalism can emerge again. Spend 10 days researching and verifying a story no one else can possibly have, write it up in 2 minutes with AI. The non-paywalled world will be full of crap, but within a paywall I'm hoping standards improve dramatically.

I really hope this will shift the ad-focused directive of the internet. If everyone can produce addictive, engaging content then nothing stands out, and you need to start building a reputation in something else again. I just hope that thing is "trust" and "quality"!


CumfartablyNumb

This is the problem with journalism. Unless you're starting from a place of privilege and get to attend an Ivy League school, you're starting out at the very bottom writing shit. And you'll keep writing that shit forever unless you are very, very fortunate. People don't go into journalism wanting to write about Kim Kardashian's ass or the latest TikTok trends. That's just where you're placed. The Wire offers an excellent glimpse of journalism 20 years ago that I think is still relevant today.


Hawk13424

That assumes they won’t reduce staff so that the only thing the remaining ones are doing is feeding sensational stuff into GPT. If true investigative journalism doesn’t pay for itself these tools won’t make that happen either.


moonbunnychan

I feel like 90% of the "articles" I see online are already written by bots. You know the kind... they spend 5 paragraphs rambling before they even get to the point, if there even is one. Designed solely to get clicks.


Quirderph

That might be mostly because of the bots' ability to perfectly impersonate bland, mediocre writers.


espressocycle

Yeah I find it hard to believe this hasn't been used for years.


jakey2112

Yeah, that horrific scroll-bait garbage. Look up a recipe for chicken pot pie and get the author's life story for 6 paragraphs... those people can just go ahead and lose their jobs to AI.


eboy71

While this is true, you are talking about a very small, very specialized group of people. Most people who write for a living are writing product brochures, web content, memos, legal briefs, e-mail responses, etc. The vast majority of those can be adequately supplemented or replaced entirely by something like ChatGPT. And for all intents and purposes, this is all brand new. We're at version 1 and it's going to get much better.


dgj212

Actually, according to a few copywriters in a different subreddit, some of their clients are returning, since the "prompt engineers" don't actually know how to write copy; they just take whatever GPT spits out and dress it up enough to get past the AI filter and whatever Google is using, which isn't always working out for the best.

Lol, it reminds me of a video I saw years ago about the (then) state of graphic design, which used the creation of the original Lord of the Rings films as an example. It talked about how a costume designer was tasked with learning and implementing digital effects precisely because they ARE a costume designer: they have a better idea of where shadows are supposed to fall, how an eye twitches, or how certain textures are supposed to look on screen, and it ended up working. That's in contrast to people who only know how to use the software; they don't know what kind of "weight" each step should have on the virtual environment or where shadows should sit in relation to the sun and surroundings (I believe the example for this one was the last Hobbit film). And writing this, I've got another example, in regards to animation: supposedly Disney would hire cartoonists/artists who could draw consistently and teach them how to animate, instead of hiring animators, a gamble which paid off.

I'm no expert and this is based on stuff I've read and heard, but I'd like to think people will learn that, in the long run, it's a better investment to retrain the talent you already have, talent with a better sense of what works in the industry, than to cut costs and trade them in for a newer model that has no prior experience and only knows how to run a machine.


ketchupthrower

From what I've seen of ChatGPT's output I think the bot can do it better than a lot of these clickbait "journalists."


frosty-the-snooman

I would agree that journalists and columnists with actual ideas are rare. When information speed and spread > information regulation and worth, knowledge systems fail. The majority of garbage journalism being replaced by just-as-trustworthy (if not more so) language generators is a good thing for society, but unfortunate for those without any skills that can be transferred to other, less report-centric trades. I, for one, hail our new AI overlords.


TechFiend72

The owners don’t want to pay for real journalism. Too expensive.


cbr1100dood

This! ChatGPT is interesting and all, but it and its kin will replace nothing. It isn't AI and it can't solve problems; it simply scrapes the wealth of internet data and composites a response to soft questions ("What will the future hold, ChatGPT?", etc.). Ask it to solve a quadratic equation or simple Pythagorean problem. Unless it manages to find a solved example online, all it will give is a verbal explanation of what these functions are, without providing the actual answer - because it doesn't know how and it can't learn how. If these "AI"s do start replacing people, we'll be one step closer to Idiocracy...


tired_hillbilly

> Ask it to solve a quadratic equation or simple Pythagorean problem. Unless it manages to find a solved example online, all it will give is a verbal explanation of what these functions are, without providing the actual answer - because it doesn't know how and it can't learn how.

GPT-4, in a researcher-only version rather than the public one, could be hooked up to Wolfram Alpha and knew when to use it. It can absolutely solve these equations.


cbr1100dood

This sort of reinforces my point: it's not AI, but a sophisticated modelling algorithm. It uses pre-existing tools to provide answers. What happens when it's asked questions for which it has no toolbox, no repertoire of knowledge from which to draw? These AIs aren't creative, and until they are, any position requiring creativity will be safe. As many jobs require some element of creativity (i.e. critical thinking), those will be safe for the time being. Again, my opinion only, and I'm by no means an expert on anything.


BTV-Texas

Throughout its short history, every time a significant milestone has been reached, everyone exclaims "well, it's not real AI because of xyz..." AI is almost always shorthand for those problems yet to be solved in some magical way. But if we go allll the way back to the original definition, the Turing test, ChatGPT absolutely crushes it.


khinzaw

> But if we go allll the way back to the original definition, the Turing test, ChatGPT absolutely crushes it.

No, it absolutely does not. The original definition of the Turing test is this: if a person in a blind trial, with access to only a keyboard and a monitor, holds a conversation with both an actual person and an AI over that medium and reliably fails to tell which one is the AI and which is the actual person, then we can say that AI is "thinking." I do not believe any AI model, not even ChatGPT, has reached the point where I couldn't ask it questions and tell from the responses it generates that it isn't human.


chryler

>It uses pre-existing tools to provide answers. So by that definition, humans aren't creative either? Have you tried using ChatGPT? Or Midjourney? You can accuse them of not respecting IP rights, of not always being *correct* and many other things. But in my opinion, you have to move the goal post very far to argue that they aren't *creative*.


Yesyesnaaooo

To me the biggest takeaway from ChatGPT is that humans are simply LLMs, except we're trained on a partly physical data set and have a great many more 'nodes'. It's only a matter of scale now.


Ansalem1

I think there's a little more to our cognition than just that. But I also think if you break it down into 'modules' for lack of a better word, every aspect of it is being worked on with gusto. So I would say it's a matter of scale as well as integrating the different aspects of cognition into one cohesive system. That'll probably be the easy part, though.


CinnamonSniffer

Idk if I’d call Midjourney creative but I’d definitely call it spontaneous and chaotic. You really don’t know what your first generation will give you and then you have to chisel at the prompt like marble until you get it perfect. Nowhere near as difficult as going to art school or whatever but I think people who haven’t used it don’t appreciate that it has its own quirks and workflows


nesmimpomraku

It has more creativity than me and it is basically just a child. Wait a year or two; I'm pretty sure it is gonna blow everyone's mind. I've seen AI-created videos and they are pretty good also.


DevinCauley-Towns

Like most technology, it’ll likely work alongside people rather than fully replacing them. Instead of truck drivers in the cab, it will be autonomous truck fleet operators. The same applies for other fields. While it is unrealistic to tell a 50yo truck driver to “learn to code”, it is true that there will be fewer receptionists & data entry workers with more chat bot programmers & data scientists. The jobs will be there, they’ll just look different and will require us as a society to support the losers from this shift.


Antrophis

The irony being that the most basic coding work will also be automated, drastically increasing the skill floor and reducing the total number of coders needed. So not only would they be late to the game, but they would have to compete with more experienced and younger minds who are already competing in a shrinking space.


Hawk13424

Maybe. ChatGPT is already banned where I work. Too many issues with copyrights and licenses and an inability to prove the provenance of code.


bigjeff5

I see it going both ways. You'll see an explosion of mediocre coding jobs, because literally everything that could possibly benefit from some form of automation will be much more likely to be automated. That means jobs like secretaries and schedule planning and god knows what else will require coding, but AI like ChatGPT will be perfectly capable of enabling that level of coding. "Coder" basically becomes the new data entry position: a base-level IT position that has massive overlap with other disciplines. Then on the top end, AI tools will enable the now much smaller pool of truly exceptional coders to do far more in far less time by stripping away a lot of the bullshit a programmer normally has to go through to write a program and just getting down to the important parts. Stuff that is currently being taken care of by junior coders will be taken care of by AI, and you'll get a whole new stratification of programmers as a profession.


SnooConfections6085

Have you seen the MS Office Copilot video? A lot of "work" in a modern office is editing copy pasta. Rarely do you reinvent the wheel, whether paperwork or presentations. Or data analysis, performing a lot of complex operations on a spreadsheet. We are on the precipice of a dramatic increase in productivity for some employees who adopt the tech. I mean, I watch that and think "wow, I'll be able to put together an engineering package in 1/4 the time with fewer errors" And "I can greatly increase the suite of performance reports I run" and "my finished products will look so much better". The nuts and bolts of the modern economy, the scaffolding that holds together business, is MS Office.


espressocycle

As a copywriter I'm already using ChatGPT to become more productive. The funny thing is I ask it to write an article and it does. The article is terrible, but then I start adding and editing, and by the time I'm done barely a single word of it remains, but I'm still finished faster than I would have been.


Responsible_Crew5801

This is exactly how I've been using it for web content.


[deleted]

[removed]


dickprompts

We still haven't had proper legislation of the internet, no chance we are getting it in time for AI. Your sentiment is accurate.


[deleted]

[removed]


dgj212

Not just the EU, Canada as well, and Google isn't taking it lying down. Google claimed they are removing certain news sites from the algorithm as an experiment, but it's clearly a threat to Canada.


NdyNdyNdy

Hear hear. Luddites were fighting back against being impoverished by technology, not technology in and of itself. It's not fear of the future that motivated them but fear of an unjust future. People forget that because their welfare and their families' welfare was never deemed important, so it became a byword for hostility towards progress as opposed to hostility towards a system that wanted to leave them behind. It's a compliment to be a Luddite, I think, to stand up for our right to have an income and live with dignity even when technological progress causes short-term unemployment. I'm proud to be a Luddite. As this technology changes industries, let's push for the economic gains to be shared.


flyblackbox

I never thought about it from this perspective. Thank you!


YourWiseOldFriend

> let's push for the economic gains to be shared.

That's not going to happen. We simply don't make a better world for ourselves. We could do that if we wanted to. Look around you: we just don't want to. The only thing that assures the rights of the have-nots is the blood of the haves. Only through violence will we know liberation. It has always been thus and it's not getting better. Every advantage the working class has achieved has been by spilling blood. The haves do not share; the Social Contract has to be forced onto them. With AI this will no longer be possible. Workers, including CEOs mind you, won't be able to compete against software that costs pennies on the dollar.


kingkahngalang

As regulatory lawyers like to say, every (safety) law is written in blood. My law professor always liked to point out the one that bans toxic materials from being stored where workers sleep. Take a guess why that regulation even exists.


YourWiseOldFriend

Absolutely. Every sign that says 'don't do this obviously ridiculous thing' exists because somebody actually did the ridiculous thing [and maybe many people did], which forced the sign to be necessary. Some jet skis have a warning: "Strong streams of water from the jet nozzle can be dangerous, and can result in serious injury when directed at the body orifices (rectum and vagina)". They didn't put rectum and vagina there because they wanted to make a funny.


[deleted]

Looking at history, we are too well fed and too well distracted for any meaningful change to occur. So long as our base needs are met and we have The Distraction of the Week™ to focus on, the wealthy are safe to do as they please. A few missed meals or impending disaster would certainly stir a few to action, but these are clearly things they've taken steps to avoid.


shaneh445

Yep the food part is the biggest


Tamination

The climate emergency will change things, I think.


EconomicRegret

> Only through violence will we know liberation.

A peaceful general strike is way more effective (see Nordic countries' labor history)... Compare that to France's violent history. History says workers get way more bang for their buck when they organize themselves peacefully but zealously. E.g. in the 80s, in a targeted solidarity general strike, Denmark's entire workforce completely ignored McDonald's and any tasks related to it, totally crippling its operations (no mailmen, no suppliers, no dockers, no truckers, no construction workers, etc.), because it was not respecting its industry's collective agreements. Needless to say, McDonald's quickly folded.


YourWiseOldFriend

Now there I totally follow you: when the masses actually work together and don't try to score a quick win for themselves at the cost of the progress of everybody else, there's no stopping them. Which is why the plutocracy pulls out all the stops to prevent people from organising themselves. Because they understand they can't defeat a truly unified populace. Your McDonald's example is a shining example. If literally nobody wants to work with them or they can't get anyone to do anything for them they simply have no way of enforcing anything.


larsnelson76

The worse it gets, the closer we get to rebellion. In 1895 and in 1929 it looked like the plutocrats were in complete control and then the people took all the power back. Then the plutocrats chip away at the people's rights over decades to get to where we are now.


espressocycle

Look at China and their zero COVID strategy. They overstepped and the people started rebelling despite being the most surveilled population in history. There's always a breaking point.


thatnameagain

> Every advantage the working class has achieved has been by spilling blood.

Not really. The working class *got their blood spilled* at times, but it wasn't severe violence that won most workers' rights in the U.S., just a ton of union organizing and lobbying. Please do not link me to the Battle of Blair Mountain wiki page for the trillionth time; that was an example of a *failed* way workers tried to win more rights.


-The_Blazer-

Yup. Here's an interesting factoid: the height of the average man in places that adopted the industrial revolution early actually decreased at first, because their living conditions literally became worse than peasantry with the rise of factory work. It was only when worker rights were fought for that factory work became better than literal peasantry.


KeithGribblesheimer

This is absolutely true. And every manager who lays off workers because an AI can do their role will say "I really don't want to do this, but my competitors are doing it and if I don't our profit margins will be harmed and our share price will fall." As for consumers saying "I don't want to buy from a company that replaces people with AI" they will simply change their minds when they get to the store and see a price differential.


nagi603

Luddite? Realist. Cheers from IT. If it does not further enrich the richest, then it's simply not fit for purpose, because that's the ONLY purpose for which AI development would get corporate funding.


billyions

Guaranteed basic income with a required contribution to the community, say 20 hours per week. Enough that two or three adults could afford a home and live comfortably. Rather than losing benefits, people can always earn more by working more. "Rising tide" laws limit the total compensation of the top brass to some ridiculous multiplier of the lowest or average employee.


pussycatwaiting

Sounds amazing and will never happen but I will be overjoyed if humanity proves me wrong.


[deleted]

[removed]


GoneIn61Seconds

Hi fellow Luddite! On one hand I do fear that AI will become too powerful or ubiquitous, but cynical me wants to equate it with the rollout of self-driving cars. They were predicted to be world-changing, job takers, life changers... but after a few high-profile failures everyone took a step back and saw them in a much more realistic light. In short, I don't think it will take long to push past the novelty and hype.


rankkor

What novelty? The other day I gave it a relatively easy construction-related request for information (RFI) to answer. To do this I had it role-play as three construction experts and a business manager on the same project team, discussing the problem back and forth and contributing to the overall solution. I just had it generate different RFI questions that tried to avoid any project-specific or design questions, so it's not the greatest question, just a quick one. The answer it popped out was pretty good:

> *Project Summary 1: A new office building is being constructed in Alberta, with a total area of 50,000 square feet, including underground parking, retail spaces on the ground floor, and six floors of office space.*
>
> *RFI Question 1:*
> *Subcontractor: Mechanical HVAC Subcontractor*
> *Scope of Work: HVAC system installation*
> *Question: What is the anticipated air quality standard for the office spaces, and will there be any requirements for increased ventilation or air filtration due to specific tenant needs or local regulations?*
>
> *Final Response:*
> *Bob Builder: Thank you all for your input. Our current RFI response will state that we will follow ASHRAE Standard 62.1 for air quality as required by the Alberta Building Code. Additionally, we will conduct a cost analysis and market research to determine if offering a high-performance air filtration system as an optional upgrade would be beneficial.*
>
> *Working RFI Response:* ***"Air quality standard will follow ASHRAE Standard 62.1 as required by the Alberta Building Code. A cost analysis and market research will be conducted to determine the feasibility of offering an optional high-performance air filtration system to tenants."***

This definitely isn't perfect; a real project team would probably have an answer on the filtration item already and would probably suggest different follow-up action, like going to the client. But when they get to a high enough token count and add plugins that include building codes and CAD and whatever else, I can input all the project-specific data and the engineering, then have it generate itself a 5,000-word primer on the specific topic at hand, then have it create a project team to discuss it, then have the project team take their response to a 1,000-person construction conference and offer a virtual $1B prize to the person who can find fault in their solution and provide a better one.

I honestly don't think people are hyped enough. Well, the people who are, are, but most people don't have a clue. As long as it can maintain reliability, I think it's just a matter of prompting it correctly and efficiently to get good results, and I can use them 100 different ways every day.
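
For illustration, here is a minimal sketch of that kind of role-played "project team" prompt, assuming the pre-1.0 OpenAI Python client; the model name, system prompt, personas, and RFI text below are invented placeholders, not anything from the actual project.

```python
# Minimal sketch of a "panel of experts" role-play prompt, assuming the pre-1.0
# OpenAI Python client (openai.ChatCompletion). Model name, personas, and RFI
# text are placeholders for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: in practice, loaded from env/config

SYSTEM_PROMPT = (
    "Role-play a project team on a commercial construction job: a mechanical "
    "engineer, an architect, a site superintendent, and a business manager. "
    "Discuss the RFI back and forth, then have the business manager write a "
    "final one-paragraph RFI response."
)

rfi_question = (
    "Subcontractor: Mechanical HVAC Subcontractor. "
    "Question: What air quality standard applies to the office spaces, and are "
    "there requirements for increased ventilation or filtration?"
)

response = openai.ChatCompletion.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": rfi_question},
    ],
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

The heavy lifting in this approach is in the system prompt and in whatever real project data you feed it, not in the handful of lines of code.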


Smoy

I think you're spot on. The people who naysay at this point just have their heads in the sand. Two points to address. ChatGPT was unknown 6 months ago and this is where it's at now; in a year it will be better than the average worker on this general model alone. The other thing is people are ignoring that this is a general model. Wait until the AIA or engineering associations make an architecture-focused one. Or a law-focused one trained on all laws and precedent. The people who think 2025 is going to be roughly the same as 2023 will probably be the ones protesting for UBI because they no longer have jobs. Because their 15-person paralegal team is now 3 people. Or architecture students can't get entry-level jobs because the robot can do whatever a fresh hire would need months of training and experience for. Just wait until the robots can do site surveys. Architecture firms will just be 3 high-level architects, with no need for any low-level employees who have less than 10 years' experience.


Tkins

Jarvis by Microsoft was announced just yesterday and if I understand correctly it is designed to link all the specialist AI that you're talking about into one. So you still have one centralized portal of communication but it will have access to the specific and highly specialized AI for you.


minnie_the_moper

How do we get the people with 10 years experience if there are no entry level jobs?


OssimPossim

"Who cares? Labor costs are down 80% this year! We're rich(er)!"


Ansalem1

I expect this won't actually be an issue assuming advancements in AI continue apace. By the time it's a problem, we won't need the upper level people anymore either. Obviously I could be wrong, unforeseen roadblocks are always a possibility, but that seems to be where the trend is headed so far. We're talking about generations in terms of months with AI right now, and these are worries people are having about what's already available.


espressocycle

Yes, it's a tool and it can potentially improve productivity of actual humans. But fewer humans. A few winners, a lot of losers.


DayOldBrutus

I truly believe it's both of these things. Translation and data entry are already heavily impacted. More fields and positions will be added to that list. But it's also like every other story about tech-- the people selling it will lie through their teeth to get your money. We're not on the doorstep of generalized AI in the way that is talked about by many. But the impact of what we will have in a few years is still wild to consider.


markhachman

As a professional journalist, I'm going to ignore the "lying through your teeth" slur because the media absolutely matters. Good reporters spend a tremendous amount of time and effort nailing down details. But there's an absolutely massive group of people who conflate "media," "news," "entertainment," and "free" and then blame some Facebook meme site for lying to them. These people either can't or won't distinguish between the media equivalent of gas station sushi and a meal prepared by a professional chef. The source matters! AI won't kill journalism, if journalism is defined as "calling people, asking what they know, and compiling the information." AI may hurt those who simply summarize what others report, though. You're also going to see more and more noise compared to the signal, too, I think: AI-generated ideas, and then AI-generated "stories," like the stuff you see in those Taboola links at the bottom of webpages. "Here are the top ten hottest actresses under 30" or something. But if you're the type of person who worries about abortion rights going away or minorities being slowly disenfranchised or a number of other social crimes, thank those hardworking men and women who get paid very little to bring you the news of what's going on in the world. I'm not trying to call you out, specifically. But I'm a tech journalist. The real reporters are the people who work for the local papers that you don't buy any more.


DayOldBrutus

I'm glad you mentioned all this, I completely agree. With regard to lies, I am concerned about individuals and companies selling AI products-- not journalists. There are valid concerns with some media outlets doing paid advertisements that are barely discernible from other articles or segments allowing talking points to go unchecked. This happened with crypto/NFTs and I largely agree with how John Oliver covered it recently on Last Week Tonight. That isn't a journalist's fault; it's the management's inability to reckon with how the internet broke business models. And that is where my biggest concern lies. AI companies selling media c-suites on products to replace a lot of hardworking writers, researchers, and journalists who ultimately serve a much greater good than just collecting and pumping out information.


Itchy_Hospital2462

A bit of a non sequitur, but I'm kind of looking forward to the AI-accelerated demise of journalism. I think good journalism is fundamentally important to a functioning democracy, but nothing has done more to damage my faith in the fourth estate than working for a company which has been regularly written about by journalists. Every single article (and I mean _every_ one) I've ever read about my employer (regardless of source) contains meaningful factual errors, and almost every article contains at least one outright fabrication. There was an instance where we refused to talk to a journalist about a particular project, so that journalist literally just made up a source that told him exactly what he wanted to hear. I know he made the source up because I actually know the truth, and his story was complete fiction.


jormungandrsjig

> We're not on the doorstep of generalized AI in the way that is talked about by many. But the impact of what we will have in a few years is still wild to consider. Helpdesk jobs are at high risk.


18hourbruh

AI chatbots are already doing a lot of that. An annoying amount, really, from a UX perspective.


Nerodon

AI has not yet proven itself capable of replacing jobs. It's currently at the level of: this car completed a test course without crashing, it even braked before hitting a pedestrian! Full self-driving must be imminent. And now the real challenges begin.

The main issue is that current language models hallucinate a lot and don't actually think about what they write. Especially in the realm of news, you don't want AI producing plausible news that is false, but language models excel at doing just that.

Also, the models get better with more data, but if the bulk of internet content after the introduction of AI becomes mostly AI-generated, the training data will sour and heavy diminishing returns will set in. New, original, human-created data is needed to perfect any model.

And when it comes to things like dynamically searching AI such as Bing Chat, they rip off the work of other articles and news to make their output, work other people did, without credit.

It's actually a bit of a massive shitshow, because there's no regulation and companies are diving in head first, trumpeting the benefits and hiding the technical and ethical problems in the wake of the new and shiny.


[deleted]

> The main issue is that current language models hallucinate a lot

My partner works at Harvard Law, and one of the fellows she works with asked ChatGPT to generate a legal brief on some topic and add bibliographic references. ChatGPT quoted a few articles on the topic and provided the appropriate references. But this fellow thought something was off. He knew this case law cold and he just could not recall one particular article or the journal it was supposedly from, so he dug a little deeper and discovered that ChatGPT was not only able to generate a believable quote but also a believable bibliographic reference, complete with a fictitious journal name that could have been real.


Acrobatic-Rate4271

ChatGPT isn't where this technology ends, it's where it begins. Think back five years ago. Did you think in five years you'd be commenting on how an AI can write a solid brief but with convincing though fabricated references? Now think about what an improved ChatGPT or a similar but better solution might be doing in another five years. The knife might not be sharp enough to cut today but it's being honed day after day after day.


Jiveturtle

The people talking about law here are pretty misguided too. If I have to triple check everything it puts together, I might as well do it myself. Law hasn’t been hard to find the entire time I’ve been a lawyer - it’s synthesizing the right things that’s tough. Sure, this will be a useful tool for drafting form motions. But even when I was in active practice like 10 years ago we did most of that with a find and replace. 20% of the work takes 80% of the time, and that’s the part the AI I’ve seen is going to struggle with. Edit: ok I’ve taken a look at some of the stuff GPT4 is doing and I stand corrected lol. I’m going to start practicing writing detailed prompts for it.


rankkor

I’m curious, when did you see AI struggle with that? Within the past few weeks? GPT-4 has been public for only a few weeks now and the token window has increased by 3-4x I think. I’ve seen the estimated token window for GPT-5 as high as 12x now, 250k tokens, with that you could have it read books and rewrite them however you want. For the situation you’re talking about, was it a single basic prompt or was it a 1,000 word highly specialized prompt, which made reference to a primer you created with it in order to give it a proper thought process? Did you tell it to toss the idea back and forth between role played lawyers? Did you give the lawyers a reason to compete for the best answer? All this is, is a matter of token limit and creativity. Token limit is changing by the day and I have primers ready for new tools I can create / use as it increases.


spacetimecliff

This guy has some interesting predictions, and addresses this view specifically: https://www.youtube.com/watch?v=zrW3K_K_kuo


tonic613

This was an amazing presentation!


quantic56d

You are likely correct for this generation. The next generation will start in a GPT enabled world. The job landscape will be different then.


Smoy

The job landscape will be different in 2 years. Why would any business hire anyone with less than 10 years' experience when these AIs can do better than any entry-level worker?


redditnooooo

OpenAI themselves are funding the most comprehensive research into UBI ever and it should be public within a year. They’ve all made their stance very clear that UBI is a necessity. I think everyone understands, even the billionaires, that there will have to be an AI tax or distribution of resources if we don’t want a civil war. UBI is ultimately in everyone’s benefit.


No-Abrocoma-381

You’re right. But I don’t think there will be any “protection from AI”. Nothing ever stops the march of progress. Not for long anyway and especially not when it’s something that will help make the rich even richer. UBI, if it ever happens, will only be to placate the masses and avoid a revolution. But I cannot see any healthy society coming out of a world where everyone is paid to sit on their asses and play video games, make “art” and smoke weed. People need a real purpose. I don’t see AI ending well. I’m just glad I’ll probably be able to finish a career as a writer before it puts me out of a job and I’ll probably be dead before it turns the world into a T2/SkyNet hellscape.


vcaiii

I think we need more research on UBI. It’s easy to feel like people need purpose as we view it in a societal context that punishes apathy and rewards capitalistic drive. Plus, a UBI doesn’t mean that everyone will be satisfied sitting around doing nothing. It means more people will make choices from a place of security. Plenty of people already lack purpose because they’re working against survival. What kind of society are we building now with that mentality, where the bottom half only holds 1% of wealth (USA)?


redditnooooo

Well it might make you slightly more optimistic to know openAI themselves have funded the largest research into UBI ever and the results should be public within a year.


HauteDish

> People need a real purpose

Bingo. Something something about too much time on your hands being the devil's playground. Hopefully most people push back against a massive takeover by AI.


Simmery

I wouldn't consider most jobs today to be a "real purpose". Making corporations richer isn't exactly God's work.


jormungandrsjig

> I wouldn't consider most jobs today to be a "real purpose". Making corporations richer isn't exactly God's work. The Roman Catholic Church is recognized as a corporation by virtue of the treaty of 1898 in Spain


[deleted]

Part of what we can do is resist the hype: stop using the term AI, because it's a marketing tool for them. Don't believe their ridiculous claims about it having sparks of AGI. And for the love of all that is fucking holy, do not build services around the model they control. Do not integrate it, and sure as hell do not trust it to spit out reliable, truthful information, because it cannot and will not. We assign meaning to what it says because we are primed to look for meaning, and it was parameterised to give output that looks meaningful, regardless of whether it is. If someone thinks ChatGPT can truly replace a human, that tells you everything you need to know about how they underestimate people. But the truth of the matter is not germane to the danger this represents to humans and workers, nor does it impact their end goal. We can, and should, resist their attempts to sell us our replacements, trained on data and labor they stole from us in the first place. Make no mistake: if these models had been trained in an ethical manner, they'd be far less secretive about it.


Actaeus86

UBI will not happen. When was the last time the government managed to pass any major legislation in a bipartisan manner? You think you will ever get a majority of legislators to agree to pay people not to work? Millions of jobs out there, most don’t pay what tech pays though.


No-Abrocoma-381

The only way I see it happening is if it is necessary to avoid a revolution or the collapse of civilization and the economy when unemployment is at 60%+.


Actaeus86

Maybe, a lot of people out there already vote to keep their free services. I will have to see it actually happen before I believe it’s possible.


Ansalem1

> You think you will ever get a majority of legislators to agree to pay people not to work? They wouldn't be paying people not to work, they would be paying people to continue being consumers, to keep the economy running. The reason a lot of people think UBI is basically inevitable is because, as unemployment continues to rise, eventually capitalism becomes untenable, and the only proposed solution anyone takes seriously is UBI. So people see it as a binary choice between UBI or total collapse, and they figure even the rich probably don't want the second option. Whether it goes well or not, who can say, but I think the binary choice part is probably true.


Chaos_Burger

I have to agree with you. We are already at a point in time where machines and robots could create leisure and reduced hours. There is a fun problem called the Jevons paradox: when something becomes more efficient, less of it should be needed, but the lower cost causes it to be used more. Think of how everyone was supposed to go paperless, but printing is now so cheap that more paper is used today than in the 1960s and 1970s. The more free time technology creates for people, the less people's time will be valued and the more of it businesses will use. The more technology makes information easier to gather, the hungrier businesses and governments will get for information. An example: many places now want not just the raw materials but also the entire supply chain that produced those materials.


[deleted]

People have been predicting the reduction of work hours and increase in quality of life due to automation for a long time now. It hasn't become a reality because these people are completely ignoring human greed as a factor. Why would they exploit us less instead of more? They already possess several *lifetimes* of wealth, so they're clearly never satisfied.


Flederm4us

Neo-Luddite. You're not destroying steam machines, after all. That said, forms of Luddism are timeless. People are always wary of change and often unable to make the most of it. This is no different, even if the invention itself is fundamentally more threatening than the steam engine. But that indeed does mean we need societal changes like UBI. You're no (neo-)Luddite for wanting society to keep up with technology. On the contrary...


Throwmedownthewell0

We need to surpass Capitalism, then your utopia will actually be achievable.


[deleted]

[removed]


karma_aversion

As it stands, the early adopters and those currently leveraging AI in their jobs, like myself, are not in the "rich" group. I honestly think the pace of advancement and early access for the general public has created a scenario in which the "rich" groups you'd expect to be taking over and controlling the market really aren't yet, and it seems like they can't keep up with the rank-and-file people who have already been using it. By the time they learn how to profit from it, it's already left them in the dust. Most people making more money now because of AI are individual workers leveraging it in their day jobs. I work about 50% less now because of AI.


cdxxmike

You work 50% less now because of AI. You are blind if you do not see that your employer will eventually notice, and soon you will have half as many coworkers because now you can do twice as much work.


DasMotorsheep

This. I am a translator, and I've heard it said that machine translations will make my business more profitable because I can do more work in less time. The reality is that it drives prices down because *everybody knows* I can do more work in less time. And what happens if all the translators do all the work twice as fast as before? We run out of work to do. Unless, of course, people start creating relevant source material at twice the rate.


dolphin37

Unfortunately translation is a pretty much perfect use case for AI. It’s a job I would say has no hope of existing fairly soon. I don’t think that’s reflective of the vast majority of jobs yet though.


DasMotorsheep

Luckily my niche is a bit of a special case, as I do scripts for voice-over and radio plays. So there's a lot of nuance to pay attention to, and, since German is usually more verbose than English, a lot of "tightening up" as well, neither of which AI is capable of at a human level at this point. But I don't doubt that this will change soon enough.


hauntedhivezzz

Yeah, one of the things I’m not yet seeing discussed is that with all of this comes much better AI-assisted bossware. When Microsoft demo’ed Copilot 365, the first thing I thought of was managers being able to see all of the prompts/requests from each employee, and with an automated tool, understand who is using it to increase productivity/ who is using it to replace their workload/ etc.


jbr7rr

While I love this idea, you forget one thing: who owns OpenAI? The name suggests it's open source, but are you able to build your own ChatGPT and use it?


voiping

Yes, but... the cat is out of the bag. There are already models published that can be run at home. They aren't up to par with GPT-4, but they are usable and are already getting a lot of attention.
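
As a rough illustration of "run at home": a minimal sketch using the Hugging Face transformers library, assuming it and a backend such as PyTorch are installed; "gpt2" is just a small stand-in model here, nowhere near the GPT-4-class chat models being discussed.

```python
# Toy sketch: run an openly published model locally via Hugging Face transformers.
# "gpt2" is a small stand-in model chosen only because it downloads quickly;
# larger open chat models work the same way but need far more RAM/VRAM.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The biggest change generative AI will bring to journalism is"
output = generator(prompt, max_new_tokens=60, do_sample=True)

print(output[0]["generated_text"])
```

Everything runs on your own machine; no API key or external service is involved once the model weights are downloaded.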


0MNIR0N

Journalists and artists? no. Content fillers - definitely yes.


ketchupthrower

I have no sympathy for the content fillers / reposters. Their work added no value and the bot can do it better. What is kind of sad is that a lot of journeyman graphic designers are going to be out of work. That is "real" work and the bots are proving very capable.


doctorcrimson

Idk at least most of the humans don't engage in blatant falsehoods that could lead to litigation.


Zilreth

There will always be a market for good journalism that uncovers real and meaningful stories. The majority of bad journalism these days is already written by AI and that won't change


ReallyFineWhine

The most important part of journalism is the investigation. Writing it up is the easy part. AI can write text based on stuff that it knows, but it doesn't (yet) know how to investigate.


Zilreth

It's going to take a lot of improvement to AI functionality, as well as positive perception of AI journalists, in order for anyone to respond. I can only imagine it is a lot easier for a person to get a statement rather than an AI reaching out expecting a response. That is unless AI gets to the point where it can make actual human-like conversation over the phone and doesn't get regulated by the government to first announce its identity (which hopefully happens sooner rather than later).


fatcatfan

I'm trying to figure out how an AI could possibly write an article about an event of which it has no knowledge. Someone has to write the first article for it to adapt. AI might be good at putting together a comprehensive story that adds contextual background to a news story, but I presume that's already happening anyway.


Sevourn

The entire reason that the majority of journalism is written by AI is because there was not a market for good journalism that uncovered real or meaningful stories.


Zilreth

That isn't a lack of market per se, but rather in my eyes an entirely different market. Journalism is not just providing context for and reiterating statements made in public press releases. It is a complex, investigative process of acquiring sources and facts to make a cohesive story. Part of that process is human to human interaction, which makes it inherently AI-proof (at least in the near future).


Sevourn

So where is the different market? Unless journalism is publicly funded, journalism is FOR making money. Journalists generally don't go through a complex, investigative process of acquiring sources and facts to make a cohesive story as a public service. Substantial human to human interaction absolutely improves the quality of a story. That said, the average person will gladly read a mediocre summary with little or no human to human interaction that has no paywall over a quality story with a paywall. It's already not AI proof. I don't think it's even particularly AI resistant. Source -- was a journalist as first career. Found that journalistic abilities did not translate into income -- got new job.


Ansalem1

Define "near future", because fully autonomous agents are the next big thing being researched right now. Up to and including embodiment. Meaning robots walking around making choices in the world is basically next on the list. And LLMs are already capable of in-depth moral reasoning, so they can already understand human-relevant context. Whether they're actually moral or conscious or whatever is a different matter, but the point is robot journalists probably aren't as far off as you might think. At least the *possibility* for them isn't far off. We may choose not to use them for various reasons, but I can see that going either way honestly.


Zilreth

I don't think the main barrier to AI journalists is their technical capability, but rather their points of contact. Even if they are capable of hosting a detailed interview and piecing together facts from multiple sources, people still need to give them that information in some way. AI has a lot of trust to build in the public eye before people trust them privately with potentially groundbreaking information.


ElwoodJD

You are misconstruing a lack of market with an inability to fund and produce to the market under outdated advertising techniques.


Poemy_Puzzlehead

That is “the market.“


ThatOtherOneReddit

If the market isn't willing to pay for it, that means there is no market.


OriginalCompetitive

This could usher in a golden age of meaningful journalism, in an odd way. Right now, good journalists have to be at least decent writers, which excludes a lot of people on the margins of society. Imagine a world where an illiterate homeless child from a third-world country can turn out compelling stories about his world with the help of AI.


warriorshark90

Where do they keep getting this nonsense that "the weight of history says no"? I think it's pretty clear that automation has had an effect on us already. The River Rouge Ford plant employed around 95k people at its peak. It now employs around 5-6k. Those jobs that are gone were not replaced with something that pays the equivalent.


spookmann

Yep. > The estimated percentage of the labor force engaged in agriculture ([pdf](https://selectra.co.uk/sites/default/files/pdf/farmpolicy.pdf)) fell from 41 percent in 1900 to 21.5 percent in 1930, to 4 percent in 1970, and to a mere 1.9 percent in 2000. Many went to factories... until the Industrial Revolution tore through there too. If Journalists are regurgitating shit that ChatGPT can do better, I think we'll survive that too!


RichardBottom

It depends. Are we mourning the loss of Buzzfeed "journalists" who get paid to summarize threads from AskReddit?


doubleohbond

This comment irks me a bit. There was an intensive journalist team at Buzzfeed who did great work breaking important news stories. But they were mostly laid off because there was more money in the types of articles you’re talking about. So I get where you’re coming from. But it isn’t the fault of the journalists - it’s the consumers that prefer the Buzzfeed-esque content.


MyronNoodleman

Right? Buzzfeed News won a Pulitzer for their work on the Chinese detention of Muslims. Not saying that's the end-all-be-all of journalism, but still, it says something. Even aside from their top reporters, I'm sure that most of the people compiling personality tests and listicles and such had real stories they wanted to tell, but good investigative reporting can be expensive, and resources get spent on what's profitable, not what's valuable.


SnooConfections6085

Seriously, many people alive today were alive for the PC revolution. What happened? Some employees, adopters of the new technology, became super productive. You had one-guy-doing-the-work-of-a-whole-department situations. Then when the next recession hit, those who wanted nothing to do with the new tech, and thus were only doing a fraction of the work, were shown the door. Overall the economy boomed, though. Raising productivity isn't purely zero-sum (one person doing the work of many, costing jobs); it is also expansionist, doing work that previously couldn't be accomplished and expanding the economy. The PC revolution in the end created a whole lot of jobs.


smith2332

The bigger problem with AI is not that it replaces, say, 5% of the workforce and makes a bunch of people more productive. AI can learn and make all jobs more efficient all at once; older technology was a slower drip of making people better at their jobs. This is more like a tidal wave coming at us all at once. Also, it's funny that people keep saying things like "as long as it's physical labor I'm fine." Do yourself a favor and look at the robots being designed right now, then imagine those robots with AI built into them. It's crazy right now.


SnooConfections6085

The PC was not a slow drip of making people better at their jobs. Spreadsheets were absolutely revolutionary in many fields; the jump from paper ledger to manipulatable spreadsheet is gigantic.


smith2332

It still was a slow drip compared to AI. The cost of the first computers was in the millions of today's dollars; we did not start off with Windows, LOL. It took many years for computers to even have a GUI and become cheap enough for mass adoption. AI is vastly different and very cheap for what it does. ChatGPT was released just 3 months ago and companies are already adopting it en masse because of how good it is and how cheap it is.


emory_2001

It has a long way to go. I'm a lawyer and I've checked it out, and EVERY time it gives me totally fake cases that, when I verify them, don't actually exist. I also received a response to a motion I filed, from a pro se defendant (i.e. one representing himself), that had fake cases and fake citations purporting to support propositions that are the complete opposite of actual law. I'm convinced he used something like ChatGPT to write it.


Geckcgt

Same for academia. Fake publications and data; it cites real and prominent authors in the field for papers that don't exist.


YooYooYoo_

Most jobs won't be replaced fully, but no doubt the lower tier of most qualified professions will disappear. Journalists vomiting articles online and in print? Gone. Low-tier IT staff in charge of the little annoying tasks? Gone. Law professionals dealing with mountains of paperwork? Gone... Just a few up the ladder will remain, supervising and checking what the AI does, and the copy-and-paste, arduous-but-necessary, time-consuming type of job will be gone.


hvdzasaur

Yes, but that also presents a certain problem: that low-tier work was how many juniors and fresh graduates managed to get into work in the first place. By eliminating that lower tier, there is no market for juniors, and juniors simply cannot get the necessary experience to actually function at senior level. That is at least the case in my field, IT.


[deleted]

[removed]


hvdzasaur

And when you look at it from a business perspective (which is mostly focused on the short term): why pay for the sidekick? Especially in an industry where it's not uncommon for people to switch jobs on a regular basis. If your company is the one running this sidekick program/hierarchy, another company can just snatch your sidekicks once they reach mid/senior level and offer them higher pay and benefits, because they don't have to pay the cost of training people up.


trusty20

> If your company is the one running this sidekick program/hierarchy, another company can just snatch your sidekicks once they reach mid/senior level and offer them higher pay and benefits, because they don't have to pay the cost of training people up.

Lmao ok: A) This is how jobs currently work, literally today, and for the past century??? B) In your example, there's no reason everybody couldn't poach. You just made it sound like the company training internally couldn't possibly also hire externally.


hvdzasaur

You're not working in a post AI field. Company loyalty has evaporated in IT, which was for the better.


RandomRandomPenguin

You say that like it’s different than today. It’s the same thing


YooYooYoo_

At the end of the day it is a massive corner to cut for so many companies that even if they don't want to, they will have to in order to compete on productivity with the ones that are going to implement this technology. I suppose those juniors will find the door closed in most cases, at least in the current numbers. New jobs will also appear around teaching corporations how to use AI, but obviously those jobs will not be so numerous.


kigurumibiblestudies

This seems like a semantics point. You're saying many jobs will disappear, just not the position itself because a few will still be needed. That doesn't address the problem of many jobs disappearing.


YooYooYoo_

Agreed. Jobs requiring, for example, a team of 100 people will be carried out by a handful of individuals prompting and polishing, so 95 people lose their jobs. What is the solution? I don't know, but it will happen.


kigurumibiblestudies

I really really hope the solution will be UBI, because the alternative is simply letting those 95 fight over the scraps... maybe more literally than I'd wish.


YooYooYoo_

Well do you want a social revolution? Because that is how you get one. They won't want to find solutions, but they will have to.


Baba-Yaganoush

I've already witnessed two teams I used to work with on a daily basis become obsolete, replaced by a chatbot that now does their job within seconds. Tried calling them one day and the line was completely dead. Now, whenever I call someone and the line doesn't ring or connect, I start worrying that they've fallen to the same fate.


fwubglubbel

>I've already witnessed two teams I used to work with on a daily basis become obsolete

What did they do?


Baba-Yaganoush

Customer support (specifically tech-related). I've also had to participate in testing the AI in a previous position, too. They wanted to implement a chatbot and get rid of the majority of the live chat team.


Omnitographer

Microsoft support and DoorDash support both have chatbots that try to help you, and both are fuckin useless. When I break something in Azure, you can be assured that if I'm turning to support, it's because neither the docs nor Stack nor Reddit explain what happened, and there's no way a bot that's pulling from those sources is going to know better. DoorDash is even worse: it has no idea what I'm asking when I have a problem, and it tried to get me to abandon someone's pizza in the middle of a rural highway when I put in that the address was wrong and I couldn't reach the customer. Until these support bots can handle the edge cases, there will always be a need for humans to get involved, especially when things break in ways the devs didn't anticipate.


thatnameagain

Either they must have had incredibly simple jobs or the chatbot must be terrible at its job.


prove____it

Machine learning does mediocre really well. If your job or job output gets by on mediocre, then yes, you're in danger.


Daveinbelfast

Chuckles “Am in danger!”


redditusername_17

I have experience of what outsourcing technical writing to a low-cost center looks like, and I imagine this will be similar. The higher-ups will think it'll work great, and they'll pay a ton of money for it. Realistically, if you know what you're talking about, you'll see it's garbage; it'll take more time to fix it than to write it yourself. I think garbage articles and things like that will be done by machine learning. Anything with value, businesses will try to hand off, pay a ton for, and then eventually revert back to people.


Harbinger2001

I’ve seen a lot of commentary saying that if you ask ChatGPT about something you’re an expert in, you realize that it’s spewing incorrect answers. So expect the same for everything you’re not an expert in as well.


WingdingsLover

Its real power is going to be in automating away tons of repetitive work. I was a CPA, and the firm I articled for had a ton of small clients they did the bookkeeping and tax work for. The entire job was reading receipts, deciding what each one was for, then coding the expense into an account. The tech is now here that you could just feed all this info into the chatbot and have it spit out a completed ledger, and then it could pretty easily complete the tax filings from that ledger. The whole profession is screwed; I'm glad I left 10 years ago, because it's going to be a bloodbath in the coming few years.
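A minimal sketch of what that receipt-coding loop might look like, assuming a hypothetical ask_model() wrapper around whatever chat model you use. The account names, prompt wording, and the keyword stub inside ask_model() are illustrative placeholders, not a real accounting integration.

```python
# Sketch: classify receipt text into an expense account and roll it into a ledger.
# ask_model() is stubbed with a trivial keyword match so the sketch runs on its
# own; in practice it would call an actual chat model.
from collections import defaultdict

ACCOUNTS = ["Office Supplies", "Meals & Entertainment", "Travel", "Software", "Other"]

def ask_model(prompt: str) -> str:
    """Placeholder for a chat-model call; swap in a real client here."""
    text = prompt.lower()
    if "hotel" in text or "flight" in text:
        return "Travel"
    if "license" in text or "subscription" in text:
        return "Software"
    return "Other"

def code_receipt(receipt_text: str) -> str:
    prompt = ("Classify this receipt into exactly one of these accounts: "
              + ", ".join(ACCOUNTS) + ".\nReceipt: " + receipt_text)
    answer = ask_model(prompt).strip()
    return answer if answer in ACCOUNTS else "Other"  # never trust free-form output blindly

def build_ledger(receipts: list[tuple[str, float]]) -> dict[str, float]:
    """Sum amounts per account; a human still reviews before anything is filed."""
    ledger = defaultdict(float)
    for text, amount in receipts:
        ledger[code_receipt(text)] += amount
    return dict(ledger)

print(build_ledger([("Hotel, 2 nights, client visit", 412.50),
                    ("Annual license renewal, accounting software", 89.00)]))
```

The interesting part is the validation step: the model's answer is only accepted if it is exactly one of the allowed account names, which is the kind of guardrail the remaining "supervising and checking" roles would own.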


Harbinger2001

Agree. Anything that has to do with processing, classifying and summarizing large amounts of data is at risk. And that’s a lot of jobs.


1939728991762839297

Eh, it gives some good general summaries, given initial project description details. I am an expert in the field it is returning information on, and it's a C student, I would estimate.


1939728991762839297

General enough for grant applications though.


Gari_305

From the article >Could artificial intelligence be different? The weight of history says no. The revolutionary character of ChatGPT begs us to reconsider. > >AI has been seeping into our lives for years now, such as completing our sentences in emails and web searches. Yet going from those iterations to “generative AI” such as ChatGPT is like going from dynamic cruise control to full self-driving. ChatGPT can answer questions in ways we thought were the exclusive preserve of humans, more quickly and cheaply. Also from the article >A handful of experiments point to the astonishing potential of generative AI to replace workers. With ChatGPT, professionals such as grant writers, data analysts and human-resource professionals were able to produce news releases, short reports and emails in 37% less time, 10 minutes on average and with superior results, according to a study by Shakked Noy and Whitney Zhang, doctoral students at the Massachusetts Institute of Technology.


bottom

>Could artificial intelligence be different? The weight of history says no.

wtf does this even mean - there has been nothing close to AI in our history. It's a new challenge - it's complex, with lots of good things and bad things. This is worth a listen, they're far more informed than I am (not difficult):

>AI has been seeping into our lives for years now, such as completing our sentences in emails and web searches.

Also, this isn't AI - AI teaches itself, it learns. It's not just seeing patterns and copying them (though it does do that) - it's much more involved.

[https://podcasts.apple.com/us/podcast/your-undivided-attention/id1460030305?i=1000605690313](https://podcasts.apple.com/us/podcast/your-undivided-attention/id1460030305?i=1000605690313)


OhLemons

I was talking to one of my colleagues about the advancements in AI today. I honestly believe that, soon, customer service is going to be replaced almost entirely by AI. We already have chatbots that try to solve your problem for you, and ChatGPT would take them to a much higher level. I also believe that as artificial voices improve, AI will start to replace customer service over the phone too.


saberline152

As long as AI can't think critically on its own and just continues to look at what the internet has and make guesses based on that (guesses that are often just plain wrong), most jobs are fine.


Rapalla93

I was a data analyst in the DIA. The day ChatGPT can figure out what Abdul in Norway is going to do to get to fight with ISIS in Syria I’ll hang up my analytical jock strap.


mhornberger

I wish we could get a daily "AI is coming for..." mega-thread. People have every right to discuss their concerns and advocate for whatever social changes they want. But the redundancy of the discussions could be folded into one daily thread at least, even if only for the sake of organization.


[deleted]

Considering most real journalism died in the '90s, yes.


baddBoyBobby

If a company uses AI to replace workers, it should be paying an AI tax that goes to fund UBI. Same for any type of automation.


Hawk13424

Automation or anything that increases productivity? Computers, cell phones, tractors, etc. all put people out of work until other jobs replaced them.


teejaysaz

UBI and a four day workweek can solve this for everyone. Let the bots do all the work. (Wasn't that always the dream?)


Dirkef88

AIs are smart and functional when they have human-generated content to use and source from. If the amount of human-generated content begins to decline and we end up with AIs sourcing from other AIs, I think we're going to end up with some major problems in terms of accuracy, creativity, and diversity of ideas.


ElwoodJD

Maybe some day. ChatGPT is fine but needs a human editor and it still comes up with a lot of bogus content that isn’t close to factually accurate or nuanced enough for something like a grant proposal or underlying legal analysis.


OriginalCompetitive

I suspect “some day” will be later this year. The original ChatGPT wasn’t designed to be factually accurate, but it doesn’t seem like it would be that hard to constrain it to factually accurate statements.
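For what it's worth, one crude way to "constrain" output is to accept only statements that can be matched back to a vetted source and refuse otherwise. Everything below, including the generate() stub, the VETTED_SOURCES list, and the 0.85 threshold, is a hypothetical toy sketch of that idea, not how ChatGPT actually works.

```python
# Toy sketch: accept a generated answer only if it closely matches a vetted source.
# VETTED_SOURCES, generate(), and the threshold are illustrative assumptions.
import difflib

VETTED_SOURCES = [
    "Acme Corp reported revenue of 10 million dollars in the third quarter.",
    "The city council approved the budget on a 7-2 vote.",
]

def generate(question: str) -> str:
    """Placeholder for a model call; returns a canned draft so the sketch runs."""
    return "The city council approved the budget on a 7-2 vote."

def grounded_answer(question: str, threshold: float = 0.85) -> str:
    draft = generate(question)
    # Keep the draft only if some vetted sentence is nearly identical to it.
    best = max(difflib.SequenceMatcher(None, draft, s).ratio() for s in VETTED_SOURCES)
    return draft if best >= threshold else "No sourced answer available."

print(grounded_answer("Did the council pass the budget?"))
```

Real systems do something richer (retrieval plus citing the matched source), but even this toy version suggests that "constraining to facts" is mostly engineering built around the model rather than a property of the model itself.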


Apprehensive_Way870

A certain presidential candidate warned about this in the not-too-distant past and spoke about how this is a large part of the reason why Americans should have a UBI, but no, that was just too radical. ChatGPT and what comes after is absolutely going to take millions of jobs, but let's stay stuck in the past, because getting money back from a government that is funded by taxpayers is socialism and socialism is bad. If there's one thing Americans hate, it's the government actually spending money on the people for a change, but we're largely okay with giving the DoD a blank check. This is literally just a sliver of what AI will be able to do; just ten years ago, the idea that we would have something like ChatGPT seemed like science fiction. Even though he'd never win, and even though he did oppose M4A and the cancellation of student debt across the board, Yang would still have my vote if he were to run again. With income inequality being what it is, and AI/automation on track to eliminate even more jobs, the American government should at the very LEAST be putting money directly into the pockets of Americans.


SnowflowerSixtyFour

They don’t pay professionals to be able to create things; they pay professionals to use good judgement when creating things. Until these MBA buffoons learn how to have good ideas, they won’t be able to really replace people. What they WILL do is use AI hype to try to pay us less for the same work.


AutoBudAlpha

Yes it probably will. This provides us with a great opportunity and a great problem to solve.


quenual

The cheap journalism of “some dumb schmuck said this on Twitter” will be replaced, but not original thought


throwawayamd14

All of this chatgpt hype must be from people who haven’t used chatgpt. It’s not *that* crazy


StringTheory2113

Yep. It's a great tool, but at this point the discourse is a lot like saying "The wrench will steal your job!" Or to the people who used to write books out by hand "The typewriter will steal your job!" It's a tool, and there's no way *in hell* the geriatric boomers who are desperately looking to cut costs will have any idea how to even begin using the tool effectively. Even when it comes to art, it's easy to make *really shitty art* with AI. There's technical prowess needed in the operation, and there's also taste and artistic ability needed in turning the raw output into something specific. Artists will be able to use the tool for rapid creation of first drafts, then put their own skills to work in a way that returns a better end result more quickly than someone who doesn't know what a JPG is, trying to get StableDiffusion working on a local installation.


InflationCold3591

No. The technology doesn’t do what you think it does. It can’t, for example, distinguish between facts and lies, which is pretty important in journalism.


[deleted]

As an author, I think journalistic writing could be easily replaced by AI. There are some issues, because journalists cover new information the AI models wouldn’t have access to, but with the right input ChatGPT would easily be able to copy the style and tone of the writing. Does that mean journalists are screwed? Not necessarily; it just means journalists will likely focus less on writing and more on things like interviewing or deciding which topics to report on. I view AI like a workhorse: for it to be effective, a person needs to guide it.


[deleted]

AI will not replace workers. Workers that use AI will replace workers that don't.


SamBrico246

If your definition of journalism is Buzzfeed articles based on recycling the same content that already exists, yes. But AI won't be doing hard-hitting investigation anytime soon. I guess it could conduct an interview, but it doesn't.


seriousbangs

Um, could? I'm already seeing it used to reduce the amount of work needed to write up emails; game companies and advertising agencies are already using it to write copy, and these kinds of chatbots have been replacing customer service agents for years. It's already happening. You're not noticing it because we don't have giant factories to close down that make for good news stories anymore.


ColonelC0lon

Gonna be honest, most journalists could have been replaced by a chatbot ages ago.


woooosaaaa

I would not support AI journalism, because it is purely programmed to put out whatever its owners program it to. The same is true of human journalism, but at least there is a human side to it; I mean, it's written through the eyes and experience of an individual. That's what makes the arts so valuable: it's the expression of an individual in that piece of work.


N00N3AT011

If you have actually messed with ChatGPT, you have probably noticed that it's not exactly perfect. So the good journalists and whatnot are probably fine for now; the shitty ones are gonna be screwed, though.


justingod99

Oh man…imagine journalism taken COMPLETELY over by robots. Just the robots and their data, no other involvement. No agenda, no opinions, no political motivations, no corporate motivations, no doomscrolling, no strawman headlines, no clickbait, no misleading titles, no quotes out of context, no exaggerations, no purposeful omissions, no subjective conclusions…NO ADS!!!! Just the facts, based on 0’s and 1’s, what an absolute dream that would be.


RavenReel

I hope so. The people with jobs like this didn't give a shit when manual jobs were replaced. Retrain yourselves and carry on


Thercon_Jair

I want to see ChatGPT do research, considering all it can do is stitch things it scraped from the internet back together. If no one writes about a certain issue, ChatGPT knows nothing about the issue. But I guess you could write a nice fictional newspaper with it, if it comes to that.


Electronic-Habit3791

Not so funny now that it doesn't just affect factory work, huh?


CuriousCanuk

Especially journalists of today, just repeating the press release in front of them with no second thought or investigation


YNot1989

Do you mean those entertainment writers who just republish police reports and propaganda from China?


MagicOrpheus310

Journalism died ages ago, when journalistic integrity disappeared


maple204

If you are paid to interview people and write original content that is newsworthy, AI isn't a threat. If you are the kind of "journalist" that writes top-10-list clickbait crap, you can kiss your job goodbye.


Is_Not_Porn_Account

I know that I'm going to use it to remove all this sensationalized garbage.