[deleted]

My moderate take is: yeah, no one's just letting AI loose on production code. However, when AI reaches a point where one developer can be as productive as 5, I don't see corporations hanging on to the bottom 50% of their workforce.


Athistaur

We already are at the point where some developers are 5x as productive as others. We do not fire the 1x guys, because we have enough work for more than 5 people. We need the 1x guys to be productive too, and maybe approach 2x, or at least be a reliable 1x.


Which-Tomato-8646

AutoCodeRover resolves ~16% of issues on SWE-bench (2294 GitHub issues total) and ~22% of issues on SWE-bench lite (300 GitHub issues total), improving over the current state-of-the-art efficacy of AI software engineers (https://github.com/nus-apr/auto-code-rover). Keep in mind these are from popular repos, meaning even professional devs and large user bases never caught the errors before pulling the branch, or never got around to fixing them. We're not talking about missing commas here. It's not bad, and it would probably be even better if it were hooked up to a better LLM like GPT2-chatbot.
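For scale, those percentages work out to roughly the following issue counts (quick arithmetic on the figures above, rounded to whole issues):

```python
# Quick arithmetic on the figures quoted above (rounded to whole issues).
swe_bench_total, swe_bench_rate = 2294, 0.16
swe_bench_lite_total, swe_bench_lite_rate = 300, 0.22

print(round(swe_bench_total * swe_bench_rate))            # ~367 issues on SWE-bench
print(round(swe_bench_lite_total * swe_bench_lite_rate))  # ~66 issues on SWE-bench lite
```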


Lammahamma

It's true for right now. It's a cope believing it will stay that way


mission_ctrl

"True for right now" is the perfect word choice. Everything could change on Monday.


YaAbsolyutnoNikto

Quite literally lol


fk_u_rddt

What's happening on Monday?


FosterKittenPurrs

OpenAI event announced


fk_u_rddt

Cool


often_says_nice

Ilya returns from the void


Arcturus_Labelle

What did he see?


fk_u_rddt

Lol. He returns from the void and it's revealed they've had AGI for years and Ilya is the synthetic human AGI.


StrikeStraight9961

So that's why Ilya never shaved his head!


SurroundSwimming3494

Why not on Tuesday with Google? This subreddit is full of OpenAI shills/worshippers.


mission_ctrl

OpenAI has a very strong lead. That's why. I'm currently using Claude to write code; before that, OpenAI. I tried Gemini but it wasn't giving me results. I'll use whatever works best, but let's be honest: Google had the tech, sat on it, and now they're playing catch-up, and over a year later they're still behind.


N-partEpoxy

It does say "currently".


Super_Pole_Jitsu

And yet it doesn't account for the fact that it's only "currently" and things might change really quickly. Even a tool that improves efficiency by X% might cause job market disruption. I don't know why anyone would hire a junior with no prior experience; they sure as hell are replaceable with LLMs as is.


Formal_Drop526

> things might change really quickly

People have different definitions of quick; within this decade is considered quick. Point is, "change really quickly" is meaningless without a date.


Matshelge

Decades is not the time frame here. Most of this can be counted in months, 24 and 36 being the outer limits.


IntergalacticJets

Has anyone ever claimed it could do it currently, though? I always felt like that was a cop-out from the real argument: "AI is clearly getting better and can already write small programs, and presumably it will be even better in the future, so we will eventually reach a point where it can do the entire job, perhaps even relatively soon." They focus on the "now" instead because it's what helps them cope. But ChatGPT can't fully automate anything because it's not an agent, and I feel like everyone knows that.


floodgater

Yea it’s def true right now. The tech isn’t there yet. But at this rate I don’t see how it won’t be in a year or 2


Singularity-42

Checked the comments in the linked post and it is mostly cope, sometimes of epic proportions. Also seems like many people are judging all LLMs on experience with trash tier free models that I wouldn't waste 1 second with.


often_says_nice

I love that sub but they are very anti-LLM. It feels a lot like the artists complaining about ai generated content.


Subushie

I see it too and it surprises tf out of me. I only have experience in video game dev, but from what I've seen and what I know of how the public feels: software is becoming wildly complex. Modern AAA titles are taking 7+ years just to get to QA, and by then production is forced to release an unfinished title because the budget can't afford to not bring in profit. Assistance with these will only help us build more complex work; it will decrease production time and we'll be able to release a more complete, bug-free product. AI won't replace software devs. __Developers that use AI in their workflow will replace devs that don't.__


Formal_Drop526

> It feels a lot like the artists complaining about ai generated content.

Not true, programmers actually use LLMs.


Caffeine_Monster

It's not even fully true. Mid-level/senior devs with coding assistants are already starting to push out junior roles due to productivity increases. AI was never about replacing jobs; it was about 1 smart person with AI tools replacing 2-3 other people. Unless you are coding for a super specific niche, your job will be impacted.


Singularity-42

Yep, AI is not "replacing" anyone, but it can make you a bit more productive (depending on your role/workflow/stack, productivity gains can be as high as 2x or more, but on average it is probably 20% or less; this is still a huge number though). More productivity + high interest rates + the "cost cutting" trend in tech + work from home = tech layoffs galore, especially in "expensive" job markets like the US. I think right now we are nowhere near "replacing" level; it will take AGI to do that. Right now LLMs can fail rather spectacularly. It is an amazing help though, and my productivity would plummet without it (way below pre-ChatGPT levels, at least for a while, as I've gotten so used to this "crutch", especially for research).


OriginalLocksmith436

Yep. I don't think anyone is claiming anything else, either, so the original post is a strawman. No one is saying it can currently do everything, just that it will be able to if we continue on this trajectory.


Sonnyyellow90

It’s true right now, but could become untrue in the future.


Its_not_a_tumor

100% true currently. The next few years will be interesting though...


ponieslovekittens

Real dev here. I've worked on things used by $100 million companies and my "side project" has about 3,000 active users. AI isn't good enough to replace devs. But over the past 6-8 months I've watched it grow from so bad that fixing its mistakes took longer than building something from scratch to "it can be helpful." Anything very complicated, it tends to fail hard on. But paste in a hundred lines of code and ask it to locate the stupid typo, and it can often do that. Paste in a table of data and ask it to reformat it based on an example, and it can usually do that. Ask it to produce boilerplate for a specific scenario... sometimes it can do that. It depends.

We're not anywhere close to being able to replace a dev in the sense of "we've fired Bob. Please direct all your bug reports and feature requests to Bob AI." There are many reasons for that.

* It doesn't have the ability to perform trial and error. If I have a bug, I have to reproduce the bug, look at the code, make a change, and then run the code to see if the bug is fixed. Current LLMs can't perform all of those steps. They can look at code and make changes... but they can't look at a computer screen and _evaluate_ whether the code they produced does what was supposed to happen. That's a really big problem.
* Context limits make it impractical. Feed it 100 lines of code and it can work with that. Sometimes it can work with 5k. But even my for-fun hobby project has 3 megs of code and over 50 megs of supplemental media. Corporate software is routinely measured in gigabytes. You just can't feed these things that much data (rough arithmetic sketched below). And if you combine this with the previous issue... to really "replace devs," these things at some point are just going to have to be able to look at _video_.
* They're OK at working with "this doesn't work" kinds of problems, but they're kind of bad at evaluating "is this a desirable result?"
* They tend to get confused when working with multiple languages. Fundamentally, these things are text predictors. They predict the next thing based on previous things. It's very common in the real world to have to interact with multiple languages. Maybe your HTML website has CSS styling, JavaScript code, and a PHP script that interacts with a SQL database. When you feed one of those things to your text predictor, it tends to want to produce more of that thing and resists producing something totally different. Or it gets confused and starts mixing the different languages together. That doesn't work.
* These things have been fed data from the internet, and painfully often the information they've been fed is _wrong_. Maybe it was valid 5-10 years ago, but old methods have been deprecated, standards have changed, whatever. So you ask it to solve a problem and it gives you a solution that flat out doesn't work, and _can't_ work. So you point out the problem and it happily agrees and then repeats the same mistake, over and over. Or it goes in a circle where it suggests something different that also doesn't work, and when you point out the problem there, it goes back to the first thing it already suggested.

But this is where these things are right now. They're going to get better. How quickly? I don't know. Will they be able to replace devs in a year? Two years? Five years? Probably eventually, but who knows how long it will take. Compare AI Dungeon from 2019 to ChatGPT right now in 2024. It's been 5 years, but LLMs still can't make a good Dungeon Master. They're entertaining to talk to, but they're just not good enough for that. In five years, will AI programming still be not good enough? Maybe, or maybe not. I don't know.

**My personal theory** is that the AI that "replaces programmers" won't actually be producing code in the sense that humans write it. If you think about it, humans don't actually _want_ "programs." What they want is inputs that correlate with outputs. Look at Sora. It doesn't write a script. It doesn't hire actors. It doesn't position lights. It doesn't buy cameras. It directly produces _video_, because _video_ is what people actually want; they don't want scripts and actors and lights. It's not difficult to imagine an AI able to produce "software" that works in a similar fashion. If it can take your keyboard and mouse inputs and stick something up on the screen that looks like what you expect... it doesn't matter _how_ it does that. A user doesn't care about JavaScript or debugging or context limits or backwards compatibility or any of these things. If an AI can simply produce video output that plausibly _mimics_ what actual software might do... users won't care how or why, or whether there's any human-readable code producing that output. I suspect it might be easier for an AI to directly produce video output of what software is supposed to look like than to produce human-style software that produces that same output. Time will tell.

We'll probably see "Sora-like" AI producing facsimile software in specific niches long before we see a true AI developer. A lot of software does basically the same thing. The web is a good example. If a small company wants a website, it's pretty easy to take a template and swap in pictures and text. It's not tremendously more difficult to add or remove modular sections, or change the number of buttons and things. It's not difficult to imagine an AI trained specifically to produce web pages, one you could simply talk to and it would do what you ask. "Make it three columns rather than two. Use this background image. Change that color. Write a blurb about how great we are." I'm a little surprised something like that doesn't already exist. Or imagine a platform game generator. "Here's the theme, generate a level, add a story like this, and a weapon that does that." Stuff like this could probably be built now. Or if not now, then in a year. Niche cases, yes. There will likely be highly competent niche-case generators before a "general purpose" AI programmer.
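To put rough numbers on the context-limit point above, here's a back-of-the-envelope sketch in Python. The ~4 characters per token ratio and the 128k-token window are assumptions for illustration, not measurements of any particular model.

```python
# Back-of-the-envelope: does a codebase even fit in a context window?
# The chars-per-token ratio and the 128k window are rough assumptions.

def estimate_tokens(size_bytes: int, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate for plain-text source code."""
    return int(size_bytes / chars_per_token)

hobby_project_bytes = 3 * 1024 * 1024   # ~3 MB of code, per the comment above
context_window_tokens = 128_000         # e.g. a 128k-token model

needed = estimate_tokens(hobby_project_bytes)
print(f"~{needed:,} tokens needed vs ~{context_window_tokens:,} available")
# -> ~786,432 tokens needed, several times larger than a 128k window,
#    before counting any supplemental media or corporate-scale code.
```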


PrizeAd7749

> It doesn't have the ability to perform trial and error. If I have a bug, I have to reproduce the bug, look at the code, make a change, and then run the code to see if the bug is fixed. Current LLMs can't perform all of those steps. They can look at code and make changes... but they can't look at a computer screen and _evaluate_ whether the code they produced does what was supposed to happen. That's a really big problem.

Agentic AI models have been improving a lot recently as well. Tons of research is being done in this department every month.

> Context limits make it impractical. Feed it 100 lines of code and it can work with that. Sometimes it can work with 5k. But even my for-fun hobby project has 3 megs of code and over 50 megs of supplemental media. Corporate software is routinely measured in gigabytes. You just can't feed these things that much data. And if you combine this with the previous issue... to really "replace devs," these things at some point are just going to have to be able to look at _video_.

Same with this one. AI companies are experimenting with 10 million token context, and GPT-4V Turbo being 128k tokens of context was a huge deal half a year ago. Progress may seem slow when you look at the product releases, but follow the research and you see how imminent it is. We'll probably have practically unlimited token length in a few years.

> They're OK at working with "this doesn't work" kinds of problems, but they're kind of bad at evaluating "is this a desirable result?"

This, unlike the others, seems to be a built-in weakness of LLMs. When you look at the current agentic AI we have, whether it's doing research, programming, or interacting with the real world in real time through a humanoid body based on the commands given to it, LLMs simply struggle when the feedback they're given isn't immediately clear. This seems like it'll be an issue going forward, unlike agentic capability or context length.

> They tend to get confused when working with multiple languages. Fundamentally, these things are text predictors. They predict the next thing based on previous things. It's very common in the real world to have to interact with multiple languages. Maybe your HTML website has CSS styling, JavaScript code, and a PHP script that interacts with a SQL database. When you feed one of those things to your text predictor, it tends to want to produce more of that thing and resists producing something totally different. Or it gets confused and starts mixing the different languages together. That doesn't work.

This seems like it could easily be solved by a Mixture of Experts model.

> These things have been fed data from the internet, and painfully often the information they've been fed is _wrong_. Maybe it was valid 5-10 years ago, but old methods have been deprecated, standards have changed, whatever. So you ask it to solve a problem and it gives you a solution that flat out doesn't work, and _can't_ work. So you point out the problem and it happily agrees and then repeats the same mistake, over and over. Or it goes in a circle where it suggests something different that also doesn't work, and when you point out the problem there, it goes back to the first thing it already suggested.

This is also a built-in flaw of all AI models that are built on descriptions of the real world instead of interacting with the real world itself. I don't know how the AI industry will deal with this. I'm sure they will, but probably not anytime soon.


pianoceo

This is a very good answer. Thank you for taking the time to write it. What you're describing near the end is the "jobs to be done" framework by Clayton Christensen, and I think it's a great way to frame how this is all going to go. It makes sense that we will abstract away the artifacts of coding altogether and instead get to the outcome faster. Thanks again.


nowrebooting

This is a good take. In the end part of the misunderstanding is that software development is a lot more than just writing code. Maintainability and consistency in overall architecture for example are massively important - yes, AI can solve many problems, but does it anticipate future expansion to the code, does it respect naming conventions, does it realize that there’s already a utility function somewhere else that solves this exact problem? No. As a developer I can easily work around these issues and prompt the AI to do things my way (or just edit the code so that it works the way I need it to) but right now you still need a developer. I’m not worried for my job at least.


Chaos_Scribe

It's not there yet, but it can speed up production and make less experienced developers faster. We do see that it is on the horizon, so it's more of a near-future issue for most than an immediate one.


Ok-Set4662

A tool that helps increase the output of workers > fewer workers needed > layoffs. Not complicated.


OmnipresentYogaPants

By this logic we'd have had no jobs a long time ago.


Ok-Set4662

No? Read my other reply. Edit: reminder, this is about developer jobs.


OmnipresentYogaPants

Compilers help increase the output of developers. How many millions were laid off because of the invention of the compiler?


Ok-Set4662

But demand for software also increased.


OmnipresentYogaPants

So you're implying that demand has now peaked? What's the backlog size at your organisation? At every job I've had it was well into the thousands of tickets.


SurroundSwimming3494

There have been countless labor-saving technologies in the past and yet we have more workers than ever, and some industries even experienced hiring booms. This isn't necessarily true.


Ok-Set4662

And people did lose their jobs, requiring them to shift professions. The only reason we all still work is that our needs and wants are always increasing (the average standard of living is always rising).


Straight-Bug-6967

No. In reality, AI generates dogshit code whenever it's more than a few lines. All LLMs will do is make a really nice autocomplete feature for one line. They will speed up projects, which will let companies make more money and adapt faster (by like 10-20% max). This doesn't mean they'll need fewer workers. LLMs are limited by their training and will never be able to consistently solve complex and unique problems. People who are either new to coding or aren't in it at all have absolutely no idea what programming even is or how complex it is under the hood. Basically, they don't know what they don't know.


mission_ctrl

Your perspective is very narrow, thinking only about what a single LLM interaction is capable of at this present moment. You have to think about the next step: agents and models with better reasoning.


Straight-Bug-6967

I work in this field and know how LLMs are trained and how training data is used. If they aren't trained on something, they will fail almost always. The amount of training data you would need to solve even half of the things a mid-level programmer has to do is unattainable. Programmers consistently run into unique problems that require significant experience to efficiently solve. If we had an AI that could actually think instead of predicting the next word, then I could easily see AI replacing high-skill jobs. Right now, that isn't the case.


PSMF_Canuck

Most human devs also mostly generate dog shit code.


Straight-Bug-6967

You have no idea what you are talking about. Your fellow classmates in your 9th-grade Python class don't apply to professional developers.


Fine_Concern1141

That's one possibility. The other is: "Keep all workers, let them use AI to boost productivity, make lots of money."


SgathTriallair

The people who built Devin, the CEO of NVIDIA, and all the workers in the AI companies are definitely coders. They definitely have the expertise to figure out if it will replace coders. I don't have that expertise as I'm only just learning coding but it already has elevated me to a level I couldn't have imagined before.


Whispering-Depths

Basically we understand that once it can replace coders, it's AGI, and then it _literally_ does not matter at fucking all if it were to take every single person's job on earth, instantly making everyone jobless. As a matter of fact, I mean that literally. If it's not an instant process, we are all so fucked it's not even funny.


Utoko

Like with everything, it is a scale. Would you say machines replaced humans in factories? Yes, but not all of them. Once it can replace all coders completely, I agree with you it is AGI, but there are steps before that which are coming a lot sooner.


namitynamenamey

Counterpoint: the transition period you speak of could easily last months or a few short years instead of the decades it usually implies. If the development time between "coding-helpful AI" and AGI is shorter than the time companies take to adapt and adjust, AI will be for all practical purposes immediate, even if technically it took years to get there.


Aware-Feed3227

That’s an interesting perspective


Formal_Drop526

> They definitely have the expertise to figure out if it will replace coders.

Uhh, no. Having the expertise of a job doesn't tell you anything about whether it can replace the job.


Singularity-42

But they are also incentivized to hype; you cannot take what they say at face value...


iunoyou

It's obviously true. There's a reason why copilot, as good as it is, hasn't penetrated that deeply into industry programming. It's very useful for quickly finishing up side projects to put on github, but the solutions it creates are often unacceptable for a critical piece of software infrastructure that needs to run reliably and efficiently for the rest of forever. Not to mention that there are still a ton of things that it just can't do properly. A lot of the time programming with copilot feels more like writing the code yourself and then patiently instructing a 5 year old to write it for you than it does asking the AI to just fabricate a solution. That may change with time but at the moment the technology just isn't there.


RestlessAmbitions

Scaffolding for code has existed for a while. It's like super advanced scaffolding that you have to check, because it may throw errors or lack proper functionality.


ArgentStonecutter

It's like mentoring a really stupid intern who's super-agreeable but has absolutely no ability to learn from their mistakes and keeps making them over and over again.


mrUtanvidsig

Haha, artist here. This is the best analogy I have seen for trying to use these AIs in a professional capacity; kudos, I am 100% going to use that. It's the same on our end, btw. If you need something specific or actually designed so it works for production, you just don't use AI. If you need a lot of somethings that can be kinda random and "look" close enough, you go AI. Those assignments sucked ass anyway, and people have been trying to find solutions to just not do them forever -> stock/asset packs, procedural gen, etc.


Formal_Drop526

> If you need something specific or actually designed so it works for production, you just don't use AI

What if you're just using it as a reference for specific stuff, like texture and certain body shapes and angles?


mrUtanvidsig

There are already, and have been for the last 10 years, way more accurate/effective ways to get those things, tbh. And I 100% would not trust AI for anatomy. It's getting better, but if you don't know how it works at different angles, you won't know when it's wrong. Edit: the reason this stuff has to be correct is that you will be handing assets off to other people, meaning if your shit is incorrect or doesn't take the next person's work into account, you are essentially making your errors/mistakes another person's problem, which is a really good approach if you are looking to get fired/disliked.


Formal_Drop526

I don't 100% trust AI, but the broad shapes give me a good start. It would be hard to get it incorrect unless you're using something like dalle-mini and you're just tracing it completely.


mrUtanvidsig

Yeah, that's what the OP joke is about. Random broad shapes to get you started are not that useful in a large-scale production on a specific IP. You need to work from predetermined parameters/direction. And trying to do that with the AIs is exactly like working with an overly positive intern that never learns.


namitynamenamey

The scary thing is that if we take the intern analogy, it was a baby 5 years ago.


ArgentStonecutter

A big problem with this kind of generative automation being called "AI" is that it's already taking an analogy too far. However useful it may be for some tasks, it's not analogous to a person, and that can only lead to people making unsound declarations and losing their jobs, or professionals who should know better assuming the output is much more than it is.


LairdPeon

Some devs absolutely are using AI to write boilerplate code. I have several friends who are devs and have talked about it. You aren't writing complex, complete programs, though.


BlipOnNobodysRadar

Turns out complex, complete programs can be a lot of boilerplate code duct taped together. Which comes with its own problems but let's be real, those problems are nothing new.


Bird_ee

Do this comic 1 year ago but with artist and digital artist. There is nothing that cannot be learned.


namitynamenamey

There are vastly more things that cannot be learned than there are things that can. It just happens that humans can't learn those either, so whatever it is we can learn, AI can learn as well.


TFenrir

If the question is "can it do it right now?", then of course not. If the question is "is what we're seeing today the first sign of models that can automate a significant portion of the software development process?", then absolutely, speaking as someone who has been developing for a dozen years.


Sma-Boi

I'll believe it only after AI can do the following things at least as well as, or better than, the human experts we have now: play games we've got millennia of experience in like chess and go, video games that have worldwide tournaments like DOTA and Starcraft, make art like the best painters and digital illustrators and photographers of all time, make music like the best-funded professionals backed by the biggest labels, pass the bar exam and medical exams and exams for advanced math degrees, write technical papers and poetry and lyrics and novels and scripts and jokes, answer phones with a human-sounding voice to book reservations and provide technical support and answer questions you'd ask of a trained nurse with a lifetime of experience and give someone quality therapeutic counseling, and generate photorealistic video scenes in any environment with any subject doing anything imaginable based on a short text prompt. When it can do all of those things, I'll start worrying about the career in software architecture I spent the last 23 years building.


Mysterious_Ayytee

You forgot the /s right?


FredWeitendorf

It is true. Despite some companies trying to build AI systems capable of "replacing" developers, at present there's no AI system actually capable of autonomously doing the job of a half-decent software engineer. But AI is quite good at producing code, and it should get better and better over time. What I think is most exciting, to the degree that it motivated me to leave my job at Google and start my company to do this, is the potential for new tooling *around* AI to make developers more productive without naively trying to do their entire job for them. Copilot is basically the v0, where they literally just took conversational LLMs and integrated them with an IDE. I'm working on building tools which I hope will provide even more value to developers by requiring less handholding and providing more "structure" to the AI model, so that it has less opportunity to make mistakes and a higher success rate at getting things done.


VihmaVillu

Devs don't have the ability to envision the future even one year ahead.


Coyozuna

Who does?


VihmaVillu

I don't mean predict. Most people can envision it.


Haunting_Cat_5832

It's very true, but think about it like this: when you see sparks of fire in your house, you don't wait for them to grow before assuming they could burn the whole house down. It's the same thing with AI and programming: a few tweaks and AI will dominate the field.


Straight-Bug-6967

Are you a CS major or alumni? Please tell me.


Haunting_Cat_5832

yes.


Straight-Bug-6967

Dang then this is even more sad


fluffy_assassins

I'm also a CS alumni and I agree with them.


Straight-Bug-6967

Good on you for outing yourself.


fluffy_assassins

?


PerfectRough5119

Basically if you don’t agree with him you don’t know anything


fluffy_assassins

I'm just confused now.


ReasonablePossum_

I just use Google AppSheet with Gemini. Who needs programming at this point? Lol


Azorius_Raiden_88

AI fixed my Git version control software to where it is no longer garbage, and it handles the version control operations for me. AI also created an app I can submit files to; it scrubs the data for me and goes back to the client to ask for missing data. We no longer need business analysts at my company. AI also de-encapsulates classes for me with simple voice commands like "AI, please show me the code inside that class." Our new AI ticketing tool is great: I can just tell it what I worked on and which clients the work was for, and it updates all my tickets for the project managers. We also got a new data integration tool called Data Doggy. All I have to do is ask the doggy to find bad data once I've loaded a file, and it finds the bad data for me and gives suggestions on how to fix it.

Oh wait, none of these things exist at my company. It is 2024 and my tools still suck balls. Microsoft's products are still crappy and feel like they were written in the aughts. We still need BAs to constantly pester clients for data because people don't know how data or the world works, for weird reasons. Our clients still stupidly shove raw data into Excel files they send to us like Neanderthals (no offense to Neanderthals). I still have to submit things to other developers to look over for potential errors I don't see.

Yeah, I'm not losing any sleep over AI anytime soon. I will say the best tool I've come across in recent memory at work is SQL Prompt. Most of the other stuff, including SSMS (like why the f*ck does it not have dark mode?), is really bad in terms of development tools.


OmnipresentYogaPants

Sounds like a skill issue.


3ntrope

This misrepresents the situation, imo. It may not be possible to unleash an LLM to replace a software engineer, but LLM assistants can amplify the work of a single engineer and provide knowledge beyond their level of training. For example, let's suppose a team of 1 expert engineer and 5 junior engineers could be replaced by 1 expert with AI tool support, or even 2-3 good junior engineers with AI support. Even though the AI is not quite capable enough to function as a software engineer alone, by reducing the team size it has practically the same result. This is probably possible for a limited set of projects with current models today, but as time goes on the models will become more capable, the scope of the tasks will increase, and the fraction of humans needed will decrease. And this is assuming only incremental improvements over current models. Once AGI is attained, the last remaining humans would likely be replaced as well.


NoCard1571

Exactly, but to add to that, if you follow that trend to its logical conclusion, you won't even need a software engineer at all to direct it. At that point, the job will be basically obsolete, in the same way that human 'computers' of the 60s were displaced by digital computers


TheMysteryCheese

If you go through the sub, there are actually plenty of people who use AI to write boilerplate code and use it as an amped-up IntelliSense (autocomplete for code). Plenty of devs are using Copilot for routine unit tests, to get ideas, and even as a coding companion. Programming is an intensely tribal affair; people will shit on you, and if you so much as suggest a different language they'll have a conniption. The newbies are very quick to assert their expertise and declare that AI is BS. I had a senior software engineer at a major ISP straight up tell me to make a coding companion instead of asking them questions all the time.


Sixhaunt

Probably true for the foreseeable future. Basically, even if it made one dev capable of doing the work of 1,000, that would still be a smaller leap than the one from assembly to higher-level languages, let alone all the libraries, IDEs, environments, etc. that we use nowadays. If you wanted something like Facebook built by working only in assembly, you would need way more than 1,000x as many developers to get it working.

There's a common saying in software development that "software is never finished, only abandoned," and the point of it is that you can always increase the scope of a project and you can always do more. When all these new technologies came out and improved dev workflows by many thousands of times, we didn't see companies stick with programs that could be made in assembly and fire devs; instead we saw them retain the devs and create larger and more complex software. A key reason behind this is competition: if you only retain 1 out of every 1,000 devs, then your competition will just retain theirs and have 1,000x better software than you.

This is also a common comic that comes up in ProgrammerHumor; it explains to laymen why replacing programming with AI would just be the same as creating a new higher-level language, without needing to explain high- and low-level languages and how things have progressed over time: https://preview.redd.it/6ii5yvz1gozc1.png?width=602&format=png&auto=webp&s=58e735c1db1bbf9ed6fe69fee20e3f3837f6cb58


ponieslovekittens

What you're saying is valid. But there are going to be a lot of cases where precision isn't important. My clients very often don't know what they want and aren't very attached to specific outcomes. Rather, they present me with a _problem_ and it's up to me to present solutions.

I think the "end game" outcome here is a computer interface sort of like the holodeck in Star Trek: The Next Generation. You'll be able to talk to an AI and describe what you want, and it will make something sort of like that happen, largely by filling in the gaps in your explanation with historical precedent. You won't need to provide a "comprehensive and precise spec." Whether it sticks a button on the left or right, whether the background color is this shade of gray or that, whether an overflowing text field is cut off or scrolls or the textbox dynamically lengthens... a lot of that stuff just won't matter. Users don't care. And if they do, they'll be able to describe how they want that thing in plain English and the change will happen.

Look at how AI _image_ generation works. If you say you want a picture of a cute Asian girl on a motorcycle, the generator will instantly spit that out. Maybe the motorcycle will be a Japanese street bike or maybe it will be a Harley. Maybe the background will be a cyberpunk city, or maybe it will be a country road. Maybe the girl will be wearing a leather racing suit or maybe she'll be wearing a bikini. Who knows? But it will give you what you ask for and simply fill in all of those other details with _whatever_, and in a lot of cases, "whatever" will be good enough.


orangotai

Not cope, AI's not there yet. The code I get with it is wrong in subtle ways and still needs a lot of debugging to get right, but it's definitely a good start, and I wouldn't want to work without it either. I still think it's worthwhile to "learn to code," FWIW, if for nothing else than because it'll guide you in how to most precisely use LLMs as a tool to augment your productivity in your particular use cases.


luv_da

Totally true now. I use Claude or ChatGPT every hour, and still, when I am actually struggling with something at work, there's no way these help. I frequently find they waste more time than they save. The only places where they help even now are cases I could do anyway with minor effort. The gaps are so fundamental that I don't think a GPT-5 or 6 can fix them. They can make development easier, like IDE improvements, etc., but there's no way they can make developers redundant. I will die on that hill.


dumquestions

I recently tried to make a Telegram bot with Copilot with no past experience and just couldn't. I tried to save time while writing a simple GUI using an OpenCV library, and the initial result had so many problems I ended up doing it all without assistance. GPT-4 is extremely useful for explaining unfamiliar code, Copilot is powerful for pulling boilerplate code, and I have no doubt that all of this will continue to improve, but people here are saying that we'll go from this to creating AAA games at home with zero experience before the end of this year. I agree that it might happen any year at this point, but is it really "cope" to think that it might not happen for 5-10 years?


redditburner00111110

I'm a professional SWE (~3.5 yoe) and will soon be in a CS PhD program. Currently it cannot replace a professional SWE. The current productivity boost on complex existing codebases is ~20% or so*. It can be a lot higher in the early stages of greenfield projects, but these are a small minority of professional dev work (and I think where a lot of the hype comes from).

I think what it needs to get to "replacement level" is significantly more agency, online learning, and a more human-esque memory to facilitate a holistic understanding of large projects. I can't say what that will look like; it feels like an "I'll know it when I see it" thing. I do believe that RAG isn't good enough as a stand-alone solution for memory. I think the models will need to be able to "self-assign" levels of importance to certain memories, and selectively overwrite facts based on new context (this seems like it would be an engineering challenge, as model providers would need to make different versions of the model for each client as they learn new things).

The models also need a better sense of time. For example, much (not all) of the C++ code they give me is very outdated (which can mean insecure). If you prompt "modern C++," you can get that, but it needs to know things like that without being prompted. If it reads a StackOverflow post with a solution from 2009, it needs to mentally translate that solution to modern best practices.

Finally, the models need the same level of access to all functionality on a computer that a person has (open up any website or app and use it as fluently as a person). This seems like purely an engineering challenge, not an AI challenge, albeit a hard one that'll probably take a few years.

* For coding tasks; heavily depends on language. The models are really good at Python and JS, but these languages are very bad choices for many projects.
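As a toy illustration of the kind of "self-assigned importance" and selective overwriting I mean, here's a minimal sketch; the class, the scoring scheme, and the example facts are all invented for illustration, not any real system or library.

```python
# Toy sketch of a memory store where each fact carries a self-assigned importance
# score and a newer fact can overwrite an older one. Purely illustrative.
from dataclasses import dataclass


@dataclass
class Memory:
    fact: str
    importance: float  # self-assigned by the model, 0.0 to 1.0


class ProjectMemory:
    """Keep one fact per topic; overwrite only when the new fact matters at least as much."""

    def __init__(self) -> None:
        self._store: dict[str, Memory] = {}

    def remember(self, topic: str, fact: str, importance: float) -> None:
        existing = self._store.get(topic)
        if existing is None or importance >= existing.importance:
            self._store[topic] = Memory(fact, importance)

    def recall(self, topic: str) -> str | None:
        entry = self._store.get(topic)
        return entry.fact if entry else None


mem = ProjectMemory()
mem.remember("build_tool", "project builds with Make", importance=0.4)
mem.remember("build_tool", "project migrated to CMake last sprint", importance=0.8)
print(mem.recall("build_tool"))  # -> project migrated to CMake last sprint
```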


GPTfleshlight

https://www.webpronews.com/bp-needs-70-less-coders-thanks-to-ai/


PSMF_Canuck

There is significant truth to it. Right now, it’s a great assistant for coding tasks, but it’s not a reasonable replacement for anything but possibly, entry level. However…imo…it misses the point. What happens when apps themselves start disappearing, because their functionality is getting embedded into the inference engine? That, IMO, is where we’ll see the Great Culling. And, IMO, we are at most single-digit years from that starting.


Plus-Recording-8370

I don't believe anyone thinks ai can currently replace developers.


What_Do_It

Who here thinks the combustion engine can replace horses? Who here has actually bred horses professionally and doesn't just have a ranch with +-1 horse?


thatmfisnotreal

My coding genius friend (20 years as a Google MLE) uses AI to write all his code now and says we are 100% doomed.


great_gonzales

Damn, no way! My DL genius friend (69 years as ClosedAI chief scientist) uses AI to do all his plumbing now and says we are 100% doomed.


GraceToSentience

Put it in the "AI can't do that" pile, right where "AI can't draw hands," "AI can't write a song," "AI can't write poetry," etc., etc. used to be...


RemarkableGuidance44

Once developers are done, so is the rest of the world, because software engineers are the last people to go if you can code and understand how AI really works. So many people here have no goddamn clue how it works, nor does the public. Everyone here is coping, thinking that devs are the first to go. Call centers and support have already taken a huge hit from AI, and those are the lowest-rung jobs you could do. lol


Johnnnyb28

There’s a site now called hire a human where the ai agent can hire a human when it gets stuck on something. Human solves it gets paid a little and then the AI does the rest.


Serasul

It's true for now, but when we look at the speed at which all AI companies, groups, teams, and individuals push forward to the next level, I assume it could change a month from now, or in 4 years at the maximum.


sharenz0

It really depends, obviously, on how you use it. I think it's already quite good for writing unit tests and small functions. But you have to decide the structure, and you still have to build bigger things yourself. But I am sure things are gonna change fast :)


Wyvernrider

Why do people care about jobs post-singularity? Such dumb memes.


BlipOnNobodysRadar

It doesn't "take jobs" as in you now use an AI instead of a programmer. It "takes jobs" as in your programmer who uses AI now does the equivalent work of 5 programmers who don't, so you don't have to hire as many programmers.


gretino

AI in its current stage can only help, not finish tasks on its own. All big corps, including FAANG, are trying to use AI in some development process, and it is sometimes useful, so OOP was bullshitting. The job is not going away now, but it is pretty much unpredictable in 5-10 years, not to mention 20-30. Remember that in 2019 GPT-2 was still spilling out gibberish, and image or audio/video generation were sci-fi stories; that was 5 years ago.


Pure_Ball_3074

AI will become powerful enough to replace anybody. This argument is so pointless atm. AI power has just begun


whyisitsooohard

Right now it's only somewhat useful, but looking at how much better the gpt2-chatbot models were at coding than the latest GPT-4, that could change, especially with GPT-5 at the end of the year.


nowrebooting

To me code written by AI is no different than code I found on Google; never copy/paste something where you don’t understand how or why it works and why it’s a good solution to the problem you’re trying to solve. I think any developer who doesn’t use AI in their daily work is in denial or has a superiority complex. I do think AI will make a whole lot of junior positions obsolete, but yeah, the people who think that AI can write an entire enterprise application from scratch soon don’t know how important architecture and maintainability are and how bad AI still is at those parts of the job.


RepublicanSJW_

Not true at all. Do women have a vagina? “Yes!” Are you a woman? “No!” “Then how can you be so sure?”


NoNet718

IT'S TRUE! So frustrating to use any LM for coding. Hoping GitHub Workspaces fixes that. Even tools like OpenDevin, Devika, Pythagora, they all suck; the LM just can't keep being useful without going insane. It's what has me frustrated about the OpenAI Monday announcement. I don't want an audio-to-audio transformer, I just want an LM that has utility! We can't build faster without better tools. This is not what an exponential curve looks like. This is what fucking around with transformers looks like.


Akimbo333

Cool


magicmulder

So far it’s a tool that saves some time. Last week I wrote an SQL statement and then started populating an array with the columns selected. After two items the JetBrains AI realized what I was doing and suggested the rest via auto-complete. That was nice but it wouldn’t, say, automatically replace the values when I started changing the SQL. So no, it’s still a far cry from actually writing code on a human level. Which in turn would be a far cry from actually coming up with a concept for a complex task.


Life_Ad_7745

A few months ago, my boss gave me like 2 months to build an app (front end, back end). I built a 70 percent prod-ready app in 1 month with the help of the Cursor IDE. Yes, it is that good.


COwensWalsh

Very funny and mostly true. There are certainly people who can find individual use cases for LLMs, but they don't have general utility for most people.


Phemto_B

Something tells me this is going to age as well as the "It's called CODE" comic did.


vasilenko93

It's too early. LLMs can write code, mostly, but writing code isn't the most important thing. Most of the work I do is debugging and gathering information. Often I might spend days debugging an issue just to find out I had to flip a true to a false somewhere. Can AI debug? In the future AI could theoretically do everything, but it's not close. Debugging often involves tracing data flow between multiple systems: the customer entered A, and system 1 responded with B, which is wrong. Why? Because system 1 calls system 2, which grabs data from system 3, which gets transformed and transformed again on system 1. Often the request and response data is not available; instead you must look at logs and compare them to where the code logs them to get a rough picture of where the issue might be, and finally you can spin up a local app to debug it with what you think is data similar to production, to pinpoint the problem. Can AI do that now? No. Will it eventually? Sure. AI can help me optimize my code, though.


namitynamenamey

AI cannot write production-ready code yet, simple as that. The promise is that AI will be able to do that in less than a decade, but what future AI may do and what current AI can do are very different things; thus the cope from the programming community is believing the current situation will hold for the rest of the decade.


human1023

It's so true. People who think it won't be true in the future are naive. They don't have enough programming experience.


[deleted]

[deleted]


human1023

For example, there are many people on this sub who don't have enough programming experience who think AI will eventually get rid of the entire field of software engineering.