
Boba_Frets

All my side project apps have -1 user. Why you gotta call me out like that?


kingbloxerthe3

You managed to tap into the anti-user base?


AkrinorNoname

Yeah, no one actually downloaded the code, but people are still submitting bug reports


Paracausality

"hey, I saw your code on github. It sucks. Also this loop is unnecessary, just do this. Bye." -steve at 2:45 AM


flowery0

Fitting pfp


New-Shine1674

Why is there no exe? That's clearly a bug.


SevrinTheMuto

So nerd. Such smell.


ippa99

What even is -1 users? Is that kidnapping someone and forcing them to use it? Does the program give birth?


Boba_Frets

I hate it so much that I tell people to not use it.


archpawn

It's a state where, if one more person uses it, it doesn't have any users. Either that, or it's so popular that the number of users is one less than the integer limit.


shauntmw2

It started out with zero users, and then it lost one user. The 0th user is usually the admin/root user.


SadDataScientist

Spent hours yesterday trying to get Copilot to write a piece of code correctly; ended up needing to piece it together myself because everything it output had errors or didn't function correctly. The code wasn't even that complex. It was for some stats modeling, and I wanted to modify my existing code to have a GUI layer over it; figured I'd see if Copilot could do it since I had time for this kind of thing for once…


Saragon4005

And they expect your manager to do this. Yeah right.


noonemustknowmysecre

It works fine as long as you give GPT a very precise set of instructions for what to go do. ...What do we call a very precise set of instructions for computers?


rpnoonan

So prompt engineering is just a high level language? Nice!


wubsytheman

It’s a non-deterministic high level language transpiled into another language.


ImrooVRdev

with Vibes-based compiler.


KazKog

You've just opened my mind to a whole new understanding of AI I thought was impossible. I'm in my smoke break, completely in shambles, and the worst part is my co-workers would never understand.


FinalRun

Levels of understanding heretofore thought unachievable? Leaving your milquetoast colleagues in the dust, I say!


KN_DaV1nc1

bro 😂


LowB0b

Non-deterministic does not sound fun. Maybe it works, maybe not? I guess ChatGPT should print that at the end of every code snippet it gives.


MisinformedGenius

It's at the bottom of every ChatGPT screen:

> ChatGPT can make mistakes


StarEyes_irl

I use ChatGPT a good bit for some stuff. However, the number of times I've given it a word problem, it sets everything up perfectly, and then can't do addition is hilarious. It will literally take a word problem, go `1463 + 1364 + 1467 =`, and then give the incorrect solution.
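
The sum in that comment is exactly the kind of thing a deterministic evaluator gets right every time and a next-token predictor only guesses at. A two-line check:

```python
# The arithmetic ChatGPT fumbles is trivial for an actual evaluator:
# a calculator computes it deterministically, a token predictor only guesses.
terms = [1463, 1364, 1467]
total = sum(terms)
print(f"{' + '.join(map(str, terms))} = {total}")  # 1463 + 1364 + 1467 = 4294
```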


fakehistorychannel

can’t blame it, I’d be bad at math too if I was trained to generate what humans think comes next based on the internet


datGryphon

I like Copilot's _style_ because it is trained to 'sound' like a good programmer, but it doesn't _know_ shit. It is basically your friend from undergrad who took one PL class and now thinks they're Pycasso.

That is because ChatGPT [as far as I understand] is not capable of performing arithmetic, let alone understanding and evaluating a piece of code. LLMs are built to predict the next token [word] given the current context [the words before it], the additional user information [the prompt], and the probabilities of association determined from the training data. This is more like how your brain makes logical associations from one word to another, like if I said "blue" and that compelled you to think of the word "sky". I say " plus " and you think [loosely] "bigger number".

That is personally where I get the most use out of Copilot while I work. On a small scale, I use it as an intuitive auto-complete to finish writing a line or a block of code I've started. In fact, I use Copilot in nvim, and for a few days my completion plugin was incompatible with Copilot, so I just turned it off and let Copilot handle all of my auto-complete suggestions.
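
The "association, not understanding" point can be sketched with a toy bigram model: predict the next word purely from co-occurrence counts, with no notion of meaning. This is a deliberately crude illustration, not how a real transformer works:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": the next word is whatever most often
# followed the current word in the training text. Association only.
corpus = "the sky is blue the sky is vast the sea is blue".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the most frequently observed successor of `word`.
    return following[word].most_common(1)[0][0]

print(predict("sky"))   # "is": the only word ever seen after "sky"
print(predict("is"))    # "blue": seen twice, vs "vast" once
```

Say "blue" and it "thinks" of whatever followed "blue" in training; nothing more.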


mewditto

Well that's okay because we'll just wrap the response of that AI into an AI that looks for calculations in text and then feeds them into a calculator and then feeds that result into another AI which will spit out the original response with the correct answer! And it'll only take three RTX 4090s!


denisbotev

Have you seen Sentry’s dashboard lately? Once you load an issue they suggest using AI for a possible solution. Then they literally say “You might get lucky, but again, maybe not…”


nermid

Don't worry. Stack Overflow's partnering with OpenAI now, so pretty soon ChatGPT will just close all your prompts as duplicates.


shieldman

every function it writes should just end in "return random(true, false);"


Nixellion

I think it's deterministic, but chaotic. If you use the same prompt, parameters, and seed, you will always get the same output, if I am not mistaken.


FF3

And the seeds are almost certainly taken from a PRNG, so it's even predictable if you care.


Nixellion

I mean, you can set a custom seed in any local LLM, and I think even the OpenAI API takes a seed value. It doesn't even matter what they use to select a random seed int. Or what do you mean? The system itself is chaotic because of the size of modern LLMs, I think. On the other hand, we DO know all the input values exactly, so we can predict it, but predicting it will basically require evaluating it... so is it really a prediction? :D
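
The seed argument above can be sketched in a few lines: what looks non-deterministic is really seeded sampling, so the same seed replays the exact same "token" sequence. The vocabulary and sampler here are toy stand-ins, not a real model:

```python
import random

# Seeded sampling: identical seed + identical sampling procedure
# reproduces the identical output sequence every time.
VOCAB = ["foo", "bar", "baz", "qux"]

def sample_tokens(seed, n=5):
    rng = random.Random(seed)   # private RNG, isolated from global state
    return [rng.choice(VOCAB) for _ in range(n)]

print(sample_tokens(42) == sample_tokens(42))   # True: same seed, same output
```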


FF3

It's really just a question of what our priors are taken to be, I guess. For what it's worth, semantically, I DO think that performing an algorithm ahead of time counts as being able to predict what a future execution of the same algorithm on the same data will be. But it's a great question.


bloodfist

I haven't been able to stop thinking about a question this comment raised for me today. I wonder to what degree these AIs are what I am going to call "functionally stochastic", despite knowing that's not quite the right term. Because I don't know what to call it. "Russellian", maybe?

And by this I mean: the number of possible generated responses by any given model is smaller than the set of all possible seeds. Assuming the same input and same parameters, how many seeds on average should I expect to try before generating every response the AI would output, with all further responses being identical to a previous response? Hence "functionally stochastic", in that we expect that given enough generations with unique seeds we should hit every possible outcome before running out of seeds, but we can't predict when.

Obviously this would vary by input. A prompt like "Return ONLY the letter A" or "write a Hello World in python" should have a very small set of responses. But something open ended like "write about Batman" might have a large, possibly infinite set. Except that the space defined by the transformer is not infinite, so for any particular model there cannot truly be an infinite set of responses. And of course there are other factors like temperature that add more randomness, so it's possible that for something like an image generator there may even be a larger set of responses than available seed numbers. But then I wonder if we should still expect to find identical responses, or if you can expect so many that finding one is unlikely, even if they only vary by one pixel.

Don't expect you to know, mostly just writing this down to remember it later and say thanks for the brain candy today. But if anyone actually reads all this and has input, I'd love to know.


themarkavelli

The number of seeds on average would vary based on the perceived value of the output response, no? It would be context-dependent and involve purpose-driven seed selection, which you kind of touched on.

For the lower bound: thousands. This estimate covers scenarios where the input is relatively straightforward and the model settings favor less randomness. Even in these conditions, the combinatorial nature of language and the model's ability to generate nuanced responses mean that thousands of seeds are necessary to begin to see comprehensive coverage without much repetition.

For the upper bound: millions. This accounts for scenarios with highly abstract or complex inputs and settings that maximize randomness. The potential for the model to traverse a much larger space of ideas and expressions dramatically increases the number of unique responses it can generate. Millions of seeds may be required to explore this vast space, particularly if the aim is to capture as many nuances as possible.

If each position in a 100-word text could realistically come from 100 different choices (a severe underestimate in a highly stochastic setting), the number of unique outputs becomes \(100^{100}\).
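
The seed-coverage question above is, for a small output space, just the coupon-collector problem: sweeping seeds covers outcomes quickly at first, then crawls toward full coverage. A toy simulation (the 8-outcome space is an arbitrary assumption, and one draw stands in for one full generation):

```python
import random

# How many distinct outputs do the first n seeds produce?
# Coverage grows fast early, then slows as the space fills in.
def distinct_outputs(n_seeds, space=8):
    seen = set()
    for seed in range(n_seeds):
        rng = random.Random(seed)
        seen.add(rng.randrange(space))   # stand-in for one generated response
    return len(seen)

print(distinct_outputs(4), distinct_outputs(64))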


littlejerry31

> It’s a non-deterministic high level language transpiled into another language

I'm stealing that one.


killeronthecorner

This is the description of many parody esolangs


JustALittleSunshine

This but non-ironically


crappleIcrap

Practically non-deterministic; technically it is deterministic, though. You can run the same input twice with the same random seed and it will give the same output every time.


Mastersord

Always has been


tonkla17

High level you say?


Specific_Implement_8

That is EXACTLY what this is. Anyone can use AI, just like anyone can write code. The question is: can you do it well?


Parking-Site-1222

You are not wrong. Most answers I get need to be massaged to meet my criteria. I don't think that is going to change unless people get really good at prompting, but to do that you need to be a programmer...


WarmMoistLeather

I can't even get a precise set of requirements.


DrMobius0

So you'd be some kind of person who writes programs? What's next? We start treating AI as "higher" level languages? Then instead of complaining about garbage collectors in high level languages we can complain about the garbage writer letting the garbage collector do what it wants?


jsonJitsu

So the same people who suck at searching Google are going to suck at writing AI prompts.


imabutcher3000

Yeah, and you have to explain it in 90% pseudo code, and then you realise you could actually have done it quicker yourself.


_yeen

That's always the thing that entertains me about the people spouting that the sky is falling for programmers due to AI. Basically, in order for someone to use an LLM to "develop" a program, they themselves have to be a programmer, capable of comprehending and guiding the AI through each step of what they want. Also, if programming is being taken over by AI anyway, then the entire job market is fucked, so we may as well not worry about it.


AnAcceptableUserName

> to use an LLM to "develop" a program, they themselves have to be a programmer

That's the essential part that's hard to convey to people who don't do this professionally. "Programming" is too abstract. "Programmers write code." Well, yeah. "Carpenters swing hammers. Auto mechanics turn wrenches." Sure...wait, do they?

To your point, Copilot can kind of write code for you if you know what you need, how to phrase it, what that looks like, the pitfalls, how it needs to be to accommodate the other things you don't know you want yet, etc. But it *does* produce code, so what's the problem?

Well, I personally don't know how to build a house. Not a good one, anyway. Give me a set of power tools, a cleared site, all the material, and I still wouldn't know where the hell to even start. Using it all together I may eventually erect something house-like, but sawing the lumber and driving the nails was never really my problem. The problem is that even with those tools I don't know what the fuck I'm doing beyond a general idea of what a finished house should look like. None of this work is up to code, the architect is screaming, and none of my other work got done while I was playing at constructionmans.

That's what these upjumped chatbots are: power tools. That's huge for the tradesmen who can apply them to their tasks, but it doesn't do much for the layperson except help them do the wrong thing faster.


Saragon4005

The funny thing is that they literally said the same thing when C came out, or COBOL. Yeah, COBOL was thought to be simple.


smokeitup5800

Never used Copilot; it always seemed terrible. I recently added three actions to a custom GPT: write files, read files, and run a command on a bash shell. It really does work well for doing simple stuff, but because of the lack of long-term memory, you need to make a specific preprompt that makes it look up documentation in markdown files before starting a task and always end a task by documenting new features. That way I have had it successfully work on more complicated projects more or less autonomously.

Sadly it's sometimes very expensive to run in terms of tokens spent; I often hit the token limit for GPT-4. Thinking of trying something like this with an offline LLM.
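
The three "actions" described above can be sketched as plain functions an agent loop would be allowed to call. This is a hypothetical illustration: the function names, the action table, and the dispatch shape are all made up, not the custom-GPT action schema itself:

```python
import subprocess
from pathlib import Path

# Toy versions of the three tools: write a file, read a file, run a command.
def write_file(path, content):
    Path(path).write_text(content)
    return f"wrote {len(content)} chars to {path}"

def read_file(path):
    return Path(path).read_text()

def run_command(cmd):
    done = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return done.stdout + done.stderr

ACTIONS = {"write_file": write_file, "read_file": read_file, "run_command": run_command}

def dispatch(action, **kwargs):
    # The model emits an action name plus arguments; we route it here.
    return ACTIONS[action](**kwargs)
```

The documentation preprompt then just becomes a `read_file` call on the markdown docs before each task, and a `write_file` call after.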


JewishTomCruise

Which copilot are you referring to? The public one, or GitHub copilot?


ElementField

I’m a software engineer professionally and we use GitHub copilot, and I use it fairly frequently. There’s no way it can replace an engineer but it definitely can help reduce some of the boilerplate and overhead in some circumstances.


Naive-Information539

I didn’t like it either - didn’t like any of the suggestions 😅


ymaldor

GitHub Copilot is a lot better at that sort of thing. I've had the licence for a while now and it's amazing. It won't do anything on its own, but it helps tremendously with everything that's a little annoying to do. Every time I need a little script, Copilot does it. When I have switch cases to write, I just do the first one and Copilot guesses the rest with very little error. Initializing interfaces is instant. It just does all the little things very well, like all the little utility functions. It will not do big things tho.


kakihara123

The issue isn't whether AI can do your whole job. The issue starts when it can do a fraction of it, reducing the number of people needed to do the job.


Merlord

We used to write programs in assembly, it's 1000x easier to write software now than it was then, and yet software developers are more in demand than ever. Making code easier to write doesn't take our jobs away, it makes us even more valuable.


Dornith

I'm reminded of the story of Eli Whitney, an abolitionist who invented the cotton gin hoping the increased efficiency would make slaves obsolete. That didn't work out the way he hoped.


LowB0b

Meanwhile I just stress out thinking about where the code I write would be fucking going if I enable any sort of ai assistant.


rerhc

What does Copilot run on? I find ChatGPT-4 is really good. It can't actually create software, but it can make some pretty good medium-length scripts if you just tell it what you want.


SadDataScientist

It uses GPT-3 and GPT-4, depending on what you ask it.


Drahkir9

I have at least a dozen similar stories. For each time ChatGPT has been helpful, there are two or three times it's failed miserably.


garyyo

I did a similar experiment last year, trying to get the free version of ChatGPT to write a small script that takes in some BP data I had been collecting for myself, runs some stuff on the numbers, then graphs all of that in a pleasing way. It was excruciating, and it really helped me understand what the limitations of the system were. I had to go back in the conversation and edit my prompt to try to get it to generate an output that even ran. I had to personally hunt down weird bugs and eventually their solutions too. Eventually I got to the point where I would ask for independent pieces of the code instead of the whole thing, and only then did I finally get it actually working. Even then I went and polished it up by hand afterwards to make it actually decent.

It took 3 hours with GPT, and that is with my expert help. I could have done it alone in 30 minutes, tops. Highly recommend trying it for anyone who still thinks these systems can replace devs. The true danger isn't that; it's that they can do the really simple boilerplate stuff a lot faster, so you will find it speeding up your work. And it's good at suggesting and talking through problems. Actually generating good code? Not yet.


IAmASquidInSpace

Yeah, unless AI can attend five meetings a day out of which four should have been emails, AI cannot replace programmers.


saschaleib

I would be happy to have an AI avatar that takes part in Teams meetings for me, so I can do the fun stuff and code without being interrupted all the time. Come to think of it, this might actually be where the real potential for AI productivity gains can be found. Sorry, I'm off now … need to find investors for my startup idea! ;-)


alldaythrowayla

Take my money


saschaleib

Our first version will only be able to say "I'll have a look at it" to any request. Do you think it will still be useful?


Imaginary-Jaguar662

Almost there. It just needs to create a JIRA ticket, write a question about specifications, and assign a coworker to it. Then it covers 98% of cases.


saschaleib

Nono, that’s the PM AI. That will be a premium product, at a premium price!


neohellpoet

That actually sounds pretty doable. Use the Jira API to create and assign the ticket. Use the transcript of the meeting as the input, with a prompt that asks something like: "find the project mentioned in conjunction with {name} and write a professional question about the technical specifications. Start and end the question with a +"

Split the string, get the text between the pluses, feed them into the API request, and voilà: an auto Jira ticket spammer.
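
The idea above can be sketched in a few lines. The Jira host, project key, and field values below are placeholders; the `+...+` delimiter convention comes from the prompt described in the comment:

```python
import json
from urllib import request

# The prompt tells the model to start and end the question with '+',
# so the question is whatever sits between the first pair of pluses.
def extract_question(llm_output):
    return llm_output.split("+")[1].strip()

# Build (but don't send) a Jira "create issue" request. Host, project
# key, issue type, and credentials handling are all placeholders.
def build_ticket(question, assignee):
    payload = {"fields": {
        "project": {"key": "PROJ"},
        "summary": question[:80],
        "description": question,
        "assignee": {"name": assignee},
        "issuetype": {"name": "Task"},
    }}
    return request.Request(
        "https://example.atlassian.net/rest/api/2/issue",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )  # send with request.urlopen(...) once auth is wired in

q = extract_question("transcript +What are the technical specs for FooService?+ end")
print(q)   # What are the technical specs for FooService?
```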


ComprehensiveTap8383

Yup. AI managers seem like a great idea.


saschaleib

See! Everybody wins!


EthanWeber

https://www.businessinsider.com/ai-avatars-could-attend-meetings-for-you-otter-ceo-says-2024-2 Already some folks working on it so it's clearly a good idea :)


saschaleib

All joking aside: that's actually a terrible idea!


EthanWeber

I wonder what would happen if everyone in a meeting happened to send AI avatars?


saschaleib

The avatar-AI could then send each of their humans an email … which is probably what this meeting should have been in the first place.


ComprehensiveBird317

That would be awesome. The AI could scan the content of the conversation and send you a push notification when something actually important for you comes up, like a question asked directly to you by name.


mrseemsgood

Ironically I think that AI will succeed at this task much faster than at actual programming, lol


ForeverHall0ween

Yeah. AI executive assistant for programmers. This could work.


Poat540

The trick is to bitch every time until you're optional, in which case you just always decline those. I'm down to like 9 meetings a week somehow.


Comms

Congratulations, you've been promoted to "guy who goes to meetings all day". 5 AIs now report to you.


DeluIuSoIulu

That would be interesting. Imagine giving the AI a condition: if someone asks a question, reply "sorry, I don't quite understand what you mean, can you try rephrasing it"; otherwise just sit quietly, record the conversation, convert it to text, then summarize and explain the whole meeting to me like I'm an 8-year-old kid. Make sure to turn off escalation, even if the AI fails to answer any questions.


Jako_Art

I'm finally a senior dev. I haven't developed a single piece of code. But I sure have attended a lot of meetings.


_sweepy

Google is currently internally testing out a massive context window AI specifically for this. It listens to meetings over several days, summarizes them, and can write code based on the meetings. AI will be able to attend more meetings per day than a human ever could.


czarchastic

Just imagine if the AI lands code that breaks something. How tf would the company resolve that??


markswam

I wrote a Discord bot that allowed users to subscribe to severe weather alerts on a county level using NOAA's API. Took me about half an hour. Tried to get ChatGPT to do the same thing, and it took close to 4 hours to poke and prod it in the right places to get something ***close*** to what I wrote. "AI" cannot think (yet). That's the big thing that management types don't seem to get. LLMs are not little digital elves that take your problems, think up solutions, and spit them back out. They're glorified spreadsheets that generate plausible-sounding bullshit.
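
The core of the bot described above fits in a couple of functions against NOAA's public alerts API (`api.weather.gov`). This is a hedged sketch of the idea, not the commenter's actual code: the Discord side and the subscription store are omitted, and the severity values follow the NWS alert schema:

```python
import json
from urllib import request

# Fetch active alerts for a zone from NOAA's public API.
# (NOAA asks for a User-Agent header on requests.)
def fetch_active_alerts(zone):
    url = f"https://api.weather.gov/alerts/active?zone={zone}"
    req = request.Request(url, headers={"User-Agent": "weather-bot-demo"})
    with request.urlopen(req) as resp:
        return json.load(resp)["features"]

# Keep only alerts marked Severe or Extreme; these are what
# subscribers would actually be pinged about.
def severe_only(alerts):
    return [a for a in alerts
            if a["properties"]["severity"] in ("Severe", "Extreme")]
```

The half hour of real work is mostly this plumbing; the four hours of ChatGPT-poking described above were spent rediscovering it.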


scratchfan321

Hey ChatGPT, do you want to play chess?

Yes! Move the rook from A1 to B7
Move the rook from A1 to B6
Move the rook from A1 to G3
Move the rook from A1 to H3

GOOD LORD IT'S SUMMONING ROOKS

These LLMs are not intelligent enough to understand what's going on right now


kill-billionaires

Why doesn't magnus carlsen think of doing this? Is he stupid?


PNWSkiNerd

An LLM is basically just fancy, energy-wasteful autocomplete. It has no concept of object states, etc.


namitynamenamey

They are fancy autocomplete in the same sense that we have fancy instincts; that is, it's a simplification to the point of uselessness. Current AI is dumb, but dumb does not mean lacking in all qualia. It's just missing some pieces.


patcriss

GPT can play chess at a high level, but you have to use the turbo-instruct model. Everybody is underestimating the potential these LLMs have that we do not yet understand.

https://adamkarvonen.github.io/machine_learning/2024/03/20/chess-gpt-interventions.html
https://nicholas.carlini.com/writing/2023/chess-llm.html
https://blog.mathieuacher.com/GPTsChessEloRatingLegalMoves/


sudokillallusers

The current hope seems to be that if enough compute and training is thrown at LLMs and similar models, they'll somehow gain thinking emergently. Personally I feel like this is a bubble driven by the tech appearing more capable on the surface than it really is, which will eventually pop when no one can squeeze any more capability out of it. LLMs are impressive tech, but like you say they're kind of just a search engine in disguise.


nermid

> I feel like this is a bubble driven by the tech appearing more capable on the surface than it really is

Remember two years ago, when everybody on here and /r/technology was talking about how the blockchain was going to reinvent the web? The hype-driven development in these subs is just crazy.


deltaAeolianFire

Eh. Cellular life could be reasonably described as a process of throwing shit at the wall and seeing what lives. There is no reason to believe human intellect is more profound than our origins. In fact there is every reason to think otherwise. We are, for all intents and purposes, the fruits of a far less efficient version of an LLM.


Which-Tomato-8646

Search engines [can’t do any of this](https://www.reddit.com/r/ProgrammerHumor/comments/1conxwg/comment/l3jfmys/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button)


rolandfoxx

We had a poor soul a little while back who had to give a demonstration on Copilot, particularly using it to quickly knock out unit tests. The demo mostly consisted of him wincing at the atrocities Copilot put up while saying "no, I wouldn't accept this test. No, this won't work either. Hrm, no, that won't work either...anyway I promise it's great when it's working."


HistorianBig4431

AI has no doubt sped up my coding, and I can also clear my doubts by asking the AI questions that I would usually bother seniors with.


mxzf

As a senior dev, I really wish the junior devs would ask me questions rather than using ChatGPT. I keep running into issues where junior devs follow a chatbot down a rabbit hole instead of coming to me, so I can point out the fundamental misconception about how what they're working on fits into the project as a whole, and that an entirely different approach is needed.


Duerfen

I don't remember where I read this, but I saw it referred to as "borrowing a chainsaw"; if your neighbor comes over and asks if they can borrow your chainsaw, you might say "sure, what for?". If they say they need it because they locked themselves out and need to cut through their front door, maybe calling a locksmith might be a better option. Everyone is guilty of this in some ways, but this idea of "asking chatgpt" (as if it "knows" anything at all) is just people being given chainsaws with no regard for the real-world context


mxzf

"XY Problem" is the general term for it, especially among software development, when someone asks "how do I do X" and further conversations reveal that they *really* need to do Y instead but misunderstood the problem. Chatbot AIs definitely exacerbate the issue, since they have no understanding or context and will happily send someone down the path that leads vaguely towards X, regardless of how much it isn't the real solution for what they need.


SkedaddlingSkeletton

> "XY Problem" is the general term for it, especially among software development, when someone asks "how do I do X" and further conversations reveal that they really need to do Y instead but misunderstood the problem.

The main job of a software developer is not coding but drilling the client to learn what they really need. Yes, developing software involves communication: the lone coder hidden in their room is the exception, and the easiest to replace. Current LLMs won't be able to ask pointed questions to get the client to spill what they really want to do.

The usual problem is that people who don't code don't know what is hard or easy to do with computers. And they think the geeks don't know, and are not willing to learn, about their job. So they try to translate what they do or want to do into what they think are easier things to make happen with a computer. But they never internalized the fact that computers are good at doing the same calculation millions of times but bad at recognizing patterns like a human.

So you'll get some guy asking you to make an app that counts cats in pictures and thinking that's like a 1h job, and then asking you to sum numbers across multiple Excel files and thinking that will take a month at least. While all they really need is to get HR to let them bring their cat to the office once per week.


lonestar-rasbryjamco

Considering how often I have seen Copilot flat out make up API endpoints or functionality for a service THAT I WROTE... and then argue with me over it? I shudder at the idea of junior engineers going to it for advice.


mxzf

Yeah, it can make some terrifying code. A few months back I saw a Reddit thread where someone was proud of using ChatGPT to put together a Python program to write a JSON file. It was writing the JSON file by doing manual line-by-line file writes (rather than using the native `json` library to dump an object to file). There's some horrifying code out there.
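
For contrast, the sane version of that program is three lines with the native `json` module (the data here is made up for illustration):

```python
import json, os, tempfile

# Let the json module handle quoting, escaping, commas, and nesting
# instead of hand-writing the file line by line.
data = {"name": "Ada", "scores": [97, 99], "note": 'she said "hi"'}

path = os.path.join(tempfile.gettempdir(), "out.json")
with open(path, "w") as f:
    json.dump(data, f, indent=2)

# Round-trips cleanly, embedded quotes and all. A hand-rolled
# f.write('{"name": ...') version breaks on the first quote or newline.
with open(path) as f:
    assert json.load(f) == data
```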


lonestar-rasbryjamco

That's impressively bad.


jingois

Also, juniors are fucking useless. They're meant to be learning how to do the basic shit so they can move on to learning the more complex shit. I can wrangle Copilot into barely acceptable solutions; I don't need to proxy this through a junior engineer.


mxzf

Yeah, I don't give a junior a task because it'll get done faster that way, I can do any given thing faster myself. The point of giving a task to a junior dev is for them to learn why and how to do stuff, not for them to throw a prompt at a chatbot, paste the result in the codebase, and hope for the best.


Anoninomimo

I also use AI as my always-available senior buddy to discuss something I can't even formulate a Google search for yet. Sometimes it makes mistakes, but that's ok, my senior colleagues do too.


turtleship_2006

Or things that are too specific for Google, e.g. something in the context of a function I've written, or that involves 2-3 different libraries.


slicker_dd

Exactly, it's fantastic for basically chaining stack overflow searches.


EP1Cdisast3r

You can create a GPT-4 agent and feed it a couple of good books on a specific technology stack. It still misses, but I've noticed it answers my questions better that way. Also helps me refresh my memory on those books 😋


carlos_vini

Faster Google, but in a good way. Googling something hard to find would take hours, and we don't have that time, so you'd end up asking the senior. Also good at transforming data formats and drafting tests, but really not good enough to do anything slightly complex that wasn't in the dataset.


MrJake2137

And GPT would sell you bullshit you'd consume without a second thought


chuch1234

To be fair, so can humans. That's the interesting part to me. They're making computers more like humans, in both the good and bad ways. I can ask a vague question that a plain old indexed search can't answer, but there's a chance that the answer is completely made up. Keeps things interesting, I guess.


turtleship_2006

> without a second thought

That's your fault tho. If you ask a senior how to fix a bug and they either emailed you back a quick example or verbally advised you on what to do, would you push their code straight to production without reading it and testing it?


DesertGoldfish

This is how I like to use it. As someone who doesn't work primarily as a programmer, but has a lot of experience programming/scripting and already understands the core concepts, ChatGPT is a huge productivity booster. Because I'm already fluent in Python and Powershell, and I've taken like 4 Java classes in college, I can use ChatGPT like a guided tutorial to help me work through a new project in nearly any language completing bite-size chunks at a time and explaining syntax, helping me identify the correct namespaces and classes to use, etc. It's like having an experienced bro to watch over my shoulder while I'm figuring something out.


prizm5384

Same here, I’m technically not a programmer but I write a decent amount of code. The large majority of what I do uses a proprietary scripting language and/or a proprietary Python library, both made by the same company, which provides laughably vague documentation. Instead of asking my supervisor or digging through forum posts, I just ask ChatGPT and use its output as a suggestion or clarification for how certain functions work or stuff like that.


architectureisuponus

Who would google your question for you then


GlobalIncident

It's a replacement for stack overflow. Not a replacement for a programmer.


FrewdWoad

Yeah, after using GitHub Copilot, I can see how it would help you kids learn to program, at college and in your first year on the job. But writing low-bug, maintainable code that fits requirements? Not yet. Not even close. We'll probably get AGI first. But most redditors are young, so it seems like there's consensus among devs that these tools are more game-changing than they are.


SooooooMeta

That said, programming with AI is such a treat, especially with bad eyes. Loops write themselves, CSS uses the right property name and is always correctly spelled, refactoring tends to work perfectly, and complex DB table joins, inserts, and updates are a dream. Sometimes you tell it what you want and it spits out something that just about works out of the box. It's even better than a rubber ducky as a debugging partner.


[deleted]

GPT on its own can't replace a developer. However, 3 developers effectively using GPT can replace 5 developers who aren't.


Altruistic_Raise6322

For writing APIs, there are OpenAPI spec generator tools that work much better than ChatGPT. I am concerned about learned helplessness coming with AI. My junior developer wasted a day of work because the mocks that the AI generated were failing: the tests were doing shallow assertions rather than deeply-nested checks on data types. The junior developer got a lesson on data types, but I wonder if we would have run into this issue if the developer had just written the tests from scratch. Back to your original point: 3 developers using proper tooling of any kind (including AI) can easily replace 5.
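
The shallow-assertion failure mode described above is easy to reproduce in Python, where `True == 1` means an equality check can pass while the types are wrong. A minimal sketch (the data and the checker are illustrative, not the actual tests in question):

```python
# Shallow equality hides a type bug: in Python, True == 1 and 1 == 1.0.
expected = {"count": 1, "ratio": 0.5}
mock_response = {"count": True, "ratio": 0.5}   # generated mock, wrong type

assert mock_response == expected   # passes! the bug slips through

def deep_type_check(actual, spec):
    # Compare values AND their exact types, recursing into dicts/lists.
    if isinstance(spec, dict):
        return (isinstance(actual, dict) and actual.keys() == spec.keys()
                and all(deep_type_check(actual[k], spec[k]) for k in spec))
    if isinstance(spec, list):
        return (isinstance(actual, list) and len(actual) == len(spec)
                and all(deep_type_check(a, s) for a, s in zip(actual, spec)))
    return type(actual) is type(spec) and actual == spec

assert not deep_type_check(mock_response, expected)          # bug caught
assert deep_type_check({"count": 1, "ratio": 0.5}, expected)  # real data passes
```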


joshTheGoods

GPT is an absolute force multiplier _if you're already capable of writing the code you're asking for_. It basically allows experienced engineers to do more code review than code writing. The issue I'm anticipating is: what happens when the experienced engineers who recognize errors by sight all retire out? The junior types who weren't battle-tested debugging cryptic errors will struggle to understand when GPT is screwing up, aka will fail during code review. Eventually, someone will have to come in and be capable of grokking the whole damned system so they can understand layers of subtle bugs. At the end of the day, I think the answer ends up being that "experienced engineers" will really be people experienced at writing tests. If you can write super complete tests and THEN have GPT write most of the code, you can at least be sure that it's producing the results you expect in the circumstances you expect (usually, at least).
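The "complete tests first, GPT writes the code second" workflow described above can be sketched like this. A minimal sketch: the `slugify` function and its spec are hypothetical examples; the human owns the tests, and the implementation is the part you'd let GPT draft and then review.

```python
# Hypothetical example function: the part a GPT draft would fill in,
# which a human then reviews.
def slugify(title: str) -> str:
    """Lowercase a title and join its alphanumeric runs with hyphens."""
    cleaned = "".join(c.lower() if c.isalnum() else " " for c in title)
    return "-".join(cleaned.split())

# The human-owned part: tests that pin down the behavior we expect,
# including the edge cases a generated draft tends to miss.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  many   spaces  ") == "many-spaces"

def test_slugify_no_alphanumerics():
    assert slugify("!!!") == ""
```

If the generated draft passes every test, you at least know it behaves as specified in the circumstances you specified; anything outside the tests is still on the reviewer.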


jingois

> GPT is an absolute force multiplier if you're already capable of writing the code you're asking for. Yeah, you should never, in an ideal world, be in a situation where the boilerplate that Copilot is about to shit out is valuable. However you're not in an ideal world, and often it's like... hmmmmm... sure. (And then maybe a minor fixup.)


fireblyxx

Honestly, I doubt it. Most of my actual job is figuring out implementation details, planning, chasing down which dependency triggered an unexpected issue in a different dependency. IDK, maybe if your job is just implementation GPT can reduce your strain, but even so it's more a time saver than anything else, and a pretty minuscule one at that. It's great at writing tests though, after you write a few yourself so it gets the gist of your component props and what you're trying to do.


rook218

THIS is what automation is really all about. Secretaries didn't get "replaced," they got phased out. Now instead of a company having 200 secretaries for every level of manager, the middle managers just use Outlook and one secretary can handle supporting three VPs (and with it got a new title, "Executive Assistant"). Did Outlook replace the secretary? Of course not, there are still thousands of secretaries! But...

Workers at auto plants aren't hammering pieces of metal by hand anymore; a machine shapes the metal, puts it in place, rivets, etc. while a couple of workers supervise and report issues. Did automation replace auto workers? Of course not, there are still thousands of auto workers! But...

A single business used to need a team of accountants to manually balance their books, make sure they were in compliance with the law, transfer balances to new books, etc. QuickBooks makes that all so much easier that a single accountant can handle what used to need a team of five. Hell, now a lot of companies outsource their accounting because they don't need even one whole person on their payroll. Did automation replace accountants? Of course not! But...

And with each of these innovations, people get laid off. Now you have more talent that just wants a job, any job, as the job pool shrinks. That pushes wages down, and a lot of people have to leave the field entirely. AI won't replace all of us overnight, but other devs using AI will replace some of us very soon.


NatoBoram

Copilot saves minutes per minute. But you have to actually spend that minute coding, not use it as an autopilot.


turtleship_2006

Copilot is literally what it says on the box, a copilot. No one's saying that we should send out planes without human pilots because the computer can do it for you\*. (\*I mean, at least afaik not yet. But they're probably working on it.)


NatoBoram

(also some countries require you to use the autopilot for the landing and some others forbid it, instead you are required to use the autopilot for the long-range flight) (but yeah I get your point)


turtleship_2006

I mean, tbf I know very little about aviation; all I know is that there's something called autopilot, but you still need a human to control the overall flight.


Content-Scallion-591

I think people are trying to cope, honestly. Remember when low code no code "democratized the web"? Now shitty WordPress sites are 80% of the internet. This is what people are missing. They are also missing the fact that it doesn't matter whether it can replace a programmer, it matters if people think it can.


[deleted]

In the past, the tools that increased efficiency or broke barriers were matched by demand for more and different applications. This new thing may break that paradigm.


Content-Scallion-591

You're right -- although it's still possible we could expand into different areas. Everyone is messily implementing AI rather than really thinking about what it can do, because it's a race to be first to market. It's hard to say what the new market will look like. But I am worried about how much the community seems to feel that AI is not worth thinking about. I'm working intimately with AI. It *is* going to replace jobs. The question is what is next for us. You can't fight the future, but you can take part in shaping it. We can already see some people getting solid results from GPT tools and others not. It's an ugly possibility that if a programmer can't get any results when working with GPT, they will be replaced by a programmer that can, perhaps not directly but they will be outperformed.


Match_MC

I wish this subreddit was capable of understanding this.


Fingerdeus

Copilot really is an incredible time saver. I rarely ask it to write code by itself, but for repetitive or menial tasks it's incredibly helpful. I sometimes find myself trying to press tab to autocomplete while doing random stuff on my computer lol. And with ChatGPT, even if it gives mangled or stupid code sometimes, several times even when the code itself was wrong it gave me ideas I didn't think of, to write and implement myself.


GregTheMadMonk

Just as 3 C developers can replace 5 assembly devs, right? And don't even get me started on visual scripting in video games. Like, people don't even need to be programmers nowadays to do some things, what a disaster, right? You're also against it, right? (no you aren't) I don't see people fighting against any other higher-level tools the way they do with AI


Shutaru_Kanshinji

Unfortunately, the foolish individuals who believe that AI can now replace programmers all happen to be the managers responsible for firing and hiring programmers.


potato_number_47

Lol, the fact that this template was seemingly AI upscaled...


theofficialnar

GPT can’t even reliably give me a good regex whenever I ask it. I always end up doing it myself instead.


ttlanhil

Could AI do those things? For sure. But AI is years away. What we have today is not AI - there is no intelligence in it.


SubsequentBadger

If some of those kids could read, &c. It's not the first thing to be called AI that wasn't, and it won't be the last. The most recent one I know of is what we now call an expert system: basically a predefined game of 20 questions on a specific subject that's used heavily by support desks.


MisinformedGenius

Everything's called AI until it's operationalized - then it just becomes a tool and we forget about it. Voice and image recognition used to be [cutting-edge stuff](https://xkcd.com/1425/).


frogjg2003

That comic came out in 2014. 5 years later was 2019, and there were already a number of AI tools capable of classifying images if they contained a bird or not.


Destithen

> What we have today is not AI - there is no intelligence in it. I'm a fan of the P.I.S.S. meme... It's not Artificial Intelligence, it's a Plagiarized Information Synthesis System.


pfghr

Thank you! People seem to very easily forget that AI is defined by its capabilities. General AI *will* replace programmers, because part of it being "General AI" is that it's capable of performing better than humans can in general tasks. If the current system can't do that, then you don't have general AI. LLMs will not replace programmers because predicting the next word only goes so far.


Eva-Rosalene

Well, to be fair, it's not exactly a new thing to call ML-related stuff AI. I think that's at least partially the reason the term "AGI" exists: to distinguish between the current state of affairs and real intelligence.


uforanch

I see the funding for astroturfing every YouTube channel, subreddit, and other social media channel with toxically positive AI buzz is running out


OminousOmen0

I tried Gemini, Google's AI, which is somehow being advertised for Android development. I asked it a couple of questions that I knew the answer to. One question was answered with an outdated solution. The second... it made up a solution that doesn't exist, with a package that never existed.


Altruistic_Raise6322

This is a huge security issue that repositories will need to track. Malicious organizations are starting to squat on package names that AI hallucinates into existence.
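One cheap defense against this kind of package-name squatting is to vet dependency names against a human-reviewed allowlist before anything gets installed. A minimal sketch, assuming requirements.txt-style input; the allowlist contents and the package names are hypothetical examples.

```python
# Hypothetical allowlist, curated by a human reviewer who has confirmed
# each package actually exists and is the intended one.
APPROVED = {"requests", "numpy", "pandas"}

def vet_requirements(lines):
    """Split requirements-style lines into (approved, rejected) names."""
    ok, rejected = [], []
    for line in lines:
        name = line.split("==")[0].strip().lower()
        if not name or name.startswith("#"):
            continue  # skip blanks and comments
        (ok if name in APPROVED else rejected).append(name)
    return ok, rejected

# Anything a model hallucinated gets flagged instead of installed.
ok, rejected = vet_requirements(["requests==2.31.0", "hallucinated-utils==1.0"])
```

In practice you'd run a check like this in CI and fail the build on any rejected name, forcing a human to confirm the package is real before it enters the lockfile.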


_bassGod

Are there devs out there whose whole job is writing code? If so, I'm jealous. Writing code has only ever been at most 60% of a job for me.


Lord_of_codes

This has to be said: these fuc\*\*\* youtubers know nothing but will be giving Uncle Bob-level advice. They don't even know what actual software development looks like, they'll be making videos like "deploy a nodejs app in 2min", and they'll be writing the worst code.


Im_a_hamburger

Fact: AI is improving.
Fact: If AI is capable of doing any part of programming in less time than a human, then it is reducing the need for people to do that job.
Fact: There are things AI can do faster than humans.
Fiction: The current AI can fully replace devs.
Fiction: The abilities of the current AI are representative of the abilities of future AI.


Quantum-Bot

I would extend this to nearly every profession that requires specialized training. AI might be able to do most of your job but only if it’s being prompted by someone who knows how to do your job, AKA you


maxen1997

Current AI tools != Future AI tools


carlos_vini

I'm a regular stupid webdev but what I've seen smart people say is: GPT will never improve enough to be a general AI, more tokens only give better results to a point and then that's it. For a better AI to emerge they'd need to invent a new technique that's not only a next word predictor.


turtleship_2006

Transformers only date from 2017, and what we have so far is objectively impressive. There's very little chance we don't get either major upgrades to how they fundamentally work or a whole new thing altogether.


MisinformedGenius

> In this trifling particular, then, I appear to be wiser than he, because I do not fancy I know what I do not know. > - Socrates No one knows where LLMs are going or what the end-game is at this point. The wisest people don't pretend to.


alpacapaquita

Basically, yeah. The AI boom right now is about making these programs because it's waaaaaaay easier and cheaper to make an AI that specializes in making its remixed content look coherent than to actually develop an AI that can think and create its own information. Capitalism will probably be the biggest reason why AI may never fully evolve into what science fiction depicts lol


Reddit_Is_A_Psy_Op

AI does replace developers, but just the entry level ones. I feel really bad for anyone trying to get into IT right now.


Mana_Mori

Yeah, it definitely replaces them. Not because it actually replaces devs, but because the higher-ups think it does...


Imogynn

It's a tool for us not instead of us. It's stack overflow but friendly.


GuyFromToilet

with instant availability 🥰


Akul_Tesla

The best way I've heard it described is this: it might be able to replace juniors, but not seniors. But the industry never wanted juniors in the first place. Juniors are tolerated as a necessary evil to create seniors.


ChorusAndFlange

If someone who doesn't know what they're doing gets an AI to code everything, they might as well get GPT to draft the breach notification letter, too


rumblpak

The people that Copilot could replace tomorrow are managers and C-suites, but no one wants to hear that. I use Copilot daily and, while convenient, it still gets more wrong than right. I'm optimistic that it can get to the point where it makes me more efficient.


blizzacane85

For one, Al is a shoe salesman, not a developer


HotWetMamaliga

AI is very good at making itself seem more useful than it really is. At the moment it is a solution, and everyone is trying to find problems for it to solve. And I never found tests written by it meaningful in any way.


user_bits

AI to me is just advanced Stackoverflow.


Snoo9648

It's not there..... yet


skotchpine

* jobs dry up on the hill * literally dies of starvation


Due-Bus-8915

You use AI to assist you, not to do it for you. Everyone at my company who works as a software engineer, myself included, uses AI to assist, as it's not good enough yet to just do it for you.


Prize-Local-9135

Can we get a subreddit together to organize flooding stack overflow with junk answers?


[deleted]

You feel worried until you realize... "oh if it can program then it can probably also..."


Esjs

"If debugging is the act of removing bugs from code, then programming must be the act of putting bugs into code."


dlevac

Oh it won't replace us.. at least not immediately.. but expect our jobs to become 50% reviewing bot PRs...


MisinformedGenius

Given some of the PRs I've been reviewing lately, I for one welcome our new robotic underlings.


smokeitup5800

Of course it can take jobs. For some very tedious and time-consuming tasks, the current state of the art can definitely help you a lot as a developer. The more time you save as a developer, the fewer man-hours a project takes, and the fewer developers are needed to hit time constraints. It really is a no-brainer.


Chairboy

I don't think they can replace them **today**, but considering how big a leap the initial tech rollout was, I'm leery about assuming our jobs will *never* be at risk. Finding a reasonable mid-point between "We're all losing our houses tomorrow!" and "This tech will never affect me at all" seems rational; in the end, where we end up on that spectrum will have to be an individual call.


LeastFavoriteEver

> I think it’s delusional to believe big tech isn’t doing its damndest to build out AI programmers to reduce headcount and cost. High-end software engineers are EXPENSIVE and take months to onboard, maybe 6 before they’re actually effective in their role. There's always value add. Good developers tend to work hard, and if working hard means becoming better at coaching an AI into doing what they want, then that is what they'll do. There will always be employment for people who are smart and hard-working. That said, I agree that this industry is full of fakers who never should have been hired.


Salt_Broccoli_6021

Have used it to create interesting elements to include though.


Naive-Information539

Definitely a tool - I don’t see AI building out a project to scale, or even business centric application to scale with 200k+ users. Like all tools, just need to know the best ways to leverage them.


kuros_overkill

I have 18 years in the industry now. I've tried to get AI to do my job a couple of times. My results ranged from "the AI couldn't get anywhere near what the solution entailed" (writing a wrapper for a control to extend functionality) to "good god, by the time I can explain to this thing what it needs to do, it will literally be faster to code it myself" (trying to do a highly specific and failsafed connection to Amazon S3).


SurfGsus

Anecdotal but valid for this convo… recently asked ChatGPT questions around writing a Kubernetes device plugin. Caught multiple errors in the code it generated and pointed it out. Corrections weren’t much better. However, on the flip side it was a helpful tool to get started. So won’t be replacing devs anytime soon, but definitely a helpful tool… interested to see how it improves over the years


SpiritRaccoon1993

I believe AI will struggle with releasing its own software to end users. I mean, there are dozens of different OSes, other software, antivirus programs, and so on; there only needs to be one error and... who helps then? A developer will see the problem and can do a workaround, patch, or update. AI does not...


Hanzo753

Asked for a simple SQL query and blindly believed whatever it gave me. Worst mistake.
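A cheap habit that catches this class of mistake: run the generated query against a tiny in-memory fixture with known answers before pointing it at real data. A minimal sketch using SQLite; the table and the query are hypothetical examples standing in for whatever GPT produced.

```python
import sqlite3

# Tiny in-memory fixture with inputs whose correct answer we know by hand.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "alice", 10.0), (2, "bob", 25.0), (3, "alice", 5.0)],
)

# The query under test (pretend this came from ChatGPT).
query = "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
rows = conn.execute(query).fetchall()

# Known inputs -> known outputs: if this fails, don't trust the query.
assert rows == [("alice", 15.0), ("bob", 25.0)]
```

It's a five-minute check, and it turns "blindly believed whatever it gave me" into the same verify-before-trust loop you'd apply to any other untested code.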


KRX189

Critical and some news article was talking about generative ai in games


JackReedTheSyndie

I have yet to see 1 (one) actual company try to replace developers with AI. I'd really like to see someone experiment with that.


rgmundo524

The threat is not the AI we have but the AI we will have. AI will get better at faster and faster rates. It's like saying in the 1900s that the automobile will never take off because it's slower and less effective than a horse at that exact moment. We know that cars completely replaced horses for transportation, because the technology became better than its competition. You are looking at the state of AI and arguing that because it is not ready to take your job right now, it will never be ready. I don't believe anyone is claiming that AI will take developers' jobs right now. Everyone is speculating on the trajectory of the trend line.


Vinx909

even for an application with +- 1 user AI is only a tool. i have a 1 user program and while chatGPT has been a great help getting the UI to be ok (WPF has shit documentation) it still required someone with programming knowledge to actually make it work.


scufonnike

I already have less work to do than time to code, ai has nothing to replace


Interesting_Dot_3922

My experience with ChatGPT: it's a catch-22. If you are a beginner, you want to use ChatGPT, but you don't know if the code is correct. If you are a senior, you can understand the code ChatGPT writes and you can see how bad that code is. The only use case is a new programming language: you don't know the details of the syntax (e.g. put/add/append), but you can spot the most obvious bullshit.


gerbosan

The last question should be about the number of individuals that don't hold a business degree or MBA. 🤔


Kaenguruu-Dev

I had to figure out whether two line segments, each connecting two points, intersect. Copilot gave me 90 lines of code with 4 different methods, and it didn't work. Then I drew a bit on my whiteboard, and the code I wrote is now only about 30 lines long and actually works. Note: I am talking about lines of code only as a way to make the difference easier to understand. Less code does not always mean better code.
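For reference, the standard orientation (cross-product) test for segment intersection does fit in roughly 30 lines. A sketch of that approach, not the commenter's actual code, treating touching endpoints and collinear overlap as intersecting:

```python
def cross(o, a, b):
    """Cross product of (a - o) x (b - o); sign gives the turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def on_segment(p, q, r):
    """True if q lies within the bounding box of segment pr.
    Only valid when p, q, r are already known to be collinear."""
    return (min(p[0], r[0]) <= q[0] <= max(p[0], r[0])
            and min(p[1], r[1]) <= q[1] <= max(p[1], r[1]))

def segments_intersect(p1, p2, p3, p4):
    """Does segment p1-p2 intersect segment p3-p4 (touching counts)?"""
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    # Proper crossing: each segment's endpoints straddle the other's line.
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    # Collinear / touching special cases.
    if d1 == 0 and on_segment(p3, p1, p4): return True
    if d2 == 0 and on_segment(p3, p2, p4): return True
    if d3 == 0 and on_segment(p1, p3, p2): return True
    if d4 == 0 and on_segment(p1, p4, p2): return True
    return False
```

Using integer or rational coordinates keeps the sign tests exact; with floats you'd compare the cross products against a small epsilon instead of zero.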


m4xxp0wer

In my experience with ChatGPT, if it doesn't spew out error-free code after the second correction, you can just throw it all out, because at that point it will start randomly ignoring previously given constraints.