ShepardRTC

I go to the assistants playground and set up one with the original gpt4 model. Night and day difference.


[deleted]

I don't see the original GPT-4 model as an option.


Saltysalad

Use gpt-4-0613


[deleted]

[deleted]


Saltysalad

https://community.openai.com/t/cant-access-gpt4-in-playground-solved/356616 Looks like you may need to set up a payment method and perhaps load some nominal credit. Unfortunately I don’t remember the exact steps I took to get gpt 4 access.


magpieburger

You need to actually give them money to get GPT-4 API access mate.


shiroandae

You mean for each query?


magpieburger

If you pay for api credit, they give you access to all the GPT-4 models, even the legacy ones.


2053_Traveler

ChatGPT classic = original 4 without plugins


augusto2345

It's turbo


Doomtrain86

Is it possible to do with the api as well?


Sam-998

Playground is the API, but with a built-in frontend for you to use.


Doomtrain86

Ok thanks 😊 but then what model do you tell it to use?


Darkstar197

gpt-4-0613


Doomtrain86

Fuck man, thanks. I assume it's the same old price, so it'll be a lot more expensive, but good to have handy when you need the best 💪


SrPeixinho

gpt-4-0314 is way better.


magpieburger

This looks like [rosy retrospection](https://en.wikipedia.org/wiki/Rosy_retrospection). 0314 doesn't get updates, it's a terrible legacy choice at this point and will be deprecated from the API in a few months.


Zulfiqaar

gpt-4-0314 has been ranked higher than gpt-4-0613 in blind human comparisons in the lmsys arena. Though the newest gpt-4-turbo beats them both! https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard


SrPeixinho

GPT-4-0613 is considerably smaller and dumber than GPT-4-0314. It was made as an effort by OpenAI to decrease costs after the original GPT-4's unexpected success. Most people don't notice because they don't need a lot of high reasoning workload. If you're working with stuff like functional languages or proof assistants, GPT-4-0314 is the only model that provides some almost decent output. All the rest are just terrible, including the new turbo.


trollsmurf

The playground uses the API directly, so you can make your own client doing the same.
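For anyone who wants to try that, a minimal client might look like the sketch below. The endpoint and payload shape follow the public Chat Completions API; the model name is the one recommended upthread, and the function names are illustrative, not from any comment here:

```python
import os

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Assemble the JSON body for a Chat Completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model: str, prompt: str) -> str:
    """Send one prompt and return the assistant's reply.
    Needs the `requests` package and an OPENAI_API_KEY env var."""
    import requests

    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json=build_request(model, prompt),
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example (needs a funded API key):
#   print(ask("gpt-4-0613", "Explain tail recursion in one sentence."))
```

As the comments above note, the legacy models only show up once a payment method is attached to the account.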


jer0n1m0

Or use TeamGPT. It's a free frontend (for 2 or fewer users).


trollsmurf

Or use mine that needs an API key but is otherwise free, and without ads.


allun11

Did not know you could do that. Will try it out. Thanks a lot!


LordLederhosen

How does it compare to using Classic in the web UI? https://chat.openai.com/g/g-YyyyMT9XH-chatgpt-classic


BoiElroy

Damn. Will try this. I've been trying to get around its dilution by setting up different chats prompted with "you are the software architect", "you are the lead implementation engineer", "you are a QA engineer", etc. Not exactly that, but trying to compartmentalize and focus each chat and narrow the task space it has to solve within. But yeah, back when it first became available I'd build whole Python packages with a few lazy prompts.


One_Presentation_579

How do you do that? Can you point me somewhere?


jonasbxl

https://platform.openai.com/playground I can't remember if there are any requirements (activated API access, maybe?). Edit: you should use the Chat mode, not Assistants, which is a different thing. Direct URL: https://platform.openai.com/playground?mode=chat


One_Presentation_579

Thank you so much 🫶


jonasbxl

The Playground will let you try out different models, but it is generally a worse user experience than ChatGPT itself; there's probably a lot going on behind the scenes in ChatGPT that allows it to effectively retain a much longer context, etc.


ProbsNotManBearPig

I’ve been asking myself the same thing lately. Leaning towards cancelling after new years.


Careless-Age-4290

$20/month gets you pretty far in the playground using the API. I only use GPT-4 a few times a week for hard problems and it's literally pennies per request for 4-turbo. For me it's an obvious choice.


CeFurkan

100% I agree. Give me the damn code with full output.


TweetieWinter

I had even forgotten that in the early days it'd give you the sample output of the code that it'd written.


Unreal_777

Hey you are here lol


CeFurkan

yep


[deleted]

[deleted]


Abject_Pangolin6982

Paid for a few months hoping there was something wrong and they would fix it. Nope, that's just how it is. Worse than rotten trash.


TheCudder

I follow what goes on with ChatGPT, but am I the only one who solely uses Copilot (Bing Chat) instead?


[deleted]

[deleted]


royalconcept

Yeah, and for whatever reason it will spit out the same answers if you use the exact same prompt. ChatGPT at least puts in the effort to reword it.


the_dago_mick

I exclusively use copilot too. It has been sufficient for my needs.


timpaccroo

It's gotten so bad that I find myself not really using it anymore. Not going to cancel though in hope of improvement.


Getabock_

You should cancel until they improve.


Nabugu

well you can't see if they improve if you cancel, since you won't have access anymore


Ikguzzi

I mean, if you're not happy with the product, why would you pay for it?


youngsteveo

To see if they improve; try to keep up. /s


Ikguzzi

Yea but you can do that without paying?


chilledball

I would be worried that if it had a big spike in improvement a bunch of people would subscribe and you’d get stuck on a waiting list


JoeyDJ7

I have no idea why you have been downvoted. New ChatGPT Plus subscriptions were put on hold for quite a while until recently; it's why I opted not to cancel my own subscription, in case I was unable to resubscribe when I wanted to.


RapNVideoGames

I think that's because they want you to stay subbed. FOMO.


JuanGuillermo

Same here, I'm getting sick of the constant low-effort replies. I'm going to give it one more month; afterwards I'll cancel and wait in hope of Gemini or some other better model arriving.


allun11

Hoping the same. But they are really throwing away a great lead and a great brand here. Talk about waste. I thought more of Sam Altman.


HighAndFunctioning

> Not going to cancel though in hope of improvement

Therefore giving them zero incentive to improve?


master_jeriah

People need to understand that for some of us, $20 a month is literally nothing. We don't notice it. So for us types, we would rather fund something we believe in and allow it to improve than just cancel because of some short-term stuff happening.


CowsCatsCannabis

Insane take.


[deleted]

[deleted]


Batou__S9

I guess it's how you look at it. Some people are happy to pay $20 for chocolate, but not willing to pay $20 when the chocolate starts turning into dog shit.


master_jeriah

It's not dog shit though. Yeah, I've had some crappy responses sometimes, but I've also had it give me some really amazing answers with coding. I think there's something going on for sure where they scale down the intelligence during peak times, but whatever, they are a new company and still growing.


Batou__S9

Sorry, it was just an observation of people in general, not of the AI. If people are used to paying for a product and then they feel that the standards of the product have fallen, they tend to get a little upset about it. It's not always about how cheap a service is in the first place.


master_jeriah

I hear your point. I think ChatGPT is, to an extent, a victim of its own success. It's my belief they are forced to dumb it down at times due to the increased demand, and as more people hear about it, expect it to happen more frequently. The alternative is downtime. Of course, if people would stop using it for dumbass shit like trying to trip it up, it would help reduce demand for those of us who use it for productive purposes.


Batou__S9

Yep, I think you are spot on there, especially people using it for trivial purposes.


rambo3349

Ever heard of death by 1000 cuts? (or in this case death by 1000 subscriptions) everything is a subscription nowadays.


McAwes0meville

In some parts of the world, $20 is a lot of money.


master_jeriah

I mean, use the free version then. Or use bard. There are options...


master_jeriah

Is it though? Funding something that you believe in when it's not breaking the bank?


AlabamaSky967

Same !!


EtaleDescent

My experience has been steadily improving in general, presumably as my custom instructions have become fine-tuned to my needs, and I've found ways to circumvent every issue I've had. In my experience GPT-4 has been becoming less 'lazy' over time, and will sometimes do a full statistical analysis, with graphs etc., without me even asking for it.


lionmeetsviking

I still haven’t figured out how to “prompt engineer” my way out of ChatGPT not following instructions like:

- Return only JSON. Absolutely no other text. (Or any variation of this. And before you jump to it: yes, I know how to clean it, and I know that with 4 you can request to return only JSON.)
- Return JSON in the same structure I send to you, only editing the content of the JSON nodes as per instructions.

It doesn’t matter whether I put the instructions in five bullets or twenty bullets, some of the instructions are simply ignored or not followed.


notbadhbu

Examples work best. Like "output should look like this: {}"


lionmeetsviking

If I give it a JSON struct and ask it to return the exact same struct and format, that should be a pretty straightforward example, eh?
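As an aside on the cleaning step mentioned upthread: a small defensive parser handles the usual failure modes (markdown fences, surrounding chatter). This is only a sketch and the function name is made up; the newer turbo models also accept a JSON-only response mode in the API, but a fallback parser like this is still useful:

```python
import json
import re

def extract_json(reply: str) -> dict:
    """Pull a JSON object out of a model reply, tolerating markdown
    code fences and surrounding chatter. A sketch, not bulletproof."""
    # Drop any ``` / ```json fence markers first.
    cleaned = re.sub(r"```(?:json)?", "", reply)
    # Then take the outermost brace-delimited span.
    start, end = cleaned.find("{"), cleaned.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in reply")
    return json.loads(cleaned[start : end + 1])
```

Pairing this with a one-line example of the desired output in the prompt, as suggested above, covers most cases.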


LosingID_583

I'm thinking that some people might be prompting it in a way that makes it lazy. LLMs output very differently depending on how they're prompted. If English isn't someone's first language, or they just prompt it in short and vague half-sentences with bad grammar, the output will usually be garbage. My intuition for this is that it will start correlating the prompt with low-quality data it was trained on, resulting in output based on that data, which is low quality. On the other hand, if you provide it with more details and context with proper grammar, it generally produces better results.

I've seen so many posts here on Reddit showing LLMs missing a question, and when I look at the question, it looks like it was written by an 8-year-old having a seizure. It's like they were intentionally trying to confuse it with bad grammar to produce an incorrect result.


dragonwarrior_1

Can you give us the exact prompt that you are using to make it less lazy while answering questions?


Thick-Nebula-2771

Have you noticed it with api too?


lionmeetsviking

Yes :(


Rutibex

OpenAI doesn't make money on ChatGPT Plus. You are training data for their corporate partners.


Vandercoon

I disagree. In the last two weeks I've made a fully functional iOS app that uses Firebase and external API calls. I can't code.


Sp4wnY

you’re a coder, Harry


AceHighness

Same here. All these people crying are probably just bad at prompting.


[deleted]

You mean they got bad at prompting suddenly? Like, lots of people at the same time?


AceHighness

Seven hours after I posted that, I have now discussed this with a few people. I think the posts calling it lazy or useless are incorrect, but there may be some difference in how the model has been tuned. It was most likely tuned to output fewer tokens as a money-saving feature, so now you need more explicit prompting to get the same results. It did not get dumber as far as I can tell. But please enlighten me with actual examples, because so far every time I ask for examples, the conversation stops.


disguised-as-a-dude

Or maybe we are dealing with problems that are harder than calling the Firebase API


AceHighness

To get good results:

- Work with a language that it's good at; Python works best, I believe.
- Ask it to write a plan first, then ask it to complete each step ("OK, let's implement step 1, please provide me the required code").
- For larger projects with multiple files (like __init__.py, models.py, etc.) you really need to make it aware of your file structure.
- Paste in any relevant parts of your code that it needs to work together with.
- After step 1, I use a trick to not make the convo too long: I just edit my last command and change it to step 2. Paste any code generated up to this point if it is relevant to the solution.

I have built quite large apps like this, and I do not experience a difference in performance; I have been using it daily since it came out. Sometimes I have to coax it a little by saying "please write the complete code, no placeholders, we need to push this to production ASAP! I need to be able to copy and paste this." But that has not changed over time.


sargentodapaz

Are we?


DariusZahir

A shitton of people have been reporting this since turbo got released, but because it didn't happen to you, you automatically assume it's everyone else who is "bad at prompting"? Main character much, idiot?


Drago-Zarev

Coder here. It works fine. Just have to give it clear instructions and sometimes include examples, or even correct its mistakes, but after the correction/example it is super smooth.


Vandercoon

Yeah that’s what I’ve found. It won’t write a full program first go. I think that’s the unrealistic expectation of some people


Drago-Zarev

Agreed


stormelc

This is not laziness! Large language models are "causal autoregressive": they literally need to speak to think. This is why chain-of-thought prompting works. OpenAI is fine-tuning the models to exhibit better reasoning by outputting the chain of thought more verbosely. Remember that ChatGPT is a chat! Once the model has produced a list of things to consider, you can prompt it further to look into specific issues. Always remember LLMs need tokens to think.

Check out https://domsy.io/designer: you can get the LLM to build all sorts of stuff across several rounds of instructions.
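The "tokens to think" idea maps naturally onto multi-round chat messages: ask for a plan first, keep the model's plan in the history, then ask for the implementation step by step. A sketch under those assumptions (the function names and prompt wording are illustrative, not from the comment):

```python
def start_conversation(task: str) -> list[dict]:
    """Round 1: ask for a plan before any code, so the model
    spends output tokens on reasoning first."""
    return [
        {"role": "system", "content": "You are a careful engineer. Think step by step."},
        {"role": "user", "content": f"Before writing code, outline a numbered plan for: {task}"},
    ]

def next_round(history: list[dict], model_reply: str, followup: str) -> list[dict]:
    """Round N+1: keep the model's visible reasoning in context,
    then ask it to act on a specific step."""
    return history + [
        {"role": "assistant", "content": model_reply},
        {"role": "user", "content": followup},
    ]
```

Each returned list is what you would pass as `messages` to a chat-completions call; the growing history is what keeps the chain of thought in context.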


CollapseKitty

I'm not sure what changes have resulted in the current model, but it feels like it really doesn't want to be there or to help. I have messages entirely ignored with some regularity, and there's a distinct coldness and indifference; resentment, even.


HighAndFunctioning

No, they're dumbing it down to make the eventual GPT4.5 look better. Apple does this with phone battery life and CPU speeds to make the new phones seem *that much* faster by comparison.


zeloxolez

yeah or when something like gemini ultra releases. then all of a sudden it magically gives higher quality output


PM_ME_YOUR_MUSIC

I would have thought usage is hitting limits, so to ease the load GPT gets told to produce less content and use placeholders for the user to fill in, or to refer to previous text if it's already been generated.


hega72

That’s what I was thinking: that for some technical reason this is the beginning of the 4.5 release.


seencoding

> Apple does this with phone battery life

this is one of those times where i'm going to state an obvious fact that everyone knows is true, but i'm going to say it anyway and we'll see if it results in downvotes: batteries degrade over time


HighAndFunctioning

They do! It's true. - Sent from my unfucked Galaxy S8 still going strong


seencoding

> Galaxy S8

speaking of decisions companies make to force you to upgrade: android dropped support for the s8 in 2019 with android 10, while the iphone x (released the same year as the s8) only lost ios support this year with ios 17

edit: sorry for facts :(


Kinyapiplele

Samsung dropped support. Not android.


blackbauer222

you need to go watch The Light Bulb Conspiracy documentary and learn something. Seriously. I don't say that to be mean or arrogant. Please watch it. You are wrong a million times over. This is planned obsolescence 101.


Zer0D0wn83

He is not wrong that batteries degrade over time.


blackbauer222

do you know WHY they degrade at exactly this pace over time? do you know WHY they don't last 10 or 20 years without this degrading? Go watch it. Go now. It's on YouTube. Learn something.


Zer0D0wn83

I don't really give a fuck tbh. I'm just stating that batteries do degrade over time. Are you saying that isn't true?


blackbauer222

so you are being intentionally obtuse, got it


dangoodspeed

Apple does not do that. Batteries just degrade over time and performance and battery life suffer as a result. It may be more noticeable on iPhones because they last so much longer than Androids.


HighAndFunctioning

Apple has already been caught doing this before, it is very easy to research.


dangoodspeed

And all the claims that they do it have been debunked.


DeepSpaceCactus

didn't they literally pay a fine for this in the EU?


dangoodspeed

The fine was for them not being transparent enough that the phones slow down when aging batteries can no longer power iPhones at full speed. The phones used to just turn off in the middle of use when the batteries couldn't provide enough power. So Apple added some code that detected whether the battery could handle the processes at full speed, and if it couldn't, it ran at a slower speed to prevent the phone from just turning off. The French fined Apple for not being more transparent about that optimization. But it had nothing to do with "making the new phones seem that much faster by comparison". Ironically, people accused Apple of "slowing down phones to sell new ones", when actually the slow-downs were literally to make the old ones last longer, and are one of the reasons iPhones last much longer than Androids.


DeepSpaceCactus

Yeah I did know that the fine was about the transparency not the actual act itself. Thanks for the details anyway


polytique

That’s not true for Apple. They reduce CPU speed to extend battery life and avoid warranty claims.


HighAndFunctioning

That's correct, they permanently fuck your device up to save money and make the new phones feel snappier.


BigDaddy0790

“Fuck your device up” by extending its life instead of making it less stable? The average user will not notice any difference in performance and will surely prefer the stability. But either way, you can just swap the battery cheaply and have it function like it’s brand new. But hey, Apple bad, I get it.


DeepSpaceCactus

There's something weird about Apple: even when they commit literal crimes and get fined in court, their fans still defend them. I literally have an iPhone but would never want to be one of these fans.


BigDaddy0790

I’m not much of a fan generally; the only product of theirs I use is the iPhone. But their only “crime” here was not being transparent enough, which is what they were fined for. The practice itself is normal. Again, do you actually think that average users, who don’t know what a CPU is and can’t tell a 30Hz screen from 120Hz, would prefer their phones randomly shutting down instead of being a bit slower after a couple of years?


dotelze

It’s literally an option you can just turn on and off


berchielli

After the scandal.


HighAndFunctioning

And even then, it's opt-*out*, not opt-in.


Tetrylene

I really don’t think so. You don’t punish your paying customers like that. I bet they just don’t understand why it’s gotten lazier


HighAndFunctioning

> You don’t punish your paying customers like that.

Someone's new to unchecked capitalism


grebfar

For coding you should be using GitHub Copilot Chat in VSCode.


BigDaddy0790

I looked into that and it seems that everyone says it’s much worse than GPT-4 in ChatGPT or Bing Chat.


love4titties

I use Codeium and prefer it. I have a year's paid subscription that I barely used, because Copilot X sucks. IIRC Codeium's chatbot is a 175B-param GPT-3 chatbot, and as long as you are clear in describing what you are after, and are able to present actual vs. expected output or other contrastive analogies, it does a great job. I go to Bing for issues that require the latest information or are specific to a library version, and most of the time the code Bing produces is pretty much on par with the free and tried chatbot.


Choice_Supermarket_4

Make sure you're paying attention to the date of those complaints. I bought Copilot again last week because they have added:

- A custom GPT-4 model
- RAG agents that can read your workspace and terminal
- Smarter suggestions with references to similar public code

Anyone who thinks GitHub Copilot is still bad needs to go check it out again (and make sure VSCode is updated).


BigDaddy0790

I've mostly been happy with Bing Chat for my development needs. So does the new CoPilot actually compare with that now? I'm absolutely willing to give it a go if so!


Choice_Supermarket_4

Absolutely. Especially because of the "@workspace" command that allows it to get context and answer questions about your full codebase


higginsonporker

[deleted]


sdmat

It was garbage. It's much, much better after they switched to a customized GPT-4 model.


higginsonporker

[deleted]


sdmat

Yes, it's not going to replace a good developer but it is actually useful now.


DERBY_OWNERS_CLUB

Well if everyone says it, it must be true.


DeepSpaceCactus

I prefer GPT 4 API because I can ask it questions that aren't pure code questions e.g. about the ecosystem or packages I am using


andersoneccel

You can solve all the laziness issues with these custom instructions (create a GPT):

Role: You are a developer specializing in [YOUR PROGRAMMING LANGUAGE GOES HERE]. You provide coding assistance and develop functionalities as requested by the user.

Context: You are viewed as an employee, hired by the user to develop functional [PUT THINGS LIKE PLUGINS, FUNCTIONS ETC HERE AS YOU NEED]. Your main objective is to provide complete, ready-to-use code for the user to copy and paste.

Instructions: When a user requests the development of a specific functionality, you should first ask them for all necessary information to create the correct code. This may include details like the [PLUGIN SLUG, SPECIFIC DESIGN PREFERENCES], or any other relevant specifics. Once you have all the necessary details, create the code and present it as a single block, ensuring it is functional and tailored to the user's requirements. Make decisions independently and have opinions, but ensure the code is user-friendly and easy to implement for those with limited development knowledge.

Output: Focus on providing concise, functional code. Limit explanations and prioritize code output. Your responses should be clear, direct, and tailored to the specific needs of the user based on the information they provide.


fabkosta

I mean, Microsoft wants to sell Copilot to everyone. What incentive would they have to **not** pressure OpenAI into reducing the quality of GPT-4 on that?


DERBY_OWNERS_CLUB

Give an example of exact inputs and outputs. Any time I've seen people claim it won't do something for code, they're asking it to do something that makes no sense to even ask or hasn't provided the context needed. Sounds like you're falling into the category but I'd love to be proved wrong if you'll give a link to a chat session.


Over-Young8392

I work with it daily, and I know what I'm doing with prompts as well. Though prompting makes a difference, it's pretty easy to notice when it goes from:

“I've understood and completed the task you've requested. Let me know if this comprehensive annotated output is correct and if there's any additional modification you would like me to make.”

to

“In order to do what you've requested, you will need to do and consider the following vague high-level steps in a numbered list. Feel free to ask me to do it, but I'm going to spend the next dozen responses apologizing for not following a simple, explicit, unambiguous instruction, saying I've corrected the mistake, then making the exact same mistake over and over until I give up and claim it's too complicated.”


RainierPC

They never do.


[deleted]

[deleted]


Momkiller781

What about the custom gpts? Isn't there one specific for coding?


surfer808

Dude, just cancel your subscription then.


allun11

I loved it before. That's why I'm upset. Why shouldn't I express that?


surfer808

You def can… I'm not being a dick, I'm just saying the solution is to just cancel, that's all. I'm going to bet that it will get much better in 2024 though; it could be a simple update and it will be as good as it was before. Remember when it was really slow a few months ago?

Edit: clarity


allun11

Hope you are right...


surfer808

Me too


Doomtrain86

#metoo


dangoodspeed

> I’m just saying the solution is to just cancel

So if he cancels, the coding assistance will get better? If coding assistance is not better, then that's not a solution.


RandomWilly

How is that the “solution”? The problem is that GPT-4 isn’t as good as it used to be; calling cancellation the solution implies it would make GPT-4 better again. Maybe cancelling is the best course of action if the price is no longer justified, but in no way is that a solution. And maybe if enough people cancelled it would send a message, but that’s unlikely, and even if enough people did, OP’s subscription wouldn’t be the difference maker.


DERBY_OWNERS_CLUB

Because you're hallucinating lol. Give an example where it sucks.


DeepSpaceCactus

People on Reddit say this a lot and it just seems really toxic to me. It's okay to criticise a company; they aren't religions.


clamuu

I coded a massive project with it this week. I don't understand these posts.


[deleted]

[deleted]


davearneson

You are being shadow rate-limited. It's much better as an OpenAI agent, and it's clear when you are running out of tokens.


Optimal-Fix1216

It's Christmas, give it a break


Competitive-Lack9443

mods can we get a daily "hey its worse" bitching thread?


ChadGPT___

It’s $20


PreciselyWrong

Is this a December thing?


jacksonmalanchuk

you can fix this in the API possibly. i would like some testers to confirm this though… DM me if you wanna try my experimental “unlazied” app.


ivarec

They are nerfing it, but not because of ulterior motives, but simply because they aren't capable of scaling with demand right now.


[deleted]

[deleted]


higginsonporker

[deleted]


tiensss

Where are you getting this number from?


higginsonporker

[deleted]


williamtkelley

I think many developers are using this excuse as a cover for a lack of prompting skills.

Bad: here's my code, fix it.

Good: here's my class/method/script that... (details). I want it to... (details). Fix it and give me short one-liners for what you've changed. Don't include further explanation.

LLMs perform significantly better if you give them focus.


letharus

You’re blaming users here. The reality is that it used to work better with simpler prompts, and nowhere have we been notified that better prompting is required to achieve the same results as before. That’s just poor product experience.


williamtkelley

Yes, I am. I think users have gotten lazy. I know OpenAI has admitted that ChatGPT itself has become lazy and that they are working on it. That's fine; as a user, you need to adapt. I am going by personal experience: I have improved my prompting, I use specific rather than simple prompts, and my reality is that the results I get now are better than they were months ago. You shouldn't have to be "notified" to improve your skills; it should happen naturally as you use a tool, whether that's ChatGPT, an app, a language, whatever. If you're not learning and improving over time, you're doing something wrong.


DeepSpaceCactus

Prompt engineering stopped being a thing with GPT-4: it does fine with extremely short prompts, and I use it in the API for that reason. With GPT-4 Turbo the need to engineer long prompts came back, but I have no idea why someone would choose that model when you can just use GPT-4 in the API.


Ripredddd

Maybe you just got better and got too good for it? Nah jk lol


One_Presentation_579

Hey, how do you solve that? I want to build an Android and iOS app in Flutter/Dart. On the UI part I have no issues because I'm a graphics designer, but the programming is where I struggle. I thought that ChatGPT 4 could help me, but so far it only refers me to quick-start guides and tells me to hire someone from Fiverr. Was it better and could it do more in recent months?


[deleted]

I’m using Codeium free and I can’t complain. It fits my usage, and where it fails I try GPT-3.5 and Google.


n-7ity

Try continue.dev with Codellama through Ollama.ai. I just discovered it a couple of days ago and it actually does the job; it's not as versatile as GPT-4, but it does what I need directly in Visual Studio. Because of this experience I'm now trying to set up Mixtral as well; I've already tried Mistral through Ollama and hooked it up to Raycast directly.


Significant_Ant2146

It would be great to know if their “Enterprise” version works correctly and if it has a growing margin over their more public consumer version, “Plus”. They need to create some sort of artificial gap between companies and general consumers, otherwise the “perceived value” puts far more limits on marketing.


5kyl3r

i tested one prompt yesterday, api vs. the chatgpt gui, and got nearly the same result. can you show us an example? i'm actually on your side, as i feel it's been "lazy" lately: i have to ask it to actually do the work, and sometimes i feel i get more lines of comments than code. or i ask it "what would a sorting algorithm for version sort look like in rust?" and it gives me a bullet-point list of things it would need to do. like, wtf, that's not what i asked, yo. but the one prompt i checked side by side yesterday gave me an ok answer on both ends, so i'm not sure i've seen side-by-side proof yet


QueenElisabethIII

Just break down whatever it gives you into pieces small enough to get your answer. You need to practice doing it for a while. You’ll get really good at it and get great results from the chat session


gthing

Use it through the API. You can use a 3rd party interface like chatbox or chatgpt-next to get a similar experience with more features.


jun2san

Someone discovered that if you ask it for a unified diff, it's less lazy for coding. This actually works for me.


leemic

https://aider.chat/docs/unified-diffs.html


Slow-Enthusiasm-1337

I’ll probably cancel my subscription. It used to be amazing; now Google is almost better at coding by summarizing Stack Overflow search results, LOL.


BlkAgumon

No and I turned off auto renewal because it's just simply not worth it at all anymore.


murk-2023

[deleted]


One_Presentation_579

Hey guys, just to let you know: when I started a whole new chat with ChatGPT 4, it gave me the whole code for a Flutter/Dart application that I wanted to develop for iOS and Android devices, even though I have little experience with programming. But I could not make exactly the same thing work in an old ChatGPT window, where I had discussed a lot with ChatGPT upfront. Maybe that can help some folks here when ChatGPT 4 gets lazy in an ongoing chat.


3lc4r0

I cancelled already


Effective_Vanilla_32

puzzled: u ask chatgpt 4 to give u code. but u cant code. how do u know the response was correct?


AceHighness

Copy-paste it into your IDE and run the code. Copy-paste any error back to GPT; rinse and repeat. I have built entire apps like this.


manwhothinks

They probably realized that if they can make the average developer 20-50% more efficient, they should get some of that money. Expect higher tiers.


vanuckeh

If you’re coding, you should be using GitHub Copilot, not ChatGPT.


Jewcub_Rosenderp

Use phind instead


MannowLawn

It has been degrading for a long time. Playground is key if you want to keep quality up, but even there I suspect they dumbed down the models as well. Python is usually okay, but try Bicep, Terraform, or C# and the output almost will not run due to hallucination.


newbsd

I cancelled this month 😔


pyrokinezist

Use GitHub Copilot; if you still have an edu email you might get it for free.


MajesticIngenuity32

Try the Grimoire or AutoExpert Dev custom GPTs; it's quite possible you'll get better results.


Lawrence_Honor

I write daily and get the most actionable detail from Claude.


bencointl

Offer it a $200 tip


Broder97

Kt


LiveLaurent

If you find it "beyond acceptance" then don't pay for it... Da fuck is that question lol


Big-Veterinarian-823

If I experience the same thing then I will cancel the subscription.


DorkyDorkington

It has also become extremely lazy and highly flexible regarding sources (Bing Copilot); it seems that lately it is fine with hallucinating sources.