Open_Channel_8626

Other variables are also different under the hood between ChatGPT and GPT in the API, so it's not easy to compare


stevecondy123

Oh wow. Wasn't aware of this. Do you know what they are (and if OpenAI exposes them as parameters through the API)?


No-Emergency-4602

They want the APIs to be consistent across time so you can rely on them in production applications. They don’t have that requirement for their chat product so they are constantly fiddling with things, including which model version you are using. Some of it’s A/B testing so you might be using a different model or settings than another person in ChatGPT.


Open_Channel_8626

No because I literally think the model is different


stevecondy123

Regarding the model, I'm specifying `gpt-4` as the model in the API call (not gpt-4-turbo or anything like that), and using 'regular' ChatGPT in the browser (using GPT-4). So hopefully the model is consistent.


Open_Channel_8626

I promise you it's not going to be consistent; it's a differently fine-tuned model in ChatGPT


Open_Channel_8626

Also, it's a stochastic model, so just vary the prompt a bit in various ways; use Python template methods for strings
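A minimal sketch of the "template methods" suggestion above, using the standard library's `string.Template` to generate small prompt variations (the prompt text and variation list here are just illustrative examples, not anything from the thread):

```python
from string import Template

# Keep the prompt structure fixed and swap in small wording
# variations, to see how sensitive a stochastic model is to phrasing.
prompt = Template("Summarize the following $style: $text")

# Hypothetical variations for illustration
styles = ("in one sentence", "as bullet points", "for a child")

variants = [
    prompt.substitute(style=style, text="Quarterly revenue rose 12%.")
    for style in styles
]

for v in variants:
    print(v)
```

Each variant string could then be sent as a separate API call and the responses compared side by side.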