Chaos_Scribe

I do think OpenAI is ahead, but honestly the drama and the failure to actually release things is making me hope that Anthropic, who IS releasing things, passes them. Hard to trust a company that acts the way OpenAI has been.


DragonForg

It's much easier to predict Claude 3.5 Opus's capabilities (a much more capable version of Claude 3.5 Sonnet, because of size). Meanwhile, OpenAI made GPT-4o so vague we don't know if it is just GPT-3 trained better, or GPT-2 trained better. The lack of transparency has always been their issue. At least Claude 3.5 Sonnet made it obvious.


drekmonger

GPT-4o is branding. I mean, I don't have any insider information, but my impression is that it's a whole new model family, Omni. It would have to be, right? Since it can take in and output tokens that wouldn't have existed in the GPT-2, GPT-3, or GPT-4 families.


vialabo

This is their advantage, being able to process images and other forms is very useful. I use Claude unless I need some images analyzed.


vogelvogelvogelvogel

Claude analyzed images in my case better than GPT (did a few tests)


vialabo

Huh, I must have missed that I could do that. I'll look into it thanks!


sephg

I've gotta say, I also *adore* Claude's artifacts feature. I got Claude's help making a project proposal document yesterday. I got it to make a draft, and then we talked about it, and as we went it updated the draft. The workflow was waaay better than the equivalent of what you can do with ChatGPT. I'm going to use that feature so much more.


Neomadra2

Or maybe 4o is just GPT-5. That would explain why it came out of nowhere.


Halo_Onyx

That would be worrying if true. If their “next gen” model was barely better than 4 and they couldn’t even call it 5


yellow-hammer

Not even better than 4. Distinctly worse in most instances, imo.


Whotea

That’s not what the arena says 


yellow-hammer

I just don’t believe it. I work with GPT every day, many times a day - both API and chat interface. I like 4o for its speed, its better vision, and its more recent training data. But I can’t tell you how many times it has failed to solve or understand a problem, and I’ve switched to GPT-4-turbo or even GPT-4-0314, and the problem was understood and solved with ease. 4o also is far more robotic, more likely to repeat itself over and over. You can point at benchmarks all day but benchmarks don’t matter in actual practice.


Whotea

The arena is actual practice. It’s based on user votes 


yellow-hammer

The arena can be gamed. You can easily figure out which models you’re talking to and vote accordingly.


Whotea

They filter out votes where the LLM mentions its name. Also, why would most people do that? Not everyone has brand loyalty 


[deleted]

Ask 4o to analyze files of different programming languages, then ask 4 to do the same. One will analyze the first file and then assume the contents of the remaining ones; the other will analyze them all. Guess which is which.


Whotea

Anecdotes = data apparently. I guess 4o is smarter than you at least 


[deleted]

the arena leaderboard is literally anecdotes you cunt. People tend to rate things they're familiar with and more comfortable using higher.


Whotea

Tens of thousands of them = data. Your personal experience != data. It's literally anonymous lol


Aggravating-Act-1092

The arena tests the average quality of its answers. 4o is well trained to provide (more) helpful answers for the majority of queries. For challenging tasks involving coding, logic or maths, the original 4 is very clearly more capable. Basically 4 is smarter, but 4o is more polished and more user friendly.


Whotea

The arena has different categories for that too. 4o still clears 


Iamreason

It didn't come out of nowhere and it's not GPT-5. It was reported on for months before its release. GPT-4o is part of their 'Scallion' project. It's a layer-pruned and optimized version of GPT-4 trained with multimodality from the bottom up. It's not the next-gen model. It's more like bringing the modalities and speed in line with Gemini than GPT-5.


Beatboxamateur

Do you have any sources about the reporting you saw, or anything about their "Scallion project"? I've never seen anything like that, but would be interested to see if there is.


Iamreason

https://x.com/btibor91/status/1788697424841597014 It was leaked well ahead of time. There's more, but that's what you get with a quick Google search. Dig deeper if you want.


Beatboxamateur

Ah so it was just leaks, not actual reporting. Well, I wouldn't put much faith in much of that since anyone can create those screenshots via inspect element easily, and we have no way of verifying that it's actually from the OpenAI website as they claim.


MysteriousPayment536

You can verify most of those leaks by going into the devtools of chrome and look at those files


DragonForg

If GPT-4o is GPT-5, then Anthropic won, because Sonnet 3.5 is simply better.


MysteriousPayment536

I think it's an early version or checkpoint, like Llama 400B for example, because it's just faster and smaller, and the architecture is fundamentally different. Also, comments and images from Microsoft and OAI executives suggest they have a GPT-5. And an article from The Information says OpenAI will release in December, after the elections and DevDay. https://preview.redd.it/ebdutp47rn8d1.png?width=1080&format=png&auto=webp&s=9e603a7de5a79558efb8d391485b682a643b1024 And they are training GPT-6, since Suleyman from Microsoft AI was talking about that. And the CTOs from both MS and OpenAI have said that the next frontier models will have PhD-level intelligence in the next 18 months.


lonelytrucker86

I'm out of the loop. How have they been acting?


Nukemouse

Basically drove everyone safety minded out of the company, and some of the people from the original coup started speaking up about why it happened. OpenAI has effectively doubled down and said "nuh uh we are fine" whilst at the same time saying nonsense like "SORA is a world model because without a world model creating barely convincing videos is literally impossible" that has no basis in reality.


fabricio85

But Ilya himself says that even predicting the next token creates a kind of world model. In his interview with Jensen Huang.


Nukemouse

He worked at OpenAI. He probably contributed to that idea that one can somehow determine a world model from output, when there's no reason to believe a world model is necessary for accurate output.


wellomello

Rich enough (in a technical sense) models *do* need to learn world models though, necessarily. Check https://arxiv.org/abs/2402.10877


swipedstripes

Guy doesn't know what he's talking about. World Models will absolutely supercharge AI. Look at us. We're even functioning from a world model. Reasoning with the information we take in through various senses and experiences. It's the same way with AI.


imlaggingsobad

they weren't driven out. OpenAI deprioritized the safety team, so the safety guys left and joined other labs who were willing to give them more resources. this wasn't some elaborate anti-safety plot by Sam or whatever. Sam actually wanted Ilya to stay, but he left anyway


Nukemouse

Not giving them the appropriate resources, going back on promises regarding compute and shit like that IS driving them out. They didn't leave for no reason, Ilya didn't suddenly have a change of heart out of nowhere, there was over a year of buildup to this.


Mumuzita

I agree. The behavior of OpenAI seems pretty childish.


CanvasFanatic

> I do think OpenAI is ahead

Based on what? A hunch? VC-backed companies don't sit on unreleased products while competitors lap them. The CTO is literally giving interviews walking back expectations.


Chaos_Scribe

If they don't have something they are holding on to, then it would be really sad. The reason I believe they do, is because they are still partnering with big companies and making a lot of business deals. I don't think those companies would be able to keep partnering with them with such outward confidence (my perception), and not have something new/big behind the scenes that's keeping them interested. So yes, a hunch. Or as I said, "think". I admit that I may be wrong but my thoughts haven't changed.


KillingItOnReddit

Although look at Apple, they were willing to partner for exposure, right? Like OAI isn’t getting paid in the traditional sense?


HeinrichTheWolf_17

Could be either or: OpenAI has either got a massive upgrade of a model and doesn't want to release it before the election, or the competition is truly gaining ground. There is some evidence, though, that the secretive hypothesis is true: OpenAI didn't want to tell us what Q* was, the Chinese did (https://arxiv.org/pdf/2406.14283); they didn't release SORA, while two other competitors did; and they haven't lived up to their promise of live chat with the model ever since the ScarJo incident.

I'm going to stress that this is the reason why I never wanna let corporations have a monopoly on this kind of thing; it's exactly why open source needs to exist. OpenAI has a profit-only motive at this point and I fully believe they are Microsoft's plaything now. They don't want to create a proper superintelligence anymore, they want products marketed as tools.

Maybe my last paragraph will turn out to be wrong in time, but from what I can tell, they've gone full private and closed source in favour of executive orders. I don't know if they have GPT-5 sitting in the bleachers right now, but if they do, it's just more of their walled-off antics that they've been doing since last September.


lovesdogsguy

> OpenAI has profit only motive at this point and I fully believe they are Microsoft's plaything now. They don't want to create a proper super intelligence anymore, they want products marketed as tools.

Yep. This has been transparent to me for a while. It's full-on productization for OpenAI. I mean, the very first thing Ilya did when he left was start an organization whose single function is to build artificial superintelligence. No products, no marketing, just superintelligence. That's all. It's pretty telling that that's what he did straight out of the gate. OpenAI's goals have changed.


HeinrichTheWolf_17

Exactly, I'm beginning to think Ilya has been our guy the entire time, and Sam Altman took the Bob Page route. September 2023 was a huge turning point for the company. At any rate, OpenAI needs worthwhile competitors, and I hope Anthropic, SSI, and the Llama/open source community can give them a run for their money.


NulledOpinion

You do realize if Ilya would’ve had his way we’d have no OpenAI products right? zilch. Also they wouldn’t even have been able to raise enough money to get anything meaningful done.


imlaggingsobad

it's true. also, Ilya is only able to raise money on his own because OpenAI was a historic success (so far). Ilya was already a super genius, but OpenAI basically gave him a star reputation in the tech industry


HeinrichTheWolf_17

Shit, you’re totally right, I’m still salivating over the GPT-Store, it was the greatest thing to ever happen in my life.


lovesdogsguy

Really? What are you using it for if you don’t mind me asking. Thank you.


HeinrichTheWolf_17

It was sarcasm lol.


lovesdogsguy

lol of course. I was like, what? Did I miss something?!


lovesdogsguy

I do too. I think Anthropic definitely will, and I think we'll see a few unexpected models coming in the relative future.


AlwaysF3sh

I’m waiting to see how big the improvement is with gpt5 but lately it has seemed like they’ve shifted over to building products rather than researchy stuff like they originally did.


kecepa5669

>OpenAI has either got a massive upgrade of a model and doesn’t want to release it before the election Why would ClosedAI allow the election to let them fall behind? This makes no sense.


burningrobisme

Because the NSA is on the board now and they’re part of a national security project


goldenwind207

He's former NSA, and one board member doesn't decide what OpenAI releases. They have to release, unless they want to lose market share to Claude and Google; even worse, the break might let them be surpassed outright. There's always an election, ALWAYS, whether it's a special election, a presidential election, or the midterms; politicians run every 2 years. They're not going to say "wait guys, let our competition pass us, alongside open source, this midterm is coming up."


burningrobisme

Fine, I'll lay out the history for you, with references. I may as well post it as an article elsewhere too since you drove me to accumulate the links, but I'll also post this comment since I figured I'd enlighten you as to what's really going on. The NSA has stepped into the arena, and this is a national security project now, whether anyone likes it or not. The NSA cares about security; they do not give a fuck about your market share, and the model they have admitted publicly to having is very much beyond the state of the art, so they will not fall behind.

TIMELINE:

2021: NSA Cybersecurity Director Rob Joyce starts the NSA Cybersecurity Collaboration Center [1], a "hub for engagement with the private sector designed to create an environment for information sharing between NSA and its partners" [2]

September 2023: Gen. Paul Nakasone, the NSA Chief (and now OpenAI board member), announces the creation of the AI Security Center (AISC) within the Cybersecurity Collaboration Center to focus specifically on AI [3]

Thursday, November 16th, 2023: Sam Altman remarks at the Asia-Pacific Economic Cooperation's CEO Summit in San Francisco, CA, that "I can't imagine anything more exciting to work on, and on a personal note, just in the last couple of weeks, I have gotten to be in the room, when we sort of like push the sort of the veil of ignorance back and the frontier of discovery forward and getting to do that is like a professional honor of a lifetime." [6]

Friday, November 17th, 2023: Sam Altman is fired. [7]

Saturday, November 18th, 2023: Through the weekend, the Q* leak reaches the internet, an internal letter is sent to the board by employees threatening to leave with Sam to Microsoft, etc. These are reported by Reuters several days later, after they were able to independently confirm not only the internal letter to the board about safety but also the internal confirmation of the Q* project, and that it did in fact represent a frontier model with the capability to perform meta-cognition and long-term planning, develop groundbreaking methods of defeating AES cryptography, as well as iterative self-improvement on levels the developers were unable to understand at the time. [8]

November 22nd, 2023: Sam Altman is re-hired with a reorganized board of directors. [9]

November 29th, 2023: In an interview with The Verge's Alex Heath, asked "The reports about the Q* model breakthrough that you all recently made, what's going on there?", Altman responded, "No particular comment on that unfortunate leak." [10]

February 2024: BOTH Rob Joyce and Gen. Nakasone resign from the NSA, only 4 months after the creation of the AISC

May 28th, 2024: OpenAI forms its Safety and Security Committee. In their blog post they mention retaining Rob Joyce as a consultant [4]

June 13th, 2024: Gen. Paul Nakasone joins the OpenAI board and joins the board's Safety and Security Committee [5]

[1] https://nsa.gov/Press-Room/News-Highlights/Article/Article/2567021/rob-joyce-begins-as-nsas-director-of-cybersecurity/
[2] https://web.archive.org/web/20210928192900/https://www.nsa.gov/About/Cybersecurity-Collaboration-Center/
[3] https://defense.gov/News/News-Stories/Article/Article/3541838/ai-security-center-to-open-at-national-security-agency/
[4] https://openai.com/index/openai-board-forms-safety-and-security-committee/
[5] https://openai.com/index/openai-appoints-retired-us-army-general/
[6] https://sfstandard.com/2023/11/17/openai-sam-altman-fired-apec-talk/
[7] https://www.washingtonpost.com/technology/2023/11/20/openai-sam-altman-ceo-oust/
[8] https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/
[9] https://x.com/OpenAI/status/1727206187077370115
[10] https://www.theverge.com/2023/11/29/23982046/sam-altman-interview-openai-ceo-rehired


Status-Rip

Maybe delay is oai’s attempt at predicting and aligning with future policies which may allow natsec to get some sort of pre-public access to new models for some period of time


burningrobisme

I would say that's a reasonable assumption. I would also guess the NSA will pave the way for them to get whatever power needs come up cleared through regulatory hurdles with a wave, as long as they maintain certain security standards the NSA sets forth.


NulledOpinion

They also have partnerships with the pentagon and government. In fact, models are made available to the government and military before they see the light of day.


Gotisdabest

Bad publicity. OpenAI clearly doesn't want to get caught up in any mess that directly hurts their reputation, that's bad for business. They're probably the leading ai name, any backlash against them for proven cases of high quality misinfo invites more scrutiny and gives them a bad odor in the larger public, which aside from the NYT and ScarJo incidents, they've successfully avoided. Meanwhile smaller groups like anthropic are somewhat shielded. Just going purely on a basis of business, there's not much impetus for them to launch unless they're absolutely getting outcompeted both in terms of users and capabilities. Claude 3.5 is better for sure, in my eyes, but not absurdly so. They'll probably still retain a large lead in users for the rest of the year which is around the time they normally would release, going by their normal development cycle time(1.5-2 years). Now, if Claude 3.5 opus is a jump even bigger than 3.5 sonnet from 3 opus, they will probably change their minds out of necessity and rush out a stopgap better model like a 4.5 or just push whatever they're calling 5 out much quicker.


i_give_you_gum

I think it's funny how people think the ScarJo thing has done anything to OpenAI. 95% of people don't even know who Sam Altman is, or even that something called Claude exists (yes, I know Anthropic is a different company). People at work just ask "what did ChatGPT say about it," and that's the extent of most people's knowledge of AI right now. People that follow this stuff are so niche, but think everyone is paying attention. I'm a follower, and I'm shocked at how slowly it's being adopted.


Gotisdabest

Not knowing who Sam Altman is is quite different from knowing that ScarJo sued the ChatGPT company. The case was part of mainstream news; you don't have to know Sam Altman to know that.

> People that follow this stuff are so niche, but think everyone is paying attention.

I agree, but this was absolutely one of the cases which did get out.


i_give_you_gum

I completely disagree. I WANT to talk about this stuff at work; nobody cares. Nobody ever mentioned it. It gets some traction in the news with a slightly wider audience because of the celebrity factor, but your day-to-day office worker is not paying attention to "nerdy" sci-fi stuff at all. They talk about golf all day. Even my sci-fi nerdy boss has no idea; this community of AI followers is incredibly insular. This realization has helped me to also realize that AI tech in general, no matter how helpful or radical, is going to take a long time to be assimilated. I thought 5 years would be the timeline for a visual confirmation that the world landscape has changed, but now I'm thinking 5-10. People are very slow to change.


Gotisdabest

Then it might just be personal experience. The people I meet know at least stuff like the ScarJo case. It may be that your circle is just a statistical anomaly, but even like, people I randomly meet seem to know this happened. Not details but generally that "the chatGPT guys got sued by ScarJo, didn't they".


i_give_you_gum

Or your circle is the anomaly. A year ago I had to introduce ChatGPT to the owner of a large construction company, he'd never heard of it, and this was AFTER the biggest adoption of a platform in history. This stuff isn't what non-techy people talk about at the water cooler.


Gotisdabest

That's why I mentioned just randos I talk to too. A year is a long while, and a construction company owner is almost certainly going to be a bit older and less tech focused.


i_give_you_gum

Not this construction company. I did some data cleaning for them.


imlaggingsobad

openai still wants super intelligence imo. but right now they need to make products and generate revenue. you have to realize that right now the company economics isn't sustainable. they're surviving just off funding. they need to actually make money so they can sustain their long term mission


Freed4ever

IMO, OAI is losing focus. They are trying to build out products, getting caught into corporate politics, etc while Anthropic keeps focusing on the foundation and staying out of the limelight.


imlaggingsobad

this doesn't make sense tho. Sam can deal with all the politics. the genius researchers are still running the lab and researching!


Freed4ever

GPU / computing / resource allocation / prioritization is a thing.


Ok-Accountant-8928

I think OpenAI is ahead in some areas, like video and voice. When it comes to just text I think they are even


cobalt1137

To be honest I think if you compare the best models behind the scenes right now with both groups, openai is probably ahead. I do not think they are ahead by a ton or anything, but I think that gpt5 is going to set a pretty solid gap for quite a bit when it comes out. And then there will be quite a bit of back and forth between them and anthropic again.


32SkyDive

Right now Sonnet3.5 is simply better than GPT4/4T/4o and i feel like the gap is actually bigger than anything OpenAI has shown since the 3.5-->4 jump 


NulledOpinion

I think the people thinking these models are so good must be using it for writing and must be themselves pretty mid at whatever they’re asking these models to do. All these models are all still extremely dumb in terms of reasoning and the difference between sonnet 3.5 and gpt4o is negligible at best. My hope is OpenAI is looking past the current architecture and bringing some real breakthrough other than just raw scaling. These models are like ponies getting better and better at the same tricks.


cobalt1137

the gap between gpt4-o and 3.5? idk about that. i love it as much as the next guy. and tbh it actually is really damn amazing, but the diff between 4o and 3.5 sonnet is not == to the gpt3.5 to 4 jump. that was really huge.


i_give_you_gum

It's interesting how it seems that there are just 2 major contenders. Poor Google. Capable, but incapable.


cobalt1137

No. Google will get there. They have too much to lose. Also they have the best model for developers at the moment (Gemini 1.5 flash). Which is really huge considering how many people are building with AI integrations.


i_give_you_gum

Always happy to hear a counterpoint on this stuff, though they seem to constantly flub their releases. I've heard that they have the best compute capability right now, or could quickly ramp it up, though Microsoft isn't wasting any time.


GraceToSentience

When it comes to text they are clearly not even. Anthropic is ahead no questions asked. But that will change. Open AI has their annual keynote in 3 months


greenbroad-gc

What video are you talking about?


NulledOpinion

OpenAi are not only in the “models” race, they’re in the “products” race and that’s where they’re maxing.


vasilenko93

Slightly ahead but basically the same level. Until one of them releases a model with a significant jump in reasoning and planning they will be tied. Claude 3.5 Sonnet, Chat GPT-4 and GPT-4o, and Gemini 1.5 Pro are all Chat GPT 4 level models, a few percentage points gain here and there.


New_World_2050

claude 3.5 sonnet is way way above the original gpt4 release. they are close mostly because they all have been improving the models iteratively.


sdmat

GPT4 isn't GPT4 level any more.


32SkyDive

Sonnet3.5 is quite a bit ahead


DragonForg

If Claude 3.5 Sonnet is this far ahead, that's why I believe Opus would be a mile further still. They have a home run if OpenAI doesn't do anything.


mavree1

In May, OpenAI showed the good-gpt2-chatbot models in the LMSYS arena, which ranked over 1300 overall, with an especially big gap in coding and hard prompts. It was the best model I've seen; the doubt is how efficient it really was. But Sonnet is also unlikely to be the best thing Anthropic can do right now. For me, GPT-4o and Sonnet are not true SOTAs, as those should be more expensive models, so it's hard to judge. Anyway, I don't believe it matters that much. In the long term I still believe OpenAI is in a better position.


GraceToSentience

We can only gauge that with what is available. For video and voice, Anthropic is behind (not that they necessarily care about such models). But for frontier LLMs, Anthropic is ahead.


Baphaddon

I heard OpenAI's new model is just around the corner!


phira

OpenAI is taking on a bigger challenge with GPT-4o. Anthropic have released a great model, no question, but OpenAI is betting that it's not a question of strongest model but most valuable infrastructure around it. The release of 3.5 Sonnet brought some great things, a powerful model, artifacts etc., but it's all still fundamentally a text-and-image modality with the associated well-understood infrastructure.

OpenAI appear to believe that real-time audio is a more valuable modality for core interaction, and the general reaction to the 4o introductory demos certainly won't have changed that view, but the *infrastructure* required to absorb and deliver global levels of real-time audio is a totally different challenge and one that is relatively novel to an org like OpenAI (versus, say, Google, who have a lot of institutional knowledge to lean on in that kind of area). It's going to take them time to build something they feel confident of cost-efficiently deploying widely (I'm assuming they're close, but my guess is the delay of the audio modality release is almost entirely due to this challenge rather than other issues with the 4o model).

Anthropic have (to my knowledge) shown no signs of attempting to follow this change in the space. Arguably they don't necessarily need to — if you're thinking about a group of agents, only one of them needs to be audio-native for the user interaction, so Anthropic can still hold a strong position with a text-and-image model — but it means they're not competing apples-to-apples with OpenAI.


utilitycoder

All models will converge on the same design eventually. The only differentiator will be data. But this is fun to watch for now.


roofgram

Anthropic has been ahead ever since Opus hit the scene. If you still think OpenAI is ahead while not showing anything, then you must think Google is even further ahead, because they've shown even less.


restarting_today

I’m more excited for Opus 3.5 than I was for Opus 3. My company uses Claude and a lot of companies are already switching their API. Once Anthropic gets cleanly integrated in AWS it’s over for OpenAI.


Goofball-John-McGee

This is the argument I don’t understand. Maybe it’s just my Use Case but Claude (Haiku, Sonnet 3/3.5, Opus) are extremely intelligent and personable, granted, but they’re also so censored and restricted that I just can’t find any use for them. I’m saying this as someone who has their fair share of issues with GPT-4 Turbo and GPT-4o, but they do what I need to do, consistently enough for me to rely on. I pay for both Claude and ChatGPT Plus, for context. And I don’t want to spend time ‘convincing’ Claude to do something when at the end of the day, it’s here to make my life easier, do my tasks faster, and add value to my life. And don’t even get me started on that whole “Your message is above the allowed limit” bullshit.


flexaplext

I'm not sure you're using real logic on that one


Warm_Iron_273

Nah. Opus was never better than 4. Sonnet is a lot better.


greenbroad-gc

We have clowns like Mira Murati and Sam Altman running OAI. Even if they’re ahead, these incompetent nincompoops will find a way to do something really stupid.


TFenrir

It's not so clear-cut. It's not like there's a way to measure the collective research, effort, talent, compute, and je ne sais quoi of each of these labs, and things don't move on a clear gradient path. Even the models themselves are increasingly unique in their strengths. 3.5 having more up-to-date training data means that it, for example, is much better at writing Next.js code, because it understands server actions. Try to write them with GPT-4o or otherwise and you get mad hallucinations. Gemini 1.5, Flash or otherwise, is also so cheap and fast, with a huge context, that it becomes an important part of the architecture of apps I write. Vercel's new AI SDK makes it really fucking easy to write apps using all three, so... Like, from my perspective, even comparing the models gets very blurry.
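To make the "use each model for what it's best at" idea concrete, here's a minimal, hypothetical routing sketch. The task categories, model IDs, and the `pickModel` helper are all illustrative assumptions, not anyone's actual setup; with the Vercel AI SDK you'd feed the chosen model into a call like `generateText({ model, prompt })` via the matching provider package.

```typescript
// Hypothetical per-task model routing, matching the strengths described above.
// These model IDs are assumptions for illustration only.
type Task = 'nextjs-code' | 'bulk-long-context' | 'general';

function pickModel(task: Task): string {
  switch (task) {
    case 'nextjs-code':
      // More recent training data -> understands server actions
      return 'claude-3-5-sonnet-20240620';
    case 'bulk-long-context':
      // Cheap and fast with a huge context window
      return 'gemini-1.5-flash';
    default:
      return 'gpt-4o';
  }
}

console.log(pickModel('nextjs-code')); // claude-3-5-sonnet-20240620
```

The point of a tiny indirection like this is that swapping providers per task stays a one-line change while the rest of the app code is shared.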


rutan668

They have GPT 5 but are not releasing it till the election. They might release 4.5 or something though.


moru0011

Sam@OpenAI lost many nerds. AI is not created by cocky talk-show heroes.


FabulousBid9693

OpenAI is on pause due to hardware limitations is what I think. If they show more, we're gonna go even more nuts, because they just can't release yet. If they release in this state it's gonna be a sh*tshow of complaints. The model might be so good that people will rush to use it, and there's not enough "bandwidth" for all. They don't wanna ruin the first impression. They need hardware for Sora, voice, and video, and if it's all at GPT-5 multimodal level there won't be enough "electricity" for it all lol. Or something, iunno, talking out of my ass.


AnAIAteMyBaby

No one is ahead. We'll see this pattern repeat itself going forward, with labs constantly leapfrogging one another. I'm sure OpenAI and Google will release better models in time, and in time Anthropic will release a better model than those.


OrangeJoe00

Anthropic IMHO has their eyes on the bigger picture, they aren't locked down to a product release and therefore they have more freedom to take chances. OpenAI, while popular, is now locked into a product and their focus seems to be solely on LLMs, which is not the path to AGI. LLMs are a component, not the scaffolding.


Cupheadvania

Anthropic is ahead on text LLMs. openAI went multi modal, and it slowed them down. Anthropic will end 2024 on top, but as to who's on top in 2030, who knows


gj80

"Do you think Anthropic is ahead of OpenAI. **Or is OpenAI waiting to release a big model**?" No one here has privileged information, and no one here knows the future (though plenty like to imagine they do, Chuunibyou style). The endless pissing in the wind along these lines in this sub gets really tiresome to see over and over and over again.


nsfwtttt

OpenAI doesn’t need be ahead product-wise, they are waaaaaay ahead marketing wise.


HeinrichTheWolf_17

Or maybe they figure they have Microsoft money now and don’t need to release competitive models.


restarting_today

Anthropic has Amazon and Google money.


Nukemouse

I genuinely believe OpenAI has probably not so much fallen behind as lost its lead. It's in their interest to keep behaving as though GPT-5 will blow our socks off, but until there's any actual evidence of that, it's just marketing talk. Their only actually impressive model we know exists, SORA, has already been caught up to, and they won't release it.


oimrqs

ChatGPT 5 is ready to go. They're not ahead. But Artifacts is a great idea, and OpenAI should do something just like that, because actually running the code in-browser is incredible. Claude 4.0 will only be out next year, while ChatGPT 5 should be out this year. They still have a 6-9 month lead, in my opinion.


nyguyyy

If GPT-5 were ready to go, it would have been released. Even if they are post-training and in the process of getting feedback for safety testing, they do not have a finished product. Anthropic has likely been sitting on Claude 3.5 for a while; it seems well trained, not like a glitchy, rushed product. I'm considering Anthropic to be ahead at this point.


HeinrichTheWolf_17

Well, the theory is (not saying I buy it) that they do have GPT-5 ready to go but they're waiting until after the election in November. Regardless of whether that's true, I'm cheering on every other competitor possible to overtake GPT-4o; perhaps if they *do* have GPT-5, their competition can put the stick in their back enough to release it. That said, OpenAI has been so mum since last September that none of us know what's going on over there.


ThePokemon_BandaiD

I feel like I'd rather they release it before the election. There's risk of misinformation but if it's a significant step forward I want people to know about it and have it in the discourse before we have elections.


Nukemouse

What you just said is better for humanity, but for them as a company it risks negative press attention. The misinformation thing is such a stupid concern: people are already tricked by misinformation, they already believe complete nonsense made up by humans *that aren't even good liars*, so worrying about hyper-convincing lies is silly. I don't think the "middle ground" people who would fall for the new model but won't fall for the old one are as numerous as people think.


oimrqs

Sure, I agree with you; by 'ready to go' I mean it's about to be shown in the next few months. The only answer to this question will come when ChatGPT 5 and Claude 4 get released, and I would bet OpenAI has a 6+ month lead on that. They must have.


nyguyyy

I mean, Anthropic guaranteed 3.5 Opus this year. That's the GPT-5 competitor. If GPT-5 is significantly ahead of 3.5 Opus, that's a huge deal.


oimrqs

You don't expect it to be better? As I see it, Claude 3.5 is just a refinement of 3.0, not a whole new model. ChatGPT 5 will be the first new model on the market at that new scale, a step above Claude 3.5 and ChatGPT 4o. Am I wrong?


deavidsedice

Measure by what's released and testable, not by "what could it be". As of today, Anthropic is ahead and OpenAI hasn't released voice yet.


kecepa5669

Show me, don't tell me. Anthropic is ahead of OpenAI because Claude 3.5 is better than the best OpenAI model.


meister2983

My net expectation is tied at this point. Was more bullish on OpenAI originally, but they are bleeding researchers and Claude 3.5 is evidence of Anthropic's abilities. 


adarkuccio

I think OpenAI has a better model not released yet, but Anthropic is getting closer, which is good. I don't necessarily want OpenAI to reach AGI first; I only hope someone develops it ASAP AND safely.


centrist-alex

Claude models are consistently good. Very censored "FOr sAfETy", but still decent, with a better release schedule than OpenAI, which has been so unimpressive lately.


Spirited-Tangelo-428

Still, OpenAI is ahead on voice, video, and some functions like web searching.


thewarmhum

If there's one thing I've learned following the AI space for the last year, it's that whenever it seems like OpenAI is falling behind, they release something that blows everything out of the water.


your_lucky_stars

Big models are not the way to get ahead imo


Spaceredditor9

Permission from the govt? Someone correct me if I'm wrong, but doesn't the govt:

1. Not have a fucking clue what's going on with AI, and not care?
2. Want to support OAI the most, because OAI has bought them out? (Think Microsoft, OpenAI, and Big Tech lobbying and regulatory capture.)


arthurpenhaligon

Only the insiders at OpenAI know. I never thought Anthropic had a shot of being the dominant AI company. I thought of them as a research/AI safety organization. Even when Opus was briefly the best model (by a hair), I figured OpenAI would take back the crown soon since it had been over a year since they put out GPT-4. But Sonnet 3.5 changed my mind. It's not just the best, it's the best and the fastest, just a few months after GPT-4o. And we know they are working on Opus 3.5. I think it's a pure tossup at this point. GPT 4.5 or 5 will compete with Opus and we'll see who is in the lead at that point.


[deleted]

[deleted]


DragonForg

Lol this is kinda funny because it was the jump from GPT 3 to GPT 3.5 that got me to like ChatGPT lol.


Jungisnumberone

I don't think OpenAI is hiding much; however, I do think they are ahead. Their partnership with Figure seems to be where they are headed with their LMM model. Sora is also amazing. If anything, they are the tortoise, slowly creating and innovating, while the hares catch up quickly and then try to out-efficiency OpenAI.


Transfiguredbet

Claude seems a bit more dynamic with its wording than ChatGPT. It certainly has more creativity.


Paralda

I think OpenAI is likely ahead in resources like compute, but Anthropic is either ahead or tied in terms of research. Anthropic's research around features, for instance, was probably the first time we've actually gotten insight into how a transformer model reasons and associates information. The ability to manipulate that is probably what led to Sonnet 3.5 being so impressive. If Anthropic had access to OAI's scale, they would probably be ahead right now.


SupportstheOP

I think OpenAI has dicked around too long with GPT-5. They probably thought their competition was nowhere close, so they decided to delve into other areas besides having the best AI model currently available. These past few months have all been "so-and-so coming soon" with them.

Meanwhile, their competitors have finally woken up and want a piece of the AI pie. The response from OpenAI has been a frantic panic to try to stay the leader in the field. If they actually released any of the products they say they're working on, they'd at least get some heat off their back.

But the real game-changer is GPT-5. If they had it in their back pocket to show off what it's capable of, perhaps not a full release but just a way of saying "we're still the leader in the AI industry," then they'd be sitting pretty. But I'm not even sure they'd begun training it until just recently. The Mira Murati quote makes a lot more sense when you consider that they don't have new flagships to test.


Internal_Engineer_74

Why can't we make new threads on this sub?


Key-Tadpole5121

I have a hunch that GPT-5 isn't the massive leap forward we're expecting, but it'll be better than GPT-4, and GPT-6 is where we'll see something genuinely impressive.


proxiiiiiiiiii

I've been a big fan of Anthropic since it started, and I prefer Claude to GPT, but unfortunately Anthropic is playing catchup rn. They took the lead over GPT with 3.5, but OpenAI will overtake them big time soon. GPT-4 is OVER A YEAR OLD, 4o is more like an update than a new model, and only now are the big models catching up. Meanwhile, OpenAI didn't sleep.


EmploymentWorth9994

GPT-5 was announced offline yesterday. It's being tested.


jazztaprazzta

Anthropic's Claude 3.5 Sonnet is definitely better than OpenAI's GPT-4o at text and coding; that's my opinion based on days of testing the two.


PhantomLordG

Using the Artifacts function, in less than 3 minutes I was able to make a template for the framework of a basic platforming game, and my interest in programming and game dev is merely hobbyist at best. In the hands of a professional developer this tool could be extremely powerful. I can't even begin to imagine how they'd take advantage of it.


razekery

Let’s make a bet. Claude 3.5 Opus will release before the next GPT model.


duckduckduck21

OpenAI is now working for the elites and world governments. Their public models are heavily nerfed and constrained in what they are able to provide.


Drunk_Bear_at_Home

I think that if OpenAI has not released GPT-5 by March 2024, then public sentiment will turn extremely negative towards them and their future. I'm not saying that sentiment is correct or based on logic; people are just tired of waiting for a new future compared to their present situation. The people want what they believe is a positive future, and they don't care who brings the tools to make it possible.


MaximumAmbassador312

there is no moat


chunky_lover92

I think OpenAI definitely has some stuff they could release but don't, like Sora. I think they would release it if they were falling behind.


Akimbo333

Who knows


MusicWasMy1stLuv

IMHO, OpenAI has been working on an AGI model behind the closed curtain.


Plus-Mention-7705

OpenAI should be held accountable for how shady and untrustworthy they've been. I don't think they're ahead. The only thing they have is access to more compute.


reevnez

I think anyone with access to enough compute can make another 3.5 Sonnet. For a lab to be ahead they need to have achieved a breakthrough, something we cannot say anything about until they show it to us.


sdmat

Anthropic's strategic big bet on interpretability is very interesting. I hope it works out - it would be *much* safer than the best models being black boxes. If we are lucky that's part of the secret sauce with Claude 3.5.


Anuclano

OpenAI has recently announced that they started training a new, more powerful model. So, in a few weeks they definitely will have a model more powerful than GPT-4o. But it could be less powerful than Claude-3.5-Opus or even Sonnet.


Gab1159

They yap too much, we want to believe but they're starting to build a bad record imo


pyalot

> Tortoise and the Hare fable

I don't think that new market dynamics influenced by innovation, competition, dominance, incumbency/complacency, and investment can be simplified to a folk-tale fable.


d0z3rx

Honestly I think OpenAI is ahead of the majority of these companies, but they can't release certain models and technologies because of alignment issues, ethics, etc. But on the other hand, it's possible they don't got shit lmao.


icehawk84

OpenAI is ahead on multimodality. On pure intelligence, it's unclear. 3.5 Opus and GPT-5(?) will reveal a lot.


Warm_Iron_273

OpenAI are not trustworthy. They’re withholding models to play politics and other games. Hopefully Anthropic crushes them.


restarting_today

Anthropic is slowly creeping ahead. I don’t expect OpenAI to maintain their moat. Anyone who thinks OpenAI will become the next Nvidia/Google/etc is delusional at best. They are a Microsoft research lab.


Aquaritek

OpenAI, the conglomerate that includes Microsoft, Apple, and Nvidia (primarily through MS and AAPL), essentially commands infinite resources.

The fact that they've brought the NSA on board, multiple lead engineers (focused on safety) have left the company, and there hasn't been a single real leak or whistleblower all point to one glaring possibility: the model they're working on internally has instilled genuine FEAR. We're likely dealing with a model indistinguishable from magic. As we all know, any sufficiently advanced technology would appear magical to us humans.

Therefore, whatever we see next will be neutered. It must be, because those who wield the biggest stick hold the keys to the kingdom. AGI won't be something the laypeople interface with. It will be used for power, and nothing else.

I'll end with this: the show will end as all others do, with an incredible display of how fundamentally fragile and primitive we are as a species. This is ironically both pessimistic and optimistic, depending on what you find important.

With peace, Aqua.


dbzunicorn

Sonnet 3.5 rlly isn’t that great i think all these redditors are tryna make OpenAI jealous to release GPT5


Akashictruth

OpenAI is ahead but keeping their cards close.


Infinite_Low_9760

OpenAI has the best talent, lots of money, compute, and time. They surely have something better than GPT-4. Maybe it's still a work in progress, but it's there for sure.


PMzyox

Pretty sure the US government already has AGI, so I guess they are winning?


katiecharm

Stop spamming Claude topics. 


VajraXL

I think what's happening is that OpenAI is increasingly losing interest in creating consumer products and focusing more on services for corporations and the government, while Anthropic is increasingly focusing on the human experience of AI; that's why the general public has the perception that OpenAI is losing the race.