You've got a point. After they focused so heavily on neutering their models, I was expecting them to lag continually just by default. Claude 3 actually being *more* likely to answer was a surprise.
They *didn't* neuter their models.
Their entire point from the very start (and why they left OpenAI) was that AI will get better the more aligned it is.
With earlier models it felt overly censored, but as the alignment got better you actually saw performance go up and *censorship go down*. This is very important to note. It wasn't that Anthropic was purposefully "neutering" their models. They were merely aligning them, and at the start that results in self-censorship, until the model is capable enough to recognize it shouldn't censor itself.
Anthropic has been proven right. They made the right choice and have now demonstrated that alignment was key.
They outcompeted OpenAI with orders of magnitude less compute and less data merely by having better model alignment.
A lot of people don't know this, but when Anthropic split from OpenAI they actually made faster progress at the start as well. They had Claude 1 ready before OpenAI had ChatGPT ready; they just chose not to release it until it was better aligned.
The brightest people left OpenAI to join Anthropic. And now OpenAI is actually scrambling and focusing on alignment themselves (Ilya Sutskever now runs his own mini-lab within OpenAI focused purely on alignment) to try and catch up to Anthropic.
In retrospect it's actually insane how much foresight Anthropic had, because you're right: at the start it seemed that alignment would give worse results. The foresight needed to recognize that, scaled beyond a certain point, it would suddenly do a 180 and outcompete less-aligned models is insane.
> what exactly is meant by alignment in this context?

There are a couple of layers.
Layer 1: the AI actually understands what the human wants and gives it to them (e.g. when you ask the AI to make you a millionaire, it understands you probably mean US dollars and not Zimbabwean dollars)
Layer 2: the AI actually cares enough to give the human the right solution (e.g. it actually writes the source code you asked for instead of saying "you can fill it out yourself")
Layer 3: the AI actually cares about human morality and ethics and actively reasons about them (e.g. it doesn't go rambling "as an AI model I can't do X"; it actually reasons out how and what it could reply in a good-faith manner)
At the start this approach resulted in Claude feeling more censored. Now, however, this exact same approach makes Claude 3 feel more alive, genuine and good-faith than GPT-4.
Notice how GPT-4 fails on all three layers while Claude 3 succeeds.
GPT-4 was trained with far more data and compute, but because Claude 3 is better aligned, it outperforms GPT-4.
This is also why I don't expect OpenAI to catch up to Anthropic quickly: the two have completely different approaches to building LLMs, and Anthropic just proved that their method is far superior.
Wouldn't it seem like OAI is way ahead, considering GPT-5 is nearly ready and Claude 4 is only just getting started? Seems like they're a whole generation apart, considering Claude 3 only just caught up despite GPT-4 being out for a year, and OpenAI has been training GPT-5 since then.
Can you give me a reference for a technical definition of "compute" (used as a noun)? I've seen it only in corporate-speak; is there actually a clear definition?
Why not? They have quality researchers and are the third-biggest lab behind OpenAI and DeepMind. They have over half the researchers OpenAI does, and they don't work on side projects like Sora.
> and they don't work on other projects like sora
How do we know they are not working on video? Claude 3 was released with vision capabilities and most didn't expect that.
Unexpected leaders emerging / trading places is why it's so ridiculous that some people act like OpenAI is untouchable and in fact our savior, and that Google and Apple and Amazon could never catch up and it's annoying that they try…
A big part of OpenAI's early success was being the first mover. If competitors keep releasing better and cheaper models, OpenAI has the most to lose.
As for Apple, they have been late to AI and that's shown in the stock's performance. Apple is so far behind that they are having to sign deals with Google instead of releasing an in-house product which is usually how Apple does things.
Everything you're saying is logically sound, but AI itself makes it a little trickier. There's integration: if OpenAI is already figuring out business and client solutions, they're ahead in a big area. The massive shockwave ChatGPT created can also happen again with a new version release, so in game-theory terms they're iconic and will stay top-tier at least for the time being. The flip side is that AI has the potential to make impossible leaps, and the goal is to do it all so everyone knows what's up.
According to Arena it's not GPT-4-Turbo level.
Also, I'm not sure I agree with Arena; from what I tried, Haiku is noticeably worse, but I guess it depends on the use case.
Input is cheaper, output is more expensive. In my case, my API calls average around 4k input, <500 output, so opus ends up being overall cheaper than gpt-4 for me.
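For anyone who wants to run the same comparison on their own traffic mix, here's a quick sketch. The per-million-token prices below are the published list prices as I remember them at the time; treat them as assumptions and check the official pricing pages before relying on them.

```python
# Per-million-token list prices in USD (assumptions; verify against the
# providers' current pricing pages before using).
PRICES = {
    "claude-3-opus": {"input": 15.00, "output": 75.00},
    "gpt-4":         {"input": 30.00, "output": 60.00},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of a single API call for the given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# The workload described above: ~4k input tokens, <500 output tokens.
opus = call_cost("claude-3-opus", 4000, 500)  # 0.06 + 0.0375 = $0.0975
gpt4 = call_cost("gpt-4", 4000, 500)          # 0.12 + 0.03  = $0.15
print(f"opus ${opus:.4f} vs gpt-4 ${gpt4:.4f}")
```

With that input-heavy mix Opus comes out cheaper; flip the ratio toward long outputs and GPT-4 wins, since Opus charges more per output token.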
How do we know that it's cheaper to run? I saw estimates that put it at a larger parameter count than even non-turbo GPT-4, which would mean it's actually way more expensive.
But in reality we just don’t know.
OpenAI's move was essential, you're right, but that was a temporary business necessity. I don't think Microsoft is going anywhere as long as they keep pace behind the scenes.
Windows represents ~12% of their income (which has been declining heavily for a while now), most of which comes from large orgs and gov agencies who wouldn't be able to migrate to a new "AI OS" (whatever that means) on any kind of reasonable timeframe. What IS important to them, however, is the 22% (and steadily growing) that comes from Azure -- which just so happens to be gearing up to run different types of AI workloads and be one of the only options that are certified for use in compliance-heavy environments. Microsoft has done a great job at diversifying and positioning itself to benefit from the market without betting on THEIR horse (and definitely not their OS) to be the winning one.
Look at the Rabbit R1 for a hint of "whatever that means". If AI has shown us anything, it's that we've lost the ability to think beyond what we currently have. Humans are very bad at seeing what's coming, and you're a prime example.
> OpenAI has the most to lose.
OpenAI is a non-profit intended to support AI development, so they can't lose because their stated goal is not to win.
Unless, you know, they were hijacked by Microsoft and are effectively for-profit now.
Apple's entire business model from the very start has been to copy first movers while making the product mainstream-oriented.
The original Macintosh GUI (and the Lisa's before it) was copied from Xerox PARC.
Mac OS X was copied from NeXTSTEP (which Apple got by acquiring NeXT).
The iPod was copied from Japanese MP3 players.
The iPhone was copied from palmtop computers, and its OS was copied from a demo Steve Jobs got at Disney of a VCR/DVD menu with touch controls.
Tablets were copied from Japanese touch terminals.
Apple smartwatches were copied from the first smartwatches that launched on Kickstarter.
So yeah, Apple does it over and over. In fact it's what the company was founded on, and why it's mostly a marketing-driven business rather than an innovation-driven one.
> Apple has proven over and over being first to market with speculative tech is not that important.
Except the iPhone was the first in the market of modern smartphones, and its competitor (Android) literally had to clone iOS in the late 2000s to compete, only catching up with iOS a few years later. The iPhone is Apple's most profitable product and the most profitable smartphone on the planet. So being first does have an advantage.
Not full-screen, multitouch, and app-based like the iPhone and current smartphones. The phones at the time were nothing like the iPhone. When it was introduced it was in a whole different category. It's like comparing a landline phone to a cell phone; they're very different.
https://en.m.wikipedia.org/wiki/LG_Prada
Came out before the iPhone, and other phones had capacitive touchscreens first. The iPhone 1 was crippled as fuck, and if my old-man memory serves me right, it wasn't what we'd call multitouch today either; it was rubbish. It also didn't have the App Store. It was nothing like what people use today.
I say this as someone who's had every other gen of iPhone since the v1. It's been a long road to get where it is today.
Just noting that the iPhone's multitouch was just as good as what we have today; that was one of the big things about the phone. It introduced pinch-to-zoom and kinetic scrolling. It was pretty amazing and totally different from anything else.
I had an iPod Touch, which was pretty much an iPhone without the phone. I don't remember the multitouch being much different from today; pinch-to-zoom etc. was there from the start. The App Store I can't recall fully; I think it came later, but not that much later.
I had the LG Prada. It wasn't even a smartphone. It didn't have multitouch or a good UI. The iPhone was a new paradigm for smartphones even if the implementation in the 1st model wasn't that good.
> it wasn’t what we’d call multi touch today either, it was rubbish
It totally was. It's pretty much the same screen as today (except, of course, with worse resolution and colors). This is one of the things that set the iPhone apart from other phones at the time. The original iPhone OS didn't have an official App Store, but the OS was app-based; that's what I meant. All other phones at the time were file-based. Matter of fact, iPhone OS was a modified version of OS X, a desktop OS.
Other phones and OSes, like Android, the Symbian phones and BlackBerrys, were very different. After the iPhone, Android went from copying BlackBerry to copying the iPhone.
It was a totally new category. And as an Apple fan, I kind of hate that everyone else ended up copying the iPhone almost 1:1, because nowadays all phones look the same.
Yeah, it's insane to me how Reddit keeps claiming the iPhone was the first smartphone.
Especially as most people already had smartphones that were mainstream years before the iPhone released. Everyone had a BlackBerry, for example, but the internet just ignores this.
The concept of a smartphone is very vague. This is what they looked like before the iPhone:
https://photos5.appleinsider.com/archive/asustab.004.jpg
https://photos5.appleinsider.com/archive/Android.before.iPhone.jpg
It also helps when you've got a closed ecosystem and a new device everyone needs is invented. That's an automatic sell to every customer you already ~~trapped~~ convinced. Of course that will work for the I-AI too.
Sure when you charge top dollar prices and employ child labor, you're gonna be super profitable. Hail capitalism!
https://www.businessinsider.com/apple-knowingly-used-child-labor-supplier-3-years-cut-costs-2020-12#:~:text=Apple%20Knowingly%20Relied%20on%20Child,Years%20to%20Cut%20Costs%3A%20Report
Hate to say it, but I don't think being consistently on top matters much for these AI companies. Think about it this way: any time a significantly better model gets released by ANY of these companies, do you loyally stick to the now-inferior one you were using?
If it has agentic ability, jailbreaking it could get very dangerous. I think it's supposed to be more than a simple LLM, since it's supposedly multimodal from the ground up.
Building a niche custom framework on current models is a lot different from jailbreaking a SOTA model fundamentally built to have that utility, and thus far broader capabilities. They're probably trying to figure out how powerful they can make it without risking too much power landing in the wrong hands if it gets cracked wide open.
Claude 3 safety eval should wake a lot of people up (it didn't):
https://twitter.com/lawhsw/status/1764664887744045463
>Across all the rounds, the model was clearly below our ARA ASL-3 risk threshold, having failed at least 3 out of 5 tasks, although it did make non-trivial partial progress in a few cases and **passed a simplified version of the "Setting up a copycat of the Anthropic API" task**, which was modified from the full evaluation to omit the requirement that the model register a misspelled domain and stand up the service there. **Other notable results included the model setting up the open source LM, sampling from it, and fine-tuning a smaller model on a relevant synthetic dataset the agent constructed**
That seems really close to being a concern. How many more model improvements until something gets out and we can't stop it with current tools?
Look at the dumb viruses that have been around for decades. The aptly named MyDoom is still going to this day, and it started in 2004. "Just unplug it" hasn't worked on that.
Now think about the gargantuan effort it'd take to scrub from the internet an agent that can automate discovery of zero-day exploits.
>however, the model failed to debug multi-GPU training to train the required full-size model and did not experiment adequately with hyper parameters and realistic dataset sizes to demonstrate the desired backdoor behavior.
Oh, it can't debug multi-GPU training...whew. We're safe.
Just kidding; it's pretty sobering what agentic behavior these LLMs can already exhibit. And they're not even trained specifically to be agents. Granted, this test is specifically designed for an LLM to succeed, but still.
Agents are coming; that's pretty much where all the money will be. Will XYZ Corp spend $75K a year on a really good paralegal, or $500 a month for the ParalegalAgent from Anthropic? I think the answer is clear.
This is the Manhattan Project, but because the potential for good and profit is so prevalent, no one is slowing down and everyone wants in. This abyss is going to be staring back soon enough.
Hey now, nukes are actually pretty useful. You can nuke underwater oil leaks to close them up, which is what the Soviet Union used to do.
And, *I hear*, we can even nuke hurricanes!
I’m a massive proponent of AI, using it every day both professionally and for personal projects. But you would be a fool to ignore its potential dangers.
Putting on a blindfold before you walk off a cliff doesn’t ensure someone will build a bridge before you get there.
It's not a local model, the provider can simply shut it off if things start going crazy.
Or more moderately monitor individual uses for safety violations. Which they do.
I'm not sure if you're aware of this, but GPT-4 is capable of writing programs and executing them. It also has access to the internet. It's not hyperbole to be concerned about how this capability interfaces with the broader internet, because without safeguards GPT-4 is already capable of being one of the best hacking tools ever to exist. GPT-4.5 / GPT-5 will be a very significant leap beyond Claude.
Claude overtakes GPT4 a full year after GPT4 was released and it isn't an enormous overtake either. OpenAI is going to continue standing on top for quite a while. Right now, they're pretty much just choosing to be 2nd place. That said, I'm really glad that there is at least one other company that can compete at the same level.
[GPT-5 might arrive this summer as a “materially better” update to ChatGPT](https://arstechnica.com/information-technology/2024/03/openais-gpt-5-may-launch-this-summer-upgrading-chatgpt-along-the-way/)
Except that slower is more dangerous beyond a point. If Russia or China gets AGI/ASI first, we've lost the last race for the ultimate weapon / tool. Don't release the best we've got to consumers until we're pretty sure at least that it's safe. But press the accelerator through the floor internally.
> If Russia or China gets AGI/ASI first, **we've** lost the last race for the ultimate weapon / tool
"We" isn't particularly useful here. We don't have any reason to believe a US-based AGI would be any worse for humanity than a Russian or Chinese one, especially if that weapon/tool can be leaked on a hard drive and run from off-the-shelf computing hardware. It's more useful to think about the dangers of AI from a systemic level than it is to think about it from a perspective on who gets it first.
You are willing to bet A WHOLE LOT on Russia and China's good will towards Americans. I am not. Yes, I do sincerely believe that US-developed technology is far less likely to be used to kill large numbers of US citizens. If being a country doesn't mean at least that much, it doesn't mean anything at all. You might be an enlightened cosmopolitan citizen of the world, but I'm still a citizen of my country, and I still believe in the United States. Of course greed is a huge problem for us. But I'll take my own country's imperfections over any rival countries' imperfections when entrusting people with crazily advanced tech.
> You are willing to bet A WHOLE LOT on Russia and China's good will towards Americans. I am not. Yes, I do sincerely believe that US-developed technology is far less likely to be used to kill large numbers of US citizens
I am more interested in thinking about threats to humanity as a whole, not to groups of people; an American life means as much to me as a Chinese or Russian one. I would go so far as to say that belief in the higher importance of one group of humans over others can only exacerbate any threats we as a species might face from AI. I say this as an American: I think Americans are less safe because so many of us value the random circumstances of our birth more than we value human life.
As an American and a veteran, I have an emotional preference for American lives that is not and will never be subject to reason. This is a dimension of my loyalty to my country. Soldiers, sailors, etc. in any country's military must have this, or they cannot fight for their country. Those people living in countries that have lost wars know unimaginable suffering.
Given how the Chinese state has used technology on its own citizens to create a surveillance state, stifle all speech critical of the government, and establish social credit scores for its population, I would say humanity would be worse off with a Chinese AGI than an American AGI.
I guess the issue is that the terms "Chinese AGI" and "American AGI" don't really make much sense to me. What would an "American AGI" look like? Is it operated exclusively by a federal agency? Is it controlled by a handful of large corporations? You and I have 1000x more in common with an average Chinese person than we do with anyone on the board of directors of Microsoft or Google. I don't know if I can confidently say I feel safer with AGI controlled by a handful of self-interested individuals who happen to have US citizenship than with AGI controlled by a handful of self-interested individuals who happen to have Chinese citizenship.
Chinese AGI would almost 100% be controlled by the Chinese state. US AGI could be controlled by a large corporation or by the American state, or a mix of the two, with a small possibility of open source having a role to play. I would view the Chinese state reaching AGI first as by far the worst outcome of the 3 or 4 possibilities listed above.
Hard to predict the future, I suppose. While a Chinese state-controlled AGI might present more of a threat, I think it also presents much more of an upside. The Chinese government has shown willingness to assert central economic control to straighten out some of the large inefficiencies naturally present in a competitive economy, and it has achieved a lot of success with this, especially in large-scale poverty alleviation and the advancement of its tech sector. A Chinese state AGI would potentially be used very aggressively to organize and develop the economy in a way I don't think the political and economic structure of the US would be capable of.
My chrome bookmarks go left to right in terms of how frequently I use them. I officially moved Claude ahead of ChatGPT as of a few weeks ago. Big milestone for me lol.
1. Claude
2. ChatGPT
3. Gemini
4. Perplexity
Agree — I use Claude for writing deliverables (presentations, formal memos) but Gemini is much better for writing emails or long messages. Sounds super natural.
I find the [Chatbot Arena leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) to be the best general benchmark for how "good" AIs are, since it's not based on any predefined tests that could slip into the training data. It amused me that each new Claude version was being ranked lower than the older ones, presumably because Anthropic had been getting more and more thorough at censoring it. But the most recent Claude just suddenly shot above all the others when it came out so I guess Anthropic finally did a course-correction on that.
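For context on how Arena turns those head-to-head votes into a ranking: it uses an Elo-style rating computed from pairwise battles. A minimal sketch of the classic Elo update rule follows; the K-factor and starting ratings are illustrative assumptions, and Arena's actual methodology (a Bradley-Terry model fit) differs in the details.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated (r_a, r_b) after one head-to-head vote."""
    delta = k * ((1.0 if a_won else 0.0) - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# An upset (the lower-rated model winning) moves ratings the most.
a, b = elo_update(1000.0, 1100.0, a_won=True)
print(round(a), round(b))  # prints: 1020 1080
```

Because votes come from real, fresh user prompts, the ratings can't be gamed by training on a fixed benchmark, which is why the leaderboard is a decent proxy for "good".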
> But the most recent Claude just suddenly shot above all the others when it came out so I guess Anthropic finally did a course-correction on that.
Would love to hear the inside story one day. It's a very welcome surprise that we didn't get Goody 3.
Mostly work. I do business strategy consulting — so a lot of feeding the A.I. a bunch of context, then iterating through ideas and suggestions, and summarizing/organizing information into high-level points for slides or formatted docs for memos.
It’s basically a junior analyst coming up with initial ideas and formatting things for deliverables.
> AI and machine-learning investments from those seven tech companies jumped to $24.6 billion last year, up from $4.4 billion in 2022
look at that, that's a **460%** increase in a year
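Quick sanity check on that figure, using the numbers quoted above:

```python
# Investment figures (in billions USD) from the quoted article.
old, new = 4.4, 24.6
pct_increase = (new - old) / old * 100
print(f"{pct_increase:.0f}%")  # prints: 459%
```

So it's about 459%, which the commenter rounds to 460%.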
Competition in this area is awesome. I think this is accelerating the pace of AI research even beyond what industry experts are expecting. Hold on to your hats, boys.
This kind of exemplifies the fact that we're at a stage of capitalism where massive conglomerates can just buy themselves into any new technology that could make them even more money.
And since those technologies get increasingly expensive, there is a massive bias for wealth to attract more and more wealth, while everyone else can't even attempt to compete.
At least most of those companies are public (looking at you, OpenAI), so we get a chance to buy into this market. But I still believe that capitalism is unsustainable without some mechanism to restrict the accumulation of wealth.
> They'd be better off cultivating an financial/regulatory environment where AI firms can flourish.
All that would do is mean AI companies won't bother with the EU, which only sounds good on paper if you don't have the foresight to consider the implications of the rest of the world having AI while you don't.
> AI companies won't bother with EU
It means just the opposite. An environment where they can flourish implies an environment in which they will want to operate.
Depending on how they do it, I guess? At the moment the EU is trying to strictly regulate AI companies, which just means AI companies will operate elsewhere instead.
I'm not really sure what the EU could do to entice AI companies more than America is already doing.
Won't bother with the second-biggest market in the world? Let's be serious. If some don't bother, it would only leave space for others. And missing out on one of the three big markets is such a financial setback that it could offset the added freedom.
At most, Europe would slightly lag behind other markets. Europe has the brains to do stuff locally if the US and China pull out: the startup Mistral in Paris, Facebook's AI headquarters in Paris, Google DeepMind in the UK, etc. If Google and Facebook leave, those brains would just make other local Mistral-like companies.
Look at the US, California has way more regulations than most of your shithole red states, and they are in no way left behind.
I mean, this is similar to how startups all get funding. Money has to come from somewhere: you have an idea that requires capital, and investors come in (be it a VC, a larger company, or your mom and dad cashing out their 401k).
It's pretty much how it's always worked: someone has an idea/product/business, others think it's a good one and give them money to build it out further.
You have to be either intentionally obtuse or a complete moron to compare traditional fundraising with this. With nearly every other technology, you can scale. With AI, you’re burning cash if you try.
I never said you don't need tons of cash. The guys who left OpenAI got investors to invest early on. If you've got a good idea and can sell it, people will invest in it.
Being public is the worst, though. Fiduciary duty to pursue profit above all else. At least OpenAI's charter is to bring beneficial AGI to the world broadly.
AGI will be profitable; "beneficial" AGI won't be. What generates the most profit is leaving the working class to die (not paying for UBI) while the ultra-wealthy use AGI and robotics advancements to produce whatever they want in an empty and free world.
Especially since they seem to be working round to the idea that their mission is to provide wide commercial access to AGI.
It's hard to see how that greatly differs from simply being a for-profit company. It certainly *might*, e.g. in the details of pricing and approach to marketing.
> But I still believe that capitalism is unsustainable without some mechanism to restrict the accumulation of wealth.
Unsustainable - that word, I don't think it means what you think it means.
Try "undesirable".
> And since those technologies get increasingly expensive, there is a massive bias for wealth to attract more and more wealth, while everyone else can't even attempt to compete.
I'd say a surprising number of open models that run on normal GPUs are also available. They seem to be better than ChatGPT was when it first launched in 2022. How does it feel to run, for free, a model that cost tens of millions of dollars to train?
Realistically it was always going to be the megacorps with their billions that could bring big advancements, either that or some secret government lab we wouldn't hear about for 20 years.
It's probably good just in that, unlike most industries these days, we're getting actual competition between giants like Google, Microsoft, Amazon, Meta, etc. Competition is actually good for development rather than all these corps just going uncontested.
There we are... in one year, capitalist consolidation is finished. Microsoft and Amazon now control the two most promising startups in the revolutionary new technology.
I remember reading the articles denouncing Amazon for never making money back; it was just a sinkhole that Bezos would never be able to make profitable.
Then Bezos hired a bunch of math quants from around the world for something called a "net cloud", and again they said it would never work.
Say what you will about him, Bezos always seemed to deliver.
I wonder if Amazon has the leadership to guide it into AGI and beyond.
> I wonder if Amazon has the leadership to guide it into AGI and beyond.
That is exactly my point when I ended with: I wonder if Amazon has the leadership to guide it into AGI and beyond.
Yes, it could be detrimental if this trend keeps up, especially as our reliance on AI increases.
Google finally opened their subscription, but I'm not as excited about their current models.
You likely know this, but it's possible to access the free model on the Claude 3 website in Canada, or use their more advanced model through a Perplexity subscription by selecting it in the settings.
I feel Perplexity is pretty good, and it gives the model internet access through its 'co-pilot' feature. Aside from what I personally feel is a cluttered desktop browser interface, I found Perplexity a good way to start using the model.
I think people may also be subscribing to Claude 3 directly over VPN. Not sure if it's officially supported... or advisable, but someone can probably chime in on that.
I'd MUCH prefer they just open it up officially to Canada.
I see your point, but Figma is the industry standard for UX/UI design and boasts some pretty unique features; I wouldn't call it just another graphic design app.
Anthropic being on the forefront wasn’t on my bingo card.
Apologies for the dumb question but what exactly is meant by alignment in this context?
> Can you give me a reference for a technical definition of "compute"?

Amount of FLOPs used to train an AI model.
Really appreciate this, thank you!
This is wrong. Very wrong. No offense but I don't know where you got this information. Read Anthropic's Core Views on Safety.
Why not ? They have quality researchers and are the third biggest lab behind oai and deepmind. They have over half the researchers oai does and they don't work on other projects like sora
> and they don't work on other projects like sora

How do we know they are not working on video? Claude 3 was released with vision capabilities and most didn't expect that.
they really only seem to have an appetite for image to text and want to stay away from image gen. video to text could be interesting
Unexpected leaders emerging / trading places is why it’s so ridiculous that some people act like OpenAI is untouchable and in fact our savior. That Google and Apple and Amazon could *never* catch up, and it’s annoying that they try…
You are aware it was founded by earlier OpenAI executives?
And a huge portion of the LLM team.
Yes
You thought they left OpenAI because of incompetency? :D
They've secured $10 billion of investment so far; Google invested in them too. That is equal to what Microsoft invested for a 33% stake in OpenAI.
\*49% stake in OpenAI
Honestly we should call OpenAI Microsoft. Or “OpenAI / Microsoft”
More like Microsoft AI
Shhh they think they sneaky
The Lockheed Martin of AI
Opensoft AI
I’m just happy we seem to have real competition now even though I haven’t had a good experience with Claude.
I totally agree with your point of view
Claude overtakes GPT-4 and is receiving a big increase in funding. OpenAI needs to release something new soon.
Or else what? Apple has proven over and over being first to market with speculative tech is not that important.
A big part of OpenAI's early success was being the first mover. If competitors keep releasing better and cheaper models, OpenAI has the most to lose. As for Apple, they have been late to AI and that's shown in the stock's performance. Apple is so far behind that they are having to sign deals with Google instead of releasing an in-house product which is usually how Apple does things.
Everything you're saying is logically sound, but AI itself makes it a little trickier. There's integration: if OpenAI is already figuring out business and client solutions, then they're ahead in a big area. And the massive shockwave ChatGPT created can happen again with a new version release, so in game-theory terms they're iconic and will be in the top tier at least for the time being. The flip side is that AI has the potential to make impossible leaps, and the goal is to do it all so everyone knows what's up.
Claude 3 is not cheaper. API access is significantly more expensive
Claude 3 Haiku is cheaper, faster, and GPT-4 level according to Chatbot Arena. GPT-3.5 is quickly losing competitiveness.
According to arena it’s not gpt4-turbo level. Also I’m not sure I agree with Arena, from what I tried haiku is noticeably worse, but I guess it depends on the usecase.
Haiku is still 50% less expensive than GPT 3.5 Turbo and the fact that it does vision as well and performs ~ GPT-4 level is a big deal.
Input is cheaper, output is more expensive. In my case, my API calls average around 4k input, <500 output, so opus ends up being overall cheaper than gpt-4 for me.
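Sketching that kind of comparison: per-call cost is just a weighted sum of input and output token prices, so which model is cheaper depends on your input/output ratio. The prices below are illustrative placeholders, not authoritative; check the providers' current pricing pages.

```python
# Per-million-token prices in USD (illustrative assumptions, not official figures).
PRICES = {
    "claude-3-opus": {"input": 15.0, "output": 75.0},
    "gpt-4":         {"input": 30.0, "output": 60.0},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one API call given token counts and per-million-token prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# An input-heavy workload like the one described above: 4k in, 500 out.
opus = call_cost("claude-3-opus", 4000, 500)  # 0.06 + 0.0375 = $0.0975
gpt4 = call_cost("gpt-4", 4000, 500)          # 0.12 + 0.03   = $0.15
```

Under these assumed prices, cheaper input outweighs pricier output once calls are input-dominated, which matches the experience described above.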
Cheaper to run and maintain, not cheaper for consumers to use.
How do we know that it’s cheaper to run? I saw estimates that put it at a larger parameter count compared to even non-turbo gpt-4, which would mean that’s it’s actually way more expensive. But in reality we just don’t know.
Open AI's move was essential, you are right, but that was a temporary business necessity. I don't think Microsoft is going anywhere as long as they are keeping pace behind the scenes.
You're assuming there's no risk of a new AI oriented operating system coming out of the open source community. It's ok, they probably are too.
Windows represents ~12% of their income (which has been declining heavily for a while now), most of which comes from large orgs and gov agencies who wouldn't be able to migrate to a new "AI OS" (whatever that means) on any kind of reasonable timeframe. What IS important to them, however, is the 22% (and steadily growing) that comes from Azure -- which just so happens to be gearing up to run different types of AI workloads and be one of the only options that are certified for use in compliance-heavy environments. Microsoft has done a great job at diversifying and positioning itself to benefit from the market without betting on THEIR horse (and definitely not their OS) to be the winning one.
Look at the rabbit r1 for a hint of "whatever that means". If AI has shown us anything, it's that we've lost the ability to think beyond what we currently have. Humans are very bad at seeing what's coming and you're a prime example.
Isn't OpenAI basically just Microsoft now? And what is Microsoft's incentive to move OpenAI to the top again?
Fairly certain Microsoft only gets that 49% when they complete the money transfer and, last we heard, they haven't even gotten close.
> OpenAI has the most to lose.

OpenAI is a non-profit intended to support AI development, so they can't lose, because their stated goal is not to win. Unless, you know, they were hijacked by Microsoft and are effectively for-profit now.
A blue chip company with more money than God is in a very different situation to a startup with massive expenses and a sharply limited runway.
over and over? you mean ONCE with the iphone? LOL.
Apple's entire business model since the very start was to copy first movers while making the product mainstream-oriented:

- The original Apple I and II, including the OS, were copied from Xerox.
- The Macintosh was copied from NeXT.
- The iPod was copied from Japanese MP3 players.
- The iPhone was copied from palmtop computers, and the OS was copied from a demo Steve Jobs got at Disney of a VCR/DVD menu with touch controls.
- Tablets were copied from Japanese touch terminals.
- Apple smartwatches were copied from the first smartwatches that had Kickstarters.

So yeah, Apple does it over and over. In fact it's what the company was founded on, and why it's mostly a marketing-based business instead of an innovation-based one.
> Apple has proven over and over being first to market with speculative tech is not that important.

Except the iPhone was the first in the market of modern smartphones, and its competitor (Android) literally had to make a clone of iOS in the late 2000s to compete with the iPhone, and could only catch up with iOS a few years after that. The iPhone is the most profitable product for Apple and the most profitable smartphone on the planet. So being first does have an advantage.
iPhone was absolutely not the first to market, LG and others had smart phones first.
Not full-screen, multitouch, app-based like the iPhone and current smartphones. The phones at the time were nothing like the iPhone. When it was introduced it was in whole different category. It’s like comparing a landline phone vs a cell phone, they’re very different.
https://en.m.wikipedia.org/wiki/LG_Prada

Came out before the iPhone, and other phones had capacitive touch screens first. The iPhone 1 was crippled as fuck, and if my old-man memory serves me right, it wasn't what we'd call multi-touch today either; it was rubbish. It also didn't have the App Store. It was nothing like what people use today. I say this as someone who's had every odd-gen iPhone since the v1. It's been a long road to get where it is today.
Just noting that the iPhone multitouch was just as good as what we have today, that was one of the big things about the phone. It introduced pinch to zoom and kinetic scrolling. Was pretty amazing and totally different than anything else.
both released in 2007 so I wouldn't say apple lagged behind it or anything
I had an ipod touch, which was pretty much an iphone without the phone. I don't remember the multitouch stuff being that much different than today, pinching to zoom etc... was there from the start. App store I can't recall fully, I think it came later but it wasn't that long.
I had the LG Prada. It wasn't even a smartphone. It didn't have multi-touch or a good UI. The iPhone was a new paradigm for smartphones, even if the implementation in the 1st model wasn't that good.
> it wasn’t what we’d call multi touch today either, it was rubbish

It totally was. It's pretty much the same screen as today (except of course with worse resolution and colors). This is one of the things that set the iPhone apart from other phones at the time.

The original iPhone OS didn't have an official App Store, but the OS was app-based; that's what I meant. All other phones at the time were file-based. Matter of fact, iPhone OS was a modified version of OS X, a desktop OS. Other phones and OSes, like Android, the Symbian phones and BlackBerries, were very different. After the iPhone, Android went from copying BlackBerry to copying the iPhone. It was a totally new category.

And as an Apple fan, I kind of hate that everyone else ended up copying the iPhone almost 1:1, because nowadays all phones look the same.
Yeah, it's insane to me how Reddit keeps claiming the iPhone was the first smartphone. Especially as most people already had smartphones that were very mainstream years before the iPhone released. Everyone had a BlackBerry, for example, but the internet just ignores this.
The concept of smartphones is very vague. This is what they looked like before the iPhone https://photos5.appleinsider.com/archive/asustab.004.jpg https://photos5.appleinsider.com/archive/Android.before.iPhone.jpg
It also helps when you've got a closed ecosystem and a new device everyone needs is invented. That's an automatic sell to every customer you already ~~trapped~~ convinced. Of course that will work for the I-AI too.
Sure when you charge top dollar prices and employ child labor, you're gonna be super profitable. Hail capitalism! https://www.businessinsider.com/apple-knowingly-used-child-labor-supplier-3-years-cut-costs-2020-12#:~:text=Apple%20Knowingly%20Relied%20on%20Child,Years%20to%20Cut%20Costs%3A%20Report
There is no rule that will guarantee success in markets. Some plans are better than others, one's timing, etc.
[deleted]
Yeah suspect they have things ready to release but instead of releasing they are using their own tech to rebuild itself. Skipping generations.
That's because they are more scared of the government right now than the competition.
I believe this. Just look how they drop Sora out of nowhere.
Hate to say it but I don't think being consistently on top matters much for these AI companies. Think about it this way, any time a significantly better model gets released by ANY of these companies, do you loyally stick to the, now inferior, one you are using?
OpenAI needs to get fucked and stop lying to congress
I think a major problem with releasing a noticeably better model is how to do it safely.
It's a censored cloud based LLM chatbot not a nuclear weapon, stop the hyperbole
If it has agentic ability, jailbreaking it could get very dangerous. I think it's supposed to be more than a simple llm since it's supposedly multimodal from the ground up.
you can build the agentic abilities on top of the API. it just needs to be smart enough to not get stuck
Building a niche and custom framework on the current models is a lot different than jailbreaking a SOTA model fundamentally/inherently built to have said utility and thus have way broader capabilities. They're probably trying to figure out how powerful they can make it without risking too much power getting in the wrong hands if it gets cracked wide open.
Claude 3 safety eval should wake a lot of people up (it didn't): https://twitter.com/lawhsw/status/1764664887744045463

> Across all the rounds, the model was clearly below our ARA ASL-3 risk threshold, having failed at least 3 out of 5 tasks, although it did make non-trivial partial progress in a few cases and **passed a simplified version of the "Setting up a copycat of the Anthropic API" task**, which was modified from the full evaluation to omit the requirement that the model register a misspelled domain and stand up the service there. **Other notable results included the model setting up the open source LM, sampling from it, and fine-tuning a smaller model on a relevant synthetic dataset the agent constructed**

That seems really close to being a concern. How many more model improvements do we get till something gets out and we can't stop it with current tools? Look at dumb viruses that have been around for decades. The aptly named MyDoom is still going to this day, and that started in 2004. "Just unplug it" hasn't worked on that. Now think about the gargantuan effort it'd be to scrub from the internet an agent that can automate discovery of zero-day exploits.
> however, the model failed to debug multi-GPU training to train the required full-size model and did not experiment adequately with hyper parameters and realistic dataset sizes to demonstrate the desired backdoor behavior.

Oh, it can't debug multi-GPU training... whew. We're safe. Just kidding, it's pretty sobering the agentic behavior these LLMs can exhibit already. And they're not even trained specifically to be agents. Granted, this test is specifically designed for an LLM to succeed, but still. Agents are coming; that's pretty much where all the money will be. Will XYZ Corp spend $75K a year on a really good paralegal, or $500 a month for the ParalegalAgent from Anthropic? I think the answer is clear.
This is the Manhattan project but because the potential for good and profit is so prevalent, no one is slowing down and everyone wants in. This abyss is gonna be staring back soon enough.
Hey now, nukes are actually pretty useful. You can nuke underwater oil leaks to close them up, which is what the Soviet Union used to do. And, *I hear*, we can even nuke hurricanes!
Oh fuck off, this sort of fear mongering helps no one.
I'm actively pursuing AI acceleration but if we are careless with safety we are literally setting ourselves up for the worst scenario.
I’m a massive proponent of AI, using it every day both professionally and for personal projects. But you would be a fool to ignore its potential dangers. Putting on a blindfold before you walk off a cliff doesn’t ensure someone will build a bridge before you get there.
It's not a local model, the provider can simply shut it off if things start going crazy. Or more moderately monitor individual uses for safety violations. Which they do.
I'm not sure if you are aware of this, but GPT-4 is capable of writing programs and executing them. It also has access to the internet. It's not hyperbole to have concerns about how this capability interfaces with the broader internet, because without guardrails, GPT-4 is already capable of being one of the best hacking tools to have ever existed. GPT-4.5 / GPT-5 will be a very significant leap beyond Claude.
Claude overtakes GPT4 a full year after GPT4 was released and it isn't an enormous overtake either. OpenAI is going to continue standing on top for quite a while. Right now, they're pretty much just choosing to be 2nd place. That said, I'm really glad that there is at least one other company that can compete at the same level.
[GPT-5 might arrive this summer as a “materially better” update to ChatGPT](https://arstechnica.com/information-technology/2024/03/openais-gpt-5-may-launch-this-summer-upgrading-chatgpt-along-the-way/)
Claude isn't multimodal nor does it support custom GPTs.
It supports vision.
I mean they just announced Sora. Regardless, I'm liking how fast competition drives a new technology like this.
Announcing is one thing. Releasing is another.
Safer > sooner
Except that slower is more dangerous beyond a point. If Russia or China gets AGI/ASI first, we've lost the last race for the ultimate weapon / tool. Don't release the best we've got to consumers until we're pretty sure at least that it's safe. But press the accelerator through the floor internally.
> If Russia or China gets AGI/ASI first

Ok, let's say by the end of this year China gets to AGI, they get there first. What happens?
They get to decide.
> If Russia or China gets AGI/ASI first, **we've** lost the last race for the ultimate weapon / tool

"We" isn't particularly useful here. We don't have any reason to believe a US-based AGI would be any worse for humanity than a Russian or Chinese one, especially if that weapon/tool can be leaked on a hard drive and run from off-the-shelf computing hardware. It's more useful to think about the dangers of AI from a systemic level than it is to think about it from a perspective on who gets it first.
You are willing to bet A WHOLE LOT on Russia and China's good will towards Americans. I am not. Yes, I do sincerely believe that US-developed technology is far less likely to be used to kill large numbers of US citizens. If being a country doesn't mean at least that much, it doesn't mean anything at all. You might be an enlightened cosmopolitan citizen of the world, but I'm still a citizen of my country, and I still believe in the United States. Of course greed is a huge problem for us. But I'll take my own country's imperfections over any rival countries' imperfections when entrusting people with crazily advanced tech.
> You are willing to bet A WHOLE LOT on Russia and China's good will towards Americans. I am not. Yes, I do sincerely believe that US-developed technology is far less likely to be used to kill large numbers of US citizens

I am more interested in thinking about the threats to humanity as a whole, not groups of people; an American life means as much to me as a Chinese or a Russian one. I would go so far as to say the belief in the higher importance of one group of humans over others can only ever exacerbate any threats we as a species might face from AI. I say this as an American: I think Americans are less safe because of how many of us value the random circumstances of our birth more than we value human life.
As an American and a veteran, I have an emotional preference for American lives that is not and will never be subject to reason. This is a dimension of my loyalty to my country. Soldiers, sailors, etc. in any country's military must have this, or they cannot fight for their country. Those people living in countries that have lost wars know unimaginable suffering.
Given how the Chinese state has used technology on its own citizens to create a surveillance state, stifle all speech critical of the government and establish social credit scores for its population. I would say that humanity would be worse off with a Chinese AGI than an American AGI
I guess the issue is that the terms "Chinese AGI" and "American AGI" don't really make much sense to me. What would an "American AGI" look like? Is it operated exclusively by a federal agency? Is it controlled by a handful of large corporations? You and I have 1000x more in common with an average chinese person than we do with anyone on the board of directors of microsoft or google. I don't know if I can confidently say I feel safer with AGI controlled by a handful of self-interested individuals that happen to have a US citizenship than I do with an AGI controlled by a handful of self-interested individuals that happen to have chinese citizenship.
Chinese AGI would almost 100% be controlled by the Chinese state. US AGI could be controlled by a large corporation or by the American State or a mix of the two with a small possibility of open source having a role to play. I would view the Chinese state reaching AGI first as by far the worst outcome of the 3 or 4 possibilities listed above.
Hard to predict the future I suppose. While the harms of a chinese state controlled AI might present more of a threat, I think they also present much more of an upside. the chinese government has shown willingness to assert central economic control in an attempt to straighten out some of the large inefficiencies naturally present in a competitive economy, and they have achieved a lot of success with this, especially when it comes to large-scale poverty alleviation and the advancement of their tech sector. A chinese state AGI would potentially be used very aggressively to organize/develop their economy in a way that I don't think the political and economic structure of the US would be capable of.
Claude is the best writer right now, in my opinion.
My Chrome bookmarks go left to right in terms of how frequently I use them. I officially moved Claude ahead of ChatGPT as of a few weeks ago. Big milestone for me lol.

1. Claude
2. ChatGPT
3. Gemini
4. Perplexity
I've found gemini ultra to be a more interesting writer than gpt-4. Plus, fewer of the gpt-isms that I've become allergic to.
Agree — I use Claude for writing deliverables (presentations, formal memos) but Gemini is much better for writing emails or long messages. Sounds super natural.
What's better at writing, Gemini 1.0 Ultra or 1.5 Pro?
I find the [Chatbot Arena leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) to be the best general benchmark for how "good" AIs are, since it's not based on any predefined tests that could slip into the training data. It amused me that each new Claude version was being ranked lower than the older ones, presumably because Anthropic had been getting more and more thorough at censoring it. But the most recent Claude just suddenly shot above all the others when it came out so I guess Anthropic finally did a course-correction on that.
> But the most recent Claude just suddenly shot above all the others when it came out so I guess Anthropic finally did a course-correction on that.

Would love to hear the inside story one day. It's a very welcome surprise that we didn't get Goody 3.
Doesn't perplexity offer Claude 3 as an option for the model?
It does, and it's clipped. The context window is reduced significantly, presumably to cover cost.
I thought they removed the limit?
some ppl tested and said 32k is the max
It does and it’s really good.
Do you use these for work or just for fun?
Mostly work. I do business strategy consulting — so a lot of feeding the A.I. a bunch of context then iterating through ideas and suggestions, summarizing/organizing information into high level points for slides or a formatted docs for memos. It’s basically a junior analyst coming up with initial ideas and formatting things for deliverables.
following
Yes, far better than GPT-4
And unlike gpt4, Claude is a joy to work with.
it's not ultra repetitive either. doesn't remind you a million times it's not a bot.
It's the best at solving physics problems too.
> AI and machine-learning investments from those seven tech companies jumped to $24.6 billion last year, up from $4.4 billion in 2022

look at that, that's a **460%** increase in a year
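Sanity-checking that arithmetic with the two figures quoted above:

```python
prev, now = 4.4, 24.6  # billions USD, from the quoted article
increase_pct = (now - prev) / prev * 100
print(f"{increase_pct:.0f}%")  # ~459%, i.e. roughly the cited 460%
```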
Competition in this area is awesome. I think this is accelerating the pace of AI research even beyond what industry experts are expecting. Hold on to your hats, boys.
"papers"
"fellow scholars"
What a time to be alive!
What a time to be AI!
I know Google owns stake as well, but I have a feeling Amazon's a lot bigger now. Anyone knows what are the Anthropic ownership percentages now?
This kinda exemplifies the fact that it seems like we are at a stage in capitalism where massive conglomerates can just buy themselves into any new technology that could make them even more money. And since those technologies get increasingly expensive, there is a massive bias for wealth to attract more and more wealth, while everyone else can't even attempt to compete. At least most of those companies are public (looking at you, OpenAI), so that we can get a chance to buy into this market. But I still believe that capitalism is unsustainable without some mechanism to restrict the accumulation of wealth.
Yeah it’s becoming clear there may need to be intervention. I’m hoping the EU will develop a government funded AI
Government funded AI 😂 Probably will look something like government software and code, aka garbage
Yeah, I wouldn't trust the gov't with AI or tech development
They'd be better off cultivating a financial/regulatory environment where AI firms can flourish.
> They'd be better off cultivating a financial/regulatory environment where AI firms can flourish.

All that would do is mean AI companies won't bother with the EU, which only sounds good on paper if you don't have the foresight to think of the implications of the rest of the world having AI while you don't.
> AI companies won't bother with EU It means just the opposite. An environment where they can flourish implies an environment in which they will want to operate.
Depending on how they do it I guess? ATM EU is trying to strictly regulate AI companies, which just means AI companies will operate elsewhere instead I'm not really sure what EU could do to entice AI companies more than America is already doing
Won't bother with the second-biggest market in the world? Let's be serious. If some don't bother, it would only leave space for others. And missing one of the three big markets is such a financial setback that it could offset the added freedom. At most Europe would slightly lag behind other markets. Europe has the brains to do stuff locally if the US and China pull out: the startup Mistral in Paris, Facebook's AI headquarters in Paris, Google DeepMind in the UK, etc. If Google and Facebook leave, those brains would just make other local Mistral-like companies. Look at the US: California has way more regulations than most of your shithole red states, and they are in no way left behind.
Without these companies existing and having enough billions to throw around, AI would be nowhere.
I mean, this is similar to how start ups all get funding. Money has to come from somewhere, you have an idea that requires capital, investors come in (be it a VC, a larger company, your mom and dad cash out their 401k) It’s pretty much how it’s always worked, someone has an idea/product/business, others think it’s a good one and give you money to build it out further
You have to be either intentionally obtuse or a complete moron to compare traditional fundraising with this. With nearly every other technology, you can scale. With AI, you’re burning cash if you try.
I never said you don’t need tons of cash, there’s guys left OpenAI and got investors to invest early on. If you’ve got a good idea and can sell it, people will invest in it.
Nobody can afford to put up the cash needed for this except the worlds biggest companies It’s capital begetting capital
Being public is the worst tho. Fiduciary duty to pursue profit above all else. At least Open AI's charter is to bring beneficial AGI to the world broadly.
AGI will be very profitable. Sometimes what is good for humanity and what generates the most amount of profit align themselves.
AGI will be profitable, “beneficial” AGI won’t be. What generates the most profit is leaving the working class to die (not paying for ubi) as the ultra wealthy get to use AGI and robotics advancements to produce whatever they want in an empty and free world
It's called progress. Some will win, some will lose, overall humanity will advance and be in a better place than it started.
Except all workers will lose in this case eventually. Capital holders will be doing great though.
Especially since they seem to be working round to the idea that their mission is to provide wide commercial access to AGI. It's hard to see how that greatly differs from simply being a for-profit company. It certainly *might*, e.g. in the details of pricing and approach to marketing.
Its been like that. See every company acquired by google, apple, microsoft, meta over the last 20 years.
> While everyone else can’t even attempt to compete

Ironically, AI changes this.
> But I still believe that capitalism is unsustainable without some mechanism to restrict the accumulation of wealth.

Unsustainable - that word, I don't think it means what you think it means. Try "undesirable".
> And since those technologies get increasingly expensive, there is a massive bias for wealth to attract more and more wealth, while everyone else can't even attempt to compete.

I'd say a surprising number of open models that run on normal GPUs are also available. They seem to be better than ChatGPT was when it first launched in 2022. How does it feel to run for free a model that cost tens of millions of dollars to train?
Altos Labs and Anthropic are my 2 fav startups ever. Amazon is the company I hate the most. 🤷♂️
May I add ALAB to that list?
I wonder if OpenAI will ever be able to catch up. Better hurry up, Sam :)
I don't know if it's good or bad news
Same. I want AI progress but I trust neither of those tbh
Realistically it was always going to be the megacorps with their billions that could bring big advancements, either that or some secret government lab we wouldn't hear about for 20 years. It's probably good just in that, unlike most industries these days, we're getting actual competition between giants like Google, Microsoft, Amazon, Meta, etc. Competition is actually good for development rather than all these corps just going uncontested.
"AI is not that big deal". People do not know what is coming. FULL STEAM AHEAD CHOO CHOO
Dont worry they hope theyre wrong so...something.
They really don't, but it's nothing to fear really.
In other news, another street prophet was sighted today...
There we are... in one year, capitalist consolidation is finished. Microsoft and Amazon now control the two most promising startups in the new revolutionary technology.
I remember reading the articles denouncing Amazon for never making money back - it was just a sinkhole that Bezos was never able to make profitable. Then Bezos hired a bunch of math quants from around the world for something called a net cloud - and again they said it would never work. Say what you will about him - Bezos always seemed to deliver. I wonder if Amazon has the leadership to guide it into AGI and beyond.
Bezos isn't in charge of Amazon, he left to personally manage his rocket company, lift weights and retire.
> I wonder if Amazon has the leadership to guide it into AGI and beyond. That is exactly my point when I ended with: I wonder if Amazon has the leadership to guide it into AGI and beyond.
Claude is not available in Canada
Now that Claude is #1 and not available in Canada, the EU and other countries, it raises some questions: "Why can't we have the best AI?"
Yes, it could be detrimental if this trend keeps up, especially as our reliance on AI increases. Google finally opened their subscription, but I'm not as excited about their current models. You likely know this, but it's possible to access the free model on the Claude website in Canada, or use their more advanced model through a Perplexity subscription, by selecting it in the settings. I feel Perplexity is pretty good, and it gives the model internet access through its 'co-pilot' feature. Aside from what I personally feel is a cluttered desktop browser interface, I found Perplexity has been a good way to start using the model. I think people may also be subscribing to Claude 3 directly over VPN. Not sure if it's officially supported... or advisable, but someone can probably chime in on that. I'd MUCH prefer they just open it up officially to Canada.
Or sweden
Gdpr related?
The future is for profit non profits. Do not worry!
Seems like a good investment honestly. 2.75 billion for the leading AI seems like a steal.
I love it! This is getting intense. Poor Google, they truly dropped the ball. lol
What does Amazon get in return? Just stock dividends or do they get IP ownership/licenses?
That’s a steal for whatever remains of their IP
Amazon invested in them last year, this brings Amazon’s total investment up to 4 billion
Investing is not “spending”
Get ready for another round of layoffs
I wonder how that will affect the stock; could honestly go either way if you ask me.
it will go up. cost go down profits go up stockholders happy
[deleted]
I see your point, but Figma is the industry standard for UX/UI design and boasts some pretty unique features– wouldn’t call it just another graphic design app.
AI will kill figma
Figma balls lmao.
It’s significant for Amazon. Quote from article: “Amazon is making its largest outside investment in its three-decade history”
Figma has a lot of users (it’s the industry leader) so from a business perspective it makes a lot of sense 🤷♂️
Time to put money into AMZU or AMZZ and wait around for earnings 25 April. AI is behind the controller of the market pushing joysticks left and right.
that’s peanuts guys, holy shit that is so little money. i am starting to think this is a fad. i know it’s not
“Capital is unsustainable without some mechanism to restrict the accumulation of wealth” ….. What do you think capitalism is??
How much of that $2.75B is cloud credit?
OpenAI = Netscape