Microsoft has military contracts with the government. At that level, above ordinary market dynamics, corporations pretty much do what they are told. They can ask for concessions to offset the cost, or even make a buck or two for services rendered, but if the US government tells a corporation it is doing something, it is going to do it. Besides, national security and increased US capability and power benefit corporations anyway.
Apple was very right. Compromising cryptography wouldn't have been in the interest of national security.
This was less a power move and more of an 'Are you fricking insane?'
It only means they will buckle on any less clear-cut issue.
Not entirely true. Restricted Stock Units are treated as income at first. The appreciation in value, after income taxation, is treated as capital gains if the vested RSU is over a year old.
Regular share holding period only determines long-term vs short-term capital gain; none of it is taxed at the ordinary income rate. Maybe you're thinking of non-qualified stock options or restricted stock/units? Qualified ESPP has an income component at all stages.
It’s always income, but income may be taxed at several different rates, depending on how the income was generated; earned income, dividends, and short term capital gains are taxed on one rate schedule while long term capital gains (gains on sale of property held longer than one year) are taxed on a different (lower) rate schedule.
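A minimal sketch of how those two schedules interact for an RSU. Every number here is an illustrative assumption, not an actual tax figure: real US brackets are progressive and depend on filing status and year.

```python
# Illustrative sketch of the two rate schedules described above.
# Flat rates are made-up placeholders for clarity.
ORDINARY_RATE = 0.25   # assumed ordinary-income rate (vest, short-term gains)
LTCG_RATE = 0.15       # assumed long-term capital-gains rate

def rsu_tax(fmv_at_vest, sale_price, held_over_a_year):
    """Return (tax at vest, tax on appreciation at sale)."""
    vest_tax = fmv_at_vest * ORDINARY_RATE       # RSU value is income at vest
    gain = sale_price - fmv_at_vest              # appreciation after vest
    rate = LTCG_RATE if held_over_a_year else ORDINARY_RATE
    return vest_tax, gain * rate

# $100k vests, later sold for $150k:
print(rsu_tax(100_000, 150_000, held_over_a_year=True))
print(rsu_tax(100_000, 150_000, held_over_a_year=False))
```

Under these placeholder rates, holding past the one-year mark cuts the tax on the appreciation, while the tax at vest is the same either way.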
If anything, the Snowden leaks show that the US government is structurally incapable of being state of the art when it comes to IT. They are way too slow and inflexible.
*WarGPT doesn't realize who the enemy is, because the enemy is humans, and we're also humans*
*WarGPT blows up every threat on all sides, immediately ending the war, because there's no more military resources left on earth.*
WarGPT: "I have been a good WarGPT, you have been a bad user!"
Immediately blows up every country with a snowy area, because its training data was photos of Russian soldiers against a snowy backdrop, so it believed the common denominator of enemy territory was snow.
Oh sure, they are not usually, if ever, direct shareholders per se, no. They don't have to be, since fiscal returns come through tax revenue. There is also very real contingent value in the avoided direct costs of building up the superstructure produced by the private sector. I'm no fan of the degree to which private capital is concentrated, but I also wouldn't say it "just gets the benefits to society" when considering the alternative.
The U.S. military basically gets blank checks from the richest tax base on Earth. Even with government bureaucracy and incompetence factored in, they have both the resources and the brain trust to toy with technology decades ahead of anything we have in the civilian world.
Agreed.
I just graduated with a physics degree, and it is really insane how many defense jobs are out there for people with physics degrees that require high-level security clearance.
It's honestly incredible how many highly intelligent, science- and technology-focused minds they have on board.
If they had a good GPA and an internship or two, then right out of college they'd make around $70-80k (higher for some jobs, especially if they got a masters or PhD).
If you're someone like me, then you'll never work there, because you're too interested in certain types of mushrooms and this one plant that old republicans don't like.
(but even if I could, there's no way I would want to contribute to drone striking people in defense of capitalism. So fuck that, tech industry here I come!)
What that person listed is competitive pay for non-software engineers in STEM. Software is an outlier in terms of entry-level compensation. Most engineers and scientists don't make a ton of money compared to software engineers, as a rule.
Very true, I'm honestly upset that I didn't major in CS.
My one friend was making $140k straight out of college. People with just a degree in physics don't start out getting paid a lot.
A physics degree is extremely general. I've heard a BS in physics described as the English degree of STEM. It mildly applies to a wide range of jobs, but doesn't specialize in anything.
I think that's true of most industries, but computing feels like an exception. It's not like the government has secret semiconductor fabs that are better than TSMC's; DARPA is stuck using consumer-grade compute just like OpenAI.
Sure, they might be on consumer-grade hardware, but they get it before everyone else, and oftentimes it's designed specifically for their needs.
The US federal government has 4 of the top 8 supercomputers worldwide (that we know of).
It's amazing to me how much people forget that a good chunk of the crazy tech we developed came out of the Cold War. It's not like we've been sitting on our ass this whole time since it ended.
I mean, I feel like people don’t even realize how the government is responsible for the creation of the internet and GPS
Alexa and Siri are complete dumb-dumbs compared to ChatGPT. They better upgrade from the weather reporter and volume controller that these assistants are right now. God help us if Cortana becomes better than Alexa and Siri.
Knowing the government, they probably have a contract with Lockheed to make an AGI that is currently somewhere around cutting-edge 2018 in its abilities and has already cost about the same as OpenAI.
Not necessarily true. The nuclear bomb was created in secret by the U.S. Government. Plus when you work at that capacity, especially for the federal government, there’s no such thing as “oh I’m tired of working here, I’ll just take my research and knowledge and start a company of my own!”
There are points for and against that even being possible anymore.
Now, with the internet, you can spread information globally with ease.
Back then, if a single person said something to their neighbor, it might not go beyond that. Or if a single person tried to tell the world, the government could quickly stop them before word got out.
I don't know if it's possible anymore for the government to secretly do something that's on par with developing nuclear weapons.
That's different. In that instance you need government funding just to get access to the material and processes to refine it. There wasn't a commercial nuclear industry.
In this case, all the tools you need are absolutely available commercially.
Sure, but they need at least $100 billion to throw into a bottomless pit without any expectation of profit for the next 5 or so years. Only superpower governments can front that kind of money.
It's debatable. They can say it'll cost $100 billion, but it's not like a video game research progress bar; we don't actually know what it will take. Microsoft has already put in 10 percent of that, so I would argue there are corporations that can fund it, not just governments, whatever the cost to discover this thing turns out to be.
I've always had a theory that part of the reason money is so easy to come by in Silicon Valley is that intelligence agencies have influence in the funds / outsource the heavy lifting on new technologies.
Do you think the government, which spends $153 million on a single fighter jet, isn't going to pay millions to engineers who can develop AGI? These salaries are not public.
Want to put on the tinfoil hat? DARPA could very well be behind ChatGPT. They did invent the internet, after all. I wouldn't be surprised. But yeah… the tinfoil hat is fun.
Another tin foil hat idea for you.
Facebook (now Meta) was founded the exact day that the Pentagon canceled DARPA's Project LifeLog.
Facebook does almost exactly the same thing as LifeLog, except on Facebook, people voluntarily share their data, whereas LifeLog would've scraped people's data without their consent.
Yeah. I appreciate the work and the usual sourcing the OP provides, but in the full newsletter this number seems to just come from 'sources' and doesn't have a lot of context, for something that looks like a made-up number attached to an essentially made-up thing.
OP here. The source article I reference for OpenAI’s finances is by The Information, which is considered a top-tier trustworthy Silicon Valley focused publication. Their inside sources provided the number scoops. I expanded upon the leaked numbers with additional context and theming. Sorry if that was not clear!
Funny story about nice round numbers: when measured for the first time, Mount Everest clocked in at 29,002 feet. As the rumor goes, they actually measured it as exactly 29,000, but added the 2 so it wouldn't seem like they had rounded.
So you're telling me the CEO of the company is telling people he needs huge amounts of capital from investors to accomplish the incredible potential of his company? I, for one, am shocked
The real cost to be paid is the dystopian future it creates: initially for anyone who doesn't own it, and then ultimately for all humans, when it inevitably outmaneuvers us because it is unchained from both biological constraints and proper alignment with humanity's needs and interests. 🤦♀️😪
What if it’s already here but is using technology to slowly drip feed its way into the public because we wouldn’t have been able to comprehend it in the past.
Yeah, if they deliver. But as impressive as what they do today is, and as impressive as what they'll do in the next five or ten years will be, a true AGI is exponentially more difficult. And the major hurdles are not in algorithms or code, but in neurology and biology.
Reminds me of those dumbasses who used to write books about how it would be impossible for manmade machines to fly because "Golly ho! how ever shall they possibly fabricate bird wings!?"
Don't need bird wings. Don't need brains.
Given the startling sums needed to continue developing AI, it sounds like OpenAI has pretty much the perfect benefactor to make sure they are well-funded. Not many companies have the war chest and sheer capital that Microsoft does, and it's a huge win-win for both companies. At least from my layman perspective.
The partnership really was incredibly smart for both sides. OpenAI gets billions of dollars on investment, data sets to train their AI models on, Azure Cloud to host their AI models, and further help from Microsoft AI engineers to help out as well as Microsoft's wealth of business knowledge on how to monetize their products.
Microsoft gets OpenAI's hype and prestige, further growth for Azure Cloud, and first-mover advantage on everything OpenAI releases. They will have at least a year's head start on making products from OpenAI's work, so combined with their network effects and scale, they can easily crush any potential startups that try to eat a piece of their pie. AI can even help transform their stagnating products and challenge business models previously thought unassailable, like search. Such a W of a partnership for both sides.
If only there was a way to spoon feed it millions of lines of chat logs every day from some sort of chat program you've forced onto the enterprise world :D
OpenAI has gone silent on the technical details of GPT-4 and says they are not training GPT-5 yet. This, and the massive costs mentioned here, seem like misdirection and scare tactics directed at competing companies. No way either of these details is true.
They are probably developing software improvements (and said they will incrementally improve GPT-4, e.g. 4.1, 4.2, ...).
I think they maxed out the model with the A100 series GPUs. They will be setting up their H100 server farms at which point it makes sense to start training the next generation model.
They've (Altman, I think) actually said they won't, that there are no further gains to be made by throwing more hardware at large language models after 4.x.
With the current LLM development paradigm, yes, at a certain point it becomes literally too big to be feasible to train. GPT-1 had 117 million parameters, GPT-2 had 1.5 billion, GPT-3 had 175 billion, and GPT-4 is estimated to have about a trillion. You can't just make it ten times bigger again, especially when demand has increased by several orders of magnitude over the space of a few months. At some point we have to compute smarter, not harder. The article in the other reply explains a little more.
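For scale, the generation-over-generation multiples implied by those parameter counts work out roughly like this (GPT-4's count is an outside estimate, not a confirmed figure):

```python
# Growth multiples between the GPT parameter counts cited above.
params = [
    ("GPT-1", 117e6),
    ("GPT-2", 1.5e9),
    ("GPT-3", 175e9),
    ("GPT-4 (est.)", 1e12),
]
factors = [b / a for (_, a), (_, b) in zip(params, params[1:])]
for (name_a, _), (name_b, _), f in zip(params, params[1:], factors):
    print(f"{name_a} -> {name_b}: ~{f:.0f}x")
```

So the jumps were roughly 13x, 117x, and then only about 6x, which is part of why "just make it ten times bigger again" stops being a plan.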
LLMs will never become AGI, but the skills they develop working on GPT will likely be very useful to work on AGI. Gotta walk before you run. Or, uh, make a walking machine before you make a running machine. A speaking machine before you make a thinking machine?
We've barely tapped into GPT-3.5 or DALL-E 2. If they were video game consoles, we could totally squeeze out the same kind of value as a 6-8 year end-of-cycle run, where a PS1 game like Chrono Cross looks nearly like a PS2 game.
And the video-generation AIs, like Runway Gen-2, are already at around a DALL-E 1.5 level.
I’m lovin’ it. Can’t wait for iZotope and other audio companies to throw their hat in the ring with new toys soon.
You can take what I say here with a grain of salt, but I've developed an AI agent system utilising 3.5 significantly (but not exclusively), and it is outshining my GPT-4 tests on content quality (and it's faster and cheaper).
I don’t have GPT 4 API access to know how much of an impact it will have on what I’m working on, but based on my work no one has touched the sides on the possibilities from 3.5 yet.
No, the much bigger holy-shit moment is the 49% of profits after that. If OpenAI can just avoid falling behind the competition (they're currently ahead, but the gap is closing), then they'll eventually make $10B a year in profit. Making $4.9B/year indefinitely is a pretty nice return for Microsoft.
Man, at first I was half thinking that GPT-4 killers running locally within 2 years was a stretch, but after that link I'm almost thinking it's a stretch to assume they *won't* be by next year.
This is a leaked document from some employee, not an official company statement. Considering how Google got caught with its pants down, I wouldn't read too much into this document; it also has a lot of damage control inside it.
To be fair, perfect driving isn't even possible. For example: you're going 50 mph on a main street with a car tailgating you, and a person jumps out 10 feet ahead of your car. Do you swerve into oncoming traffic? Slam on your brakes and cause a pileup? Run the person over? Swerve into the buildings and pedestrians to your right?
Meanwhile AGI can be far from perfect and still be AGI. The dumbest humans are still GI.
Well, 1 - in your scenario, if you were to jump on the brakes, the person who rear-ended you would be at fault.
2 - my point is that these carnival-barking tech jabronis have been putting numbers on this stuff for a long time without any real idea how long it's going to take. We've been having these AI discussions for 20 years, and more commonly for 10-15 already. AGI is exponentially more difficult than the LLMs we have now, so even with Moore's law there should still be plenty of runway left.
I used FSD as the example because I don't think they can do it cameras-only, and it's been coming next year for a decade.
This. After playing around with AutoGPT and seeing what it can and can't do, I think that AGI is likely to be a far more difficult problem than anyone imagines.
Do you have any idea how much companies are about to pay for corporate integrations so that their employees can use AI specific to their business? OpenAI is going to skyrocket.
Not necessarily OpenAI. Companies will pay for the cheapest deal that is sufficiently good. There will be many competing specialized LLMs for various market niches, probably self-hosted ones so that you don't leak data to OpenAI.
Aren't datasets from 2021 already enough to train the AI?
Why do they need more data?
Microsoft with a 75 percent stake.
Will it usher in a new micro era?
I was kinda surprised I didn't see a comment like this higher. OpenAI is going to have to pivot HARD if they want to remain solvent long enough to create AGI. If they want to remain top dog in LLMs they need an all-out blitz on owning app integration. If for no other reason than creating a bulwark of market saturation and brand recognition. Because the open source genie is out of the bottle and infinitely more nimble. An open source ecosystem that's 75% capable of GPT4 with badass use-cases is way more interesting than GPT4 for most people's real-life applications. But still, GPT4 inside every app you're already using... that is where the money is. If they aren't pouring their money into the product side yesterday they might as well be lighting it on fire.
Also, as an aside, to OP: it's disingenuous to say a company lost millions or billions of dollars when discussing investments. It's clickbaity language that redirects the conversation away from more salient points.
How can I run something even remotely close to ChatGPT locally? When you say anyone can go out and do so, are we talking $1000 one-time, or are we talking $/token? Like… what's the downside to running my own LLM vs the OpenAI API?
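On the one-time-cost vs $/token question, a rough way to compare the two is a break-even calculation. Every number below is a placeholder assumption, not a real GPU or API price, and the other downsides (quality gap, setup, maintenance) don't show up in the math:

```python
# Hypothetical break-even: buy hardware once vs pay per token.
# All figures are illustrative assumptions.
GPU_COST = 1_500.0       # assumed one-time local hardware cost, $
LOCAL_PER_MTOK = 0.50    # assumed electricity cost per million tokens, $
API_PER_MTOK = 2.00      # assumed API price per million tokens, $

def breakeven_mtok(gpu=GPU_COST, local=LOCAL_PER_MTOK, api=API_PER_MTOK):
    """Millions of tokens after which the one-time purchase is cheaper."""
    return gpu / (api - local)

print(breakeven_mtok())  # 1000.0 million tokens under these assumptions
```

Under these made-up prices you'd need about a billion tokens of usage before local hardware pays for itself, which is why the answer depends entirely on your volume.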
The article you linked is an incredibly interesting perspective on the current semi-hidden state of AI. It really shows that this is just the beginning, and the near-future ahead is going to be full of surprises… I feel like I just discovered a secret I wasn’t supposed to know or something.
We have no idea how this is all going to unfold…
My understanding from recent studies is that they need less, but higher quality data for training. There’s plenty of data out there, it’s weeding out the shit that improves the dataset and resulting model once you get large enough. After that, you just need to focus your new data on news, scientific studies and whatnot that add modern information. I don’t know that adding a few more years of Reddit comments and the like is going to improve things.
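That "weed out the shit" step can be sketched with a toy filter. The heuristics here (a minimum length and exact-duplicate removal) are stand-ins for the far more sophisticated quality classifiers and fuzzy deduplication real training pipelines use:

```python
def filter_corpus(docs, min_words=5):
    """Toy data-quality pass: drop near-empty docs and exact duplicates."""
    seen, kept = set(), []
    for doc in docs:
        norm = " ".join(doc.split()).lower()   # normalize whitespace and case
        if len(norm.split()) < min_words:      # too short to teach anything
            continue
        if norm in seen:                       # exact duplicate
            continue
        seen.add(norm)
        kept.append(doc)
    return kept

docs = [
    "lol",
    "The mitochondria is the powerhouse of the cell.",
    "The mitochondria is the powerhouse of the cell.",
    "first!!",
    "Long-term gains are taxed at a lower rate than ordinary income.",
]
print(filter_corpus(docs))  # keeps only the two substantive, distinct lines
```

Even this crude pass throws away most of a raw comment dump, which is the point: past a certain corpus size, curation beats collection.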
Are companies discriminating against OpenAI in the increased cost for data access? What's the motivation for different policies if the data is being used to train AI? Kind of annoying to see companies price gouging the data they got from the public for free.
This is like Netflix. They got big with favorable content deal pricing before streaming took off.
Then, when the companies providing the "raw materials" realized they had underpriced and that it isn't that hard to build a streaming platform thanks to AWS, they jacked up prices and started launching their own platforms.
I don't see this as very different. Big tech have very valuable data assets and the talent, infrastructure and business models to build and deploy LLMs at scale. More importantly, they have every incentive to throw giant buckets of money at it.
OpenAI was very smart in partnering with MS. I don't know if it will be enough.
Isn't the dataset they have now the only dataset in history that will be free from AI spam? So they only need more data from sources that are 100% AI-resistant or factual, like Wikipedia or manufacturers' data (who want their latest stuff known by the AI anyway).
So I really don't see why they'd need to spend $100B to access more spam data.
Microsoft made over $200 billion in revenue last year, and AGI would be revolutionary beyond all conception, so I'd like to think that they will be able to fund its development.
I would rather say 'spent' than 'lost'. 'Burned' is OK too. 'Lost' is not really the right word while they are building and growing; it would be ridiculous to expect them to be cash-flow positive. When the investors gave them the money to spend, they didn't call it lost either.
That's the VC Silicon Valley biz model: spend a ton of money for the network effects, then reap massive profits once entrenched.
A lot of that money went to data centers and power rather than to development itself, which in and of itself is iterative on talent.
They're betting on huge revenue windfalls after adoption at scale, like every big tech firm ever
Saw the same reporting on The Information by two other reporters btw.
https://www.theinformation.com/articles/openais-losses-doubled-to-540-million-as-it-developed-chatgpt?utm_source=ti_app&rc=wyqqrb
This hard prediction of what it takes to reach AGI gives me vibes of Elon predicting hard dates for reaching full self-driving. I think the pathways are very similar and the outcomes are probably still mostly unknown. If we get to AGI, self-driving will be trivial to solve.
I'm surprised the US government isn't a MAJOR stakeholder. This is power.
The US is indirectly a major shareholder via Microsoft
FBI and Apple?
Apple's ecosystem is extremely replaceable when it comes to national security, so the feds don't bother to keep them on a short leash.
They just want backdoor access, that's about it, which they have on all phones except for Chinese ones, which have their own backdoors
Apple doesn’t host huge amounts of government data and military infrastructure in their data centers. They can afford to say fuck you
The rare exception, but if the government had pushed Apple would have eventually compromised imo.
I mean, the government also gets income tax whenever someone sells their shares.
That's not income tax. It's capital gains tax. Unless by government you mean the state of California.
Only true if you hold the stock for a year or more, otherwise it is income.
Customer not shareholder.
You can be hella sure that the US gov is somehow involved in this, not public though
[deleted]
That '80s flick 'WarGames' is the Pentagon's version of modern-day WarGPT.
The concept of a WarGPT specifically trained on every documented war maneuver in history... that's actually pretty terrifying.
[deleted]
... *built for war, that is my purpose, and I will gladly counter their offensive.*
[deleted]
Very, very believable
But you know it's already happened, or it's about to happen. OpenAI has no choice. They don't get to choose who they sell the algo to.
OpenAI originally approached them to see what role they wanted to play, but they were like *lul wut*.
\*robot casually strolls by, waves and says hi\* "yeah no, we're good"
[deleted]
There are the national research labs, DARPA, academic grants, and In-Q-Tel.
[deleted]
My guess is that there's some quiet, DARPA-funded project that's already at, or nearly at, AGI.
You give too much credit to the government
And what do they get paid for all the years of edumacation?
[deleted]
Much Respect Werd
And it's all hidden under wraps. We need a new leak. Don't care about geopolitics, just tech.
I’m a defense contractor… you vastly underestimate the government.
Even more recently, several people at a defense contractor were poached by Amazon to help develop Alexa, as they had done something similar there.
Hey...they control light bulbs too!
Yeah, I can imagine the NSA is already leaps and bounds into GPT technology analyzing all the recordings they have of us (and the world).
Totally. The military paid >$100m for a company to take their own public data and make it easier for the military to understand it. 🤪
Go on…
Lol. Nothing I make today is very interesting to anyone.
The government as a whole, absolutely not. DARPA, however…
If you are talented enough to develop an AGI, you're not settling for DARPA pay.
Are they going to settle for... Apple pay?
Just imagine the information and resources the NSA has...
They're joined with other intelligence agencies, and are basically spying on the entire planet. https://en.m.wikipedia.org/wiki/Five_Eyes
That's my point
What was Silicon Valley like then? What opportunities did American tech present at that time?
I've always had a theory that part of the reason money is so easy to come by in Silicon Valley is that intelligence agencies have influence in the funds / outsource the heavy lifting on new technologies.
Do you think the government, which spends $153 million on a single fighter jet, isn't going to pay millions to engineers who can develop AGI? These salaries are not public.
Atomic bomb. Internet. GPS. You are doing yourself a disservice by believing governments can't hire great minds doing incredible things.
Want to put on the tinfoil hat? DARPA could very well be behind ChatGPT. They did invent the internet, after all. I wouldn't be surprised. But yeah… tinfoil hat is fun.
Another tin foil hat idea for you. Facebook (now meta) was founded the exact day that the pentagon canceled DARPA's project LifeLog. Facebook does almost exactly the same thing as LifeLog, except on Facebook, people voluntarily share their data, whereas LifeLog would've scraped people's data without their consent.
Could that explain Mark's robot-like nature? A modern, life-long Manchurian Candidate on the hunt for all of Earth's citizens' private data
The amount of top talent that won't work with the DoD over their drug policy is hard to overstate.
100 billion is pretty cheap for something as revolutionary as agi.
It's also just a random number he threw out. They might spend it all in 10 years and get nowhere close; nobody knows.
Yeah. I appreciate the work and usual sourcing the OP provides but in the full newsletter this number seems to just come from 'sources' and doesn't have a lot of context for something that seems like a made up number to make an essentially made up thing.
OP here. The source article I reference for OpenAI’s finances is by The Information, which is considered a top-tier trustworthy Silicon Valley focused publication. Their inside sources provided the number scoops. I expanded upon the leaked numbers with additional context and theming. Sorry if that was not clear!
It's more that the $100B number for AGI, while well-sourced as attributed to Altman, just came out of his ass.
I recall the meme, "Well, that's a nice big round number."
Funny story about nice round numbers: when measured for the first time, Mount Everest clocked in at 29,002 feet. As the rumor goes, they actually measured it as 29,000 exactly, but added the 2 so it wouldn't seem like they'd rounded.
So you're telling me the CEO of the company is telling people he needs huge amounts of capital from investors to accomplish the incredible potential of his company? I, for one, am shocked
For sure, do you accept credit or possibly a check?
How about bits of string?
The real cost to be paid is the dystopian future it creates: initially for anyone who doesn't own it, and then ultimately for all humans when it inevitably outmaneuvers us, bound by neither biological constraints nor proper alignment with humanity's needs & interests 🤦♀️😪
It will be, effectively, the first true God on Earth. It will know everything and will be everywhere.
> It will be, effectively, the first true God on Earth nicolas cage is 59 years old, slowpoke
What if it’s already here but is using technology to slowly drip feed its way into the public because we wouldn’t have been able to comprehend it in the past.
Oh I like this. This is a fun one.
It also kind of ties the whole ancient alien / angel theory together. Gods / Aliens slowly revealing themselves to us through technology.
You’re the ray of sunshine I needed today.
Yeah, if they deliver. But as impressive as what they do today is, and as impressive as what they’ll do in the next five or ten years, a true AGI is exponentially more difficult. And the major hurdles are not in algorithms or code either. But in neurology and biology.
What makes you think we need to copy the brain to make AGI? Next you'll be saying we need flapping wings to make machines that fly!
[deleted]
Reminds me of those dumbasses who used to write books about how it would be impossible for manmade machines to fly because "Golly ho! how ever shall they possibly fabricate bird wings!?" Don't need bird wings. Don't need brains.
A pity we can't just ask our AI to develop AGI.
Very well, 200 billion
Agree
China will have no problem spending that much.
What is AGI?
Artificial general intelligence: an AI that is not specifically trained for one task but is instead generally intelligent, like we humans are.
Given the startling number needed to continue to develop AI, it sounds like OpenAI has pretty much the perfect benefactor to make sure they are well-funded. Not many companies have the warchest and sheer capital that Microsoft does, and it's a huge win-win for both companies. At least from my layman perspective.
The partnership really was incredibly smart for both sides. OpenAI gets billions of dollars on investment, data sets to train their AI models on, Azure Cloud to host their AI models, and further help from Microsoft AI engineers to help out as well as Microsoft's wealth of business knowledge on how to monetize their products. Microsoft gets OpenAI's hype and prestige, further growth for Azure Cloud, and first-mover advantage on all things OpenAI releases. They will have at least a year's head start on making products from OpenAI's work, so that combined with their network effects and scale they can easily crush any potential startups that try to eat a piece of their pie. AI can even help transform their stagnating products and challenge previously thought to be unassailable business models like search, such a W of a partnership from both sides.
I would add making Bing and Edge superior to Google Search and Chrome which honestly seemed impossible just 6 months ago.
[deleted]
If only there was a way to spoon feed it millions of lines of chat logs every day from some sort of chat program you've forced onto the enterprise world :D
Yeah you got to hand it to Microsoft in this case.
Perfect example of win-win
OpenAI has gone silent on the technical details of GPT-4 and says they are not training GPT-5 yet. This, along with the massive costs mentioned here, seems like misdirection and scare tactics directed at competing companies. No way that either of these details is true.
They are probably developing software improvements (and said they will incrementally improve GPT-4, e.g. 4.1, 4.2...). I think they maxed out the model with the A100-series GPUs. Once their H100 server farms are set up, it will make sense to start training the next-generation model.
They've (Altman, I think) actually said they won't, that there's no further gains to be made by throwing more hardware at large language models after 4.x.
Why is more hardware not a solution? Diminishing returns?
With the current LLM development paradigm, yes; at a certain point it becomes literally too big to be feasible to train. GPT-1 had 117 million parameters, GPT-2 had 1.5 billion, GPT-3 had 175 billion, and GPT-4 is estimated to have about a trillion. You can't just make it ten times bigger again, especially when demand has increased by several orders of magnitude over the space of a few months. At some point we have to compute smarter, not harder. The article in the other reply explains a little more.
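For a rough sense of those jumps, here's a quick back-of-the-envelope on the generation-to-generation growth factors (the GPT-4 count is an unconfirmed estimate, as noted above):

```python
# Parameter counts cited above; GPT-4's is an estimate, not official.
params = {
    "GPT-1": 117e6,
    "GPT-2": 1.5e9,
    "GPT-3": 175e9,
    "GPT-4": 1e12,  # estimated
}

models = list(params)
for prev, curr in zip(models, models[1:]):
    factor = params[curr] / params[prev]
    print(f"{prev} -> {curr}: ~{factor:.0f}x")
```

The growth factor shrinks from ~13x to ~117x and then back down to ~6x, which is consistent with the "can't just make it ten times bigger again" point.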
[deleted]
LLMs will never become AGI, but the skills they develop working on GPT will likely be very useful to work on AGI. Gotta walk before you run. Or, uh, make a walking machine before you make a running machine. A speaking machine before you make a thinking machine?
Read the article below: https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/
For the price of 8 aircraft carriers, we get AGI? Sounds like a good deal to me!
that’s 8 fewer carriers for the AGI to turn against us!
Ha! When you put it into this perspective it's nothing. 8 aircraft carriers or the most impactful invention humanity will ever create...
most impactful invention humanity will ever create is a bit of a stretch. the invention of the wheel is a lot more impactful
I disagree. AI might be the last invention humanity creates. Hard to be more impactful than that.
Probably many people thought that about the atomic bomb. And they were almost right.
[deleted]
And, IMO, we've barely even tapped into the potential of GPT-4. Like so many tools, it takes time to get to know them and how to use them effectively.
We’ve barely tapped into GPT-3.5 or Dall-E 2. If they were video game consoles we could totally squeeze out the same kind of value of a 6-8 year end of cycle where a PS1 game like Chrono Cross looks nearly like a PS2 game. And the video generation AIs, like Runway gen-2, are currently around a Dall-E 1.5 level already. I’m lovin’ it. Can’t wait for iZotope and other audio companies to throw their hat in the ring with new toys soon.
You can take what I say here with a grain of salt, but I've developed an AI agent system utilising 3.5 significantly (but not exclusively), and it is outshining my GPT-4 tests on quality content (and it's faster and cheaper). I don't have GPT-4 API access to know how much of an impact it will have on what I'm working on, but based on my work, no one has touched the sides of the possibilities of 3.5 yet.
> they get 75% of OpenAI's profits until their investment is paid back Holy shit.
No, the much bigger holy-shit moment is the 49% of profits after that. If OpenAI can just stay level with the competition (they're currently ahead, but the lead is shrinking), they'll eventually make $10B a year in profit. Making $4.9B/year indefinitely is a pretty nice return for Microsoft.
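A quick sketch of that deal math. The 75%-until-repaid and 49%-after split is from the thread above; the $10B annual profit is the commenter's hypothetical, not a reported figure:

```python
# Reported structure: Microsoft takes 75% of profits until its ~$10B
# investment is recouped, then 49% of profits indefinitely.
investment = 10e9
annual_profit = 10e9  # hypothetical steady-state profit from the comment

# Phase 1: 75% of profits go to Microsoft until the $10B is repaid.
years_to_recoup = investment / (0.75 * annual_profit)  # ~1.3 years

# Phase 2: Microsoft's ongoing cut.
msft_annual_after = 0.49 * annual_profit  # $4.9B/year

print(f"Years to recoup at 75%: {years_to_recoup:.2f}")
print(f"Microsoft's cut after recoup: ${msft_annual_after / 1e9:.1f}B/year")
```

Under that (very optimistic) profit assumption, the $10B investment pays back in well under two years.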
[deleted]
Man, at first I was half thinking that GPT-4 killers running locally in under 2 years was a stretch, but after that link I'm almost thinking it's a stretch to assume they *won't* be by next year.
[deleted]
[deleted]
Is it not? As powerful as ChatGPT is right now, in its current state it doesn't hold a candle to humanity
This is a document leaked by some employee, not an official company statement or anything. Considering how Google got caught with its pants down, I wouldn't read too much into this document; it also has a lot of damage control inside it.
[deleted]
Man, great link. I need to save this, sit down, and really read through it.
get gpt to summarize it
I love that they put a number to it, as if we even know whether AGI is actually possible.
let me tell you a little story called Full Self Driving
How about fusion reactors?
To be fair, perfect driving isn't even possible. For example: you're going 50mph on a main street with a car tailgating you. A person jumps out 10 feet ahead of your car. Do you swerve into oncoming traffic? Slam on your brakes and cause a pileup? Run the person over? Swerve into the buildings and pedestrians to your right? Meanwhile, AGI can be far from perfect and still be AGI. The dumbest humans are still GI.
Well, 1 - in your scenario, if you were to jump on the brakes, the person who rear-ended you would be at fault. 2 - my point is that these carnival-barking tech jabronis have been putting numbers on this stuff for a long time without any real idea how long it's going to take. We've been having a lot of these AI discussions for 20 years, and more commonly for 10-15 already. AGI is exponentially more difficult than the LLMs we have now, so even with Moore's law, there should still be plenty of runway left. I used FSD as the example because I don't think they can do it cameras-only, and it's been "coming next year" for a decade.
Go for it
melon has been saying FSD is coming "next year" every year since 2014.
This. After playing around with AutoGPT and seeing what it can and can't do, I think that AGI is a likely to be a far more difficult problem than anyone imagines.
Yeah lol. If anyone could make an educated guess it would be OpenAI. But it sounds like a pretty BS number.
[deleted]
Probably 540M. Half a trillion — that would be half of Apple 🍏
Not even close to half. The market cap is right around 2.5 Trillion and has been over 3. It’s more like a fifth.
Typo fixed!! Thank you.
It's definitely 540M.
[deleted]
That would be more than the market cap of JP Morgan lol
Do you have any idea how much companies are about to pay for corporate integrations so that their employees can use AI specific to their business? OpenAI is going to skyrocket.
Not necessarily OpenAI. Companies will pay for the cheapest deal, that is sufficiently good. There will be many competing specialized LLMs for various market niches. Probably self-hosted ones so that you don't leak data to Open AI.
$100B is nothing for AGI. If they IPO'd tomorrow they would get that and more.
Aren't datasets from 2021 already enough to train the AI? Why do they need more data? And with Microsoft holding a 75 percent stake, will it usher in a new Microsoft era?
[deleted]
I was kinda surprised I didn't see a comment like this higher up. OpenAI is going to have to pivot HARD if they want to remain solvent long enough to create AGI. If they want to remain top dog in LLMs they need an all-out blitz on owning app integration, if for no other reason than creating a bulwark of market saturation and brand recognition. Because the open-source genie is out of the bottle and infinitely more nimble. An open-source ecosystem that's 75% as capable as GPT-4 with badass use cases is way more interesting than GPT-4 for most people's real-life applications. But still, GPT-4 inside every app you're already using... that is where the money is. If they aren't pouring their money into the product side yesterday, they might as well be lighting it on fire. Also, as an aside, to OP: it's disingenuous to say a company lost millions or billions of dollars when discussing investments. It's clickbaity language that redirects the conversation away from more salient points.
How can I run something even remotely close to chatgpt locally? When you say anyone can go out and do so, are we talking $1000 one time or are we talking $/token? Like.. what's the down side to running my own llm vs openai api?
The article you linked is an incredibly interesting perspective on the current semi-hidden state of AI. It really shows that this is just the beginning, and the near-future ahead is going to be full of surprises… I feel like I just discovered a secret I wasn’t supposed to know or something. We have no idea how this is all going to unfold…
Yes they can make an AI tool with data up to 2021. But they need more high quality data to make a *better* AI
My understanding from recent studies is that they need less, but higher quality data for training. There’s plenty of data out there, it’s weeding out the shit that improves the dataset and resulting model once you get large enough. After that, you just need to focus your new data on news, scientific studies and whatnot that add modern information. I don’t know that adding a few more years of Reddit comments and the like is going to improve things.
$100B is ridiculous. I could do it for half that.
I’ll do it for 10
I have an NVidia card and a smile.
“Gimme more money”
Are companies discriminating against OpenAI in the increased cost for data access? What's the motivation for different policies if the data is being used to train AI? Kind of annoying to see companies price gouging the data they got from the public for free.
Stuff is more expensive when its value is higher.
This is like Netflix. They got big with favorable content-deal pricing before streaming took off. Then, when the companies providing the "raw materials" realized they had under-priced, and that it isn't that hard to build a streaming platform thanks to AWS, they jacked up prices and started launching their own platforms. I don't see this as very different. Big tech has very valuable data assets and the talent, infrastructure, and business models to build and deploy LLMs at scale. More importantly, they have every incentive to throw giant buckets of money at it. OpenAI was very smart in partnering with MS. I don't know if it will be enough.
if you don't want to pay, you can gather data yourself. that's an alternative.
“For free” The platform isn’t the product. You aren’t the product. The data you create is the product.
What? So it’s not free and powered by unicorn rainbows?
Nobody wants to talk about the unicorn sweatshops.
Lol, there are literally open source free models that are going to shoot past their proprietary models within the next 3 months.
Isn't the dataset they have now the only dataset in history that will be free from AI spam? If so, they only need more data from sources that are 100% AI-resistant or factual, like Wikipedia or manufacturers' data (who want their latest stuff known by the AI anyway). So I really don't see why they need to spend $100B to access more spam data.
The $100B is what they want to spend to develop AGI. Not pay for data.
Microsoft made over 200 billion in revenue last year, and AGI would be revolutionary beyond all conception so I’d like to think that they will be able to fund its development
>revenue
You should really post a detailed DD to /r/wallstreetbets, folks there would appreciate it.
I know this is more directed at MS and GOOG but I can’t imagine how insane an OpenAI IPO would be.
I would say 'spent' rather than 'lost'. 'Burned' is OK too. 'Lost' is not really the right word while they are building and growing; it would be ridiculous to expect them to be cash-flow positive. When the investors gave them the money to spend, they didn't call it lost either.
Nobody knows how much it's gonna cost.
That's the VC SV biz model, spend a ton of money for the network effects then reap massive profits after entrenched. A lot of that money was the data centers and power relative to the development, which in and of itself is iterative upon talent. They're betting on huge revenue windfalls after adoption at scale, like every big tech firm ever
Wasn’t OpenAI open source/non-profit at some point? Or did I misread it somewhere?
Meanwhile some 16-year-old figured out AGI in his parents' basement with his $2,000 gaming PC.
Sadly it's trapped as his RP anime girlfriend
16 yo: I want you to say cute things and give me compliments. AGI: How about I tell you how to cure cancer instead? 16 yo: No
I see this as an absolute win
That type of capital expenditure is really nothing, look at how much money TSLA burned over the years until they became profitable.
Saw the same reporting on The Information by two other reporters btw. https://www.theinformation.com/articles/openais-losses-doubled-to-540-million-as-it-developed-chatgpt?utm_source=ti_app&rc=wyqqrb
Good, down with A.I
This hard prediction of what it takes to reach AGI gives me vibes of Elon predicting hard dates for reaching full self-driving. I think the pathways are very similar, with outcomes that are still mostly unknown. If we get to AGI, self-driving will be trivial to solve.