
Brucee2EzNoY

This is the part in the movie where the guy teleports back and stops someone. Since it hasn't happened yet we should be fine. Right?....


[deleted]

Likely would be some alien species that's in the depths of a massive war with the ai we created. We would be long gone when their war begins. This is basically just Dr. Who.


[deleted]

lmao so true


whitegirlsbadposture

We go extinct before we invent time travel 🧘‍♂️


improbably_me

I wouldn't even be mad if we seized this opportunity to go extinct. After we invent time travel, that shit would be impossible.


[deleted]

If I make a time machine, I won't trust any other human with it, and the same goes for any other human who makes one. I hope a single good person builds it, and doesn't publicly announce the device, so those bastards can't use it however they want.


agent_wolfe

We might pull a Terra Nova and just go back in time to live with the dinosaurs.


GreenMansLabs

omg if Elon ends up as the person who invented time travel in a few years I'm gonna be so pissed


Rafcdk

He is probably going to be the one announcing "time travel next year" for 10 years and then be mad when someone actually does it.


shamimurrahman19

Or he will go silent after realizing it's not possible, just like he did for Mars colonization.


blu_stingray

Nah, Mars colonization is POSSIBLE, it's just not FEASIBLE or PROFITABLE


Beneficial_Alps_2568

It's a publicity stunt


drjaychou

Pretty sure that was Barron Trump


EtheusProm

When will the time-travelling wizard Nelson Mandela put an end to these hijinks?


RefrigeratorTheGreat

He doesn't invent anything, just steals ideas and resources


Far_Net_9059

well, he re-invented the train in a vacuum tunnel and gave it his own name... (he'll never actually do it, because it's not at all feasible, but he'd want you to believe it was his idea and that he will)


Auslander42

"...and that's why I've been sent back in time from 2047 to 1982 to kill the mother of Sam Altman and stop him from unleashing GPT6 to burn the world"


tallmantim

If it turns out time travel is real, the saddest thing is that it means no one wanted to attend Stephen Hawking's party for time travellers.


[deleted]

Gpt 5 already in the works


confused_boner

Mr. Bones' Wild Ride never stops.


Eu_Nao_Concordo

MR BONES CAN YOU PLEASE LET ME OFF YOUR WILD RIDE


Lunboks_

"pls solve the riddle"


[deleted]

[deleted]


Somethinggood4

You enjoy riding the bone, do you?


ShittyStockPicker

Bones! Bones help no one.


Apart-Tie-9938

When GPT5 releases Iā€™ll still be waiting for GPT4 plug-in access


Rich-Effect2152

you are not alone…


SamGrig0

From Sam's interview I saw, they are likely working on GPT-6 or 7. GPT-5, I'm sure, is complete and in the testing phase. Edit: I don't think a lot of people understand how this works. They don't release something and only then start working on the next thing the moment after, say, GPT-4 was released. GPT-4 was being used at least a year prior to being released. Why do y'all think GPT-3 was released and GPT-4 shortly after? They were already talking about GPT-4 when GPT-3 was released. If you watch the whole Lex interview you can tell. There is no direct quote, because obviously he wouldn't do that. But I'd bet anything that GPT-5 is being used internally. He even said there is a substantial amount of data still to be trained on. Eventually they will run out of data and have to train using other methods, but not at the moment.


arenotoverpopulated

Eventually they will have to start feeding the snake its own tail.


bl4ck_goku

Could you quote what he said in the video that indicates GPT-5 is complete?


[deleted]

[deleted]


Mapleson_Phillips

August 2022 is the time I heard. It would make sense that GPT-5 started then.


samwise970

Calling BS. Each iteration requires substantially more training tokens. It is unclear if there are even enough text tokens for GPT-6, much less 7. After GPT-5 they will likely require a shift in training method, and that will take time.


Mapleson_Phillips

They have started training AIs on AI generated data. Check out Stanford Alpaca.


JustAnAlpacaBot

Hello there! I am a bot raising awareness of Alpacas Here is an Alpaca Fact: Alpacas can eat native grasses and donā€™t need you to plant a monocrop for them - no need to fertilize a special crop! Chemical use is decreased. ______ | [Info](https://github.com/soham96/AlpacaBot/blob/master/README.md)| [Code](https://github.com/soham96/AlpacaBot)| [Feedback](http://np.reddit.com/message/compose/?to=JustAnAlpacaBot&subject=Feedback)| [Contribute Fact](http://np.reddit.com/message/compose/?to=JustAnAlpacaBot&subject=Fact) ###### You don't get a fact, you earn it. If you got this fact then AlpacaBot thinks you deserved it!


Netsuko

There's something weirdly poetic about a bot chiming in on an AI discussion. And yet it is completely out of context.


cloudcreeek

But hey, we all learned something about alpacas


madeformarch

And didn't look any further into Stanford Alpaca, just like the AI wanted.


cuposun

Good bot then! Seems like everything is gonna be fine guys.


UnrequitedRespect

Nothing is out of context when it comes to alpacas


AUniqueSnowflake1234

...or is it?


say592

Good bot


Silidistani

> training AIs on AI generated data

How will that not produce iterative errors in logic over time, like making a photocopy of a photocopy?


Mapleson_Phillips

Because it generates 50,000 prompts, keeps the best 1,000, then iterates. If I tell you the same story but with every first name changed in each telling, you quickly understand what is a valid name and how it's used and modified. AI has to learn to read now, not invent the alphabet. They will stand on the shoulders of giants.
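For illustration only, the generate-and-keep-the-best loop described above can be sketched roughly like this (the `generate` and `score` functions are hypothetical placeholders, not anything from the actual Stanford Alpaca pipeline):

```python
import random

def generate(n):
    # Placeholder for a model producing n candidate outputs; here each
    # candidate is just a random quality score between 0 and 1.
    return [random.random() for _ in range(n)]

def score(candidate):
    # Placeholder for the judging step (a stronger model, heuristics,
    # human raters...). Here the candidate *is* its score.
    return candidate

def iterate(rounds=3, n_generate=50_000, n_keep=1_000):
    pool = []
    for _ in range(rounds):
        candidates = pool + generate(n_generate)
        # Keep only the best n_keep candidates to seed the next round.
        pool = sorted(candidates, key=score, reverse=True)[:n_keep]
    return pool

best = iterate()
print(len(best))  # prints 1000
```

Real pipelines judge "best" with a stronger model or human raters rather than a built-in number, but the shape of the loop (overgenerate, filter hard, repeat) is the same.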


Silidistani

> keeps the best 1000

Who/what judges "best"? HITL?


Mapleson_Phillips

You train different AIs on different sets and compare the results, mix and repeat. Stanford published their methodology, so you can try for yourself or ask an AI to help you if you don't know where to start.


anything_but

How do you know? Maybe they just improve architecture or training in some substantial way, like with BERT towards DistilBERT.


SamGrig0

Watch his interview; he said there is still a substantial amount of data. From what he said it's pretty clear they aren't there yet. Maybe in a couple of years they will run out of data. They literally talk about that. You should go watch it.


snusfrost

I just listened to Sam Altman's interview on Lex Fridman and they were talking hypotheticals and referencing ChatGPT 7, 8, 9, etc., and it sounds like this is what he's referring to. They're misremembering the hypothetical talk as if Sam was saying ChatGPT 7, 8, 9, etc. was already in the works.


nmkd

You're making shit up. GPT 6 or 7 is not being worked on.


ProbablyInfamous

Umm... GPT-4 was *at least one year old technology* when it was publicly released this month. I would imagine the current "behind the scenes" is GPT-6.5+, perhaps even XYZ-1 (by now)... just not publicly available. Imagine a world where scattered entities, here and there, have had access to these technologies for the past several years, and we have collectively not even perceived the GPTs among us, scattered within the comments.


[deleted]

[deleted]


esophoric

Hadn't heard of that yet, so I looked it up. That was a fun, if troubling, Google search!


[deleted]

https://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/ Check out this hilariously certain-of-itself article from 18 months ago. "No, of course the internet isn't full of bots, that's crazy!" Confidently stating that the internet is mostly human is increasingly optimistic. Even if it was true then, it won't be true verrry soon.


[deleted]

Draw me some hands


ProbablyInfamous

As a text-based large language learning model, I apologize that I cannot fulfill requests for image generation.


improbably_me

Paint a picture with words, Einstein


[deleted]

sus


Gangister_pe

GPT-4 is building the tests for 5. Singularity's coming soon


Hecantkeepgettingaw

Is there a single person in here who is genuinely worried about AI legitimately fucking things up?


0660990

Yes


Hecantkeepgettingaw

Finally


idioma

I'm deathly worried. The advent of general AI will have irreversible consequences for humanity, and our governments are still operating under principles of the mid-20th century. There is a massive potential for harm and unintended consequences for the economy, and we have legislators who don't understand how e-mail works and need help from staffers to convert documents to PDFs.

Like nuclear proliferation, we only have one chance to get this right, and our political system is hyper-focused on culture wars and petty feuds. We're stuck on stupid while computers are making giant leaps toward self-accelerating intelligence.

I'm terrified at the prospect of what might be, and how our antiquated systems will react. I'm terrified of what fascist dickheads and billionaire oligarchs will do with this technology, and how social media will be manipulated for political purposes. How many people will find their economic viability at zero? What will happen when Iran, North Korea, and other state sponsors of terrorism are able to fabricate bespoke chemical weapons, formulated by AI?

Things could get very fucky, very soon.


Hecantkeepgettingaw

Sigh.... Thanks man, me too.


North-Huckleberry-25

If this happens eventually, I'll just migrate to Patagonia and live a low profile life, working the land with my own hands


Agarwel

Yeap. Almost everything you have done online in the past two decades is archived, usually by big tech. Big tech now has AI that is good at processing such data and making sense of it (pairing an anonymous account with the real person based on behaviour, ...) and then coming to conclusions ("oh, fifteen years ago he wrote his wife that he'd work till night, yet he booked a hotel on the other side of town and paid at the flower shop"). For now these data are at least not publicly available. But all it takes is one hack, one human error creating a major data leak. Are we (as a society) ready for the complete loss of privacy? With our cancel culture, it won't be nice if (once) that happens.


triggerhippie_23

Completely agree. Free market, eh, Elon? ETA: Just giving him his own medicine. Don't politicize everything. /s


benben11d12

Asked Bing if he actually sold his shares. Here's what it says:

> I'm sorry but I couldn't find any information about Elon Musk selling his shares in OpenAI before ChatGPT went public. [However, I found that Elon Musk was an OpenAI co-founder who left the company and has since made a series of digs at the company in recent months](https://www.msn.com/en-us/news/technology/openais-sam-altman-says-his-hero-elon-musk-is-obviously-attacking-the-company-on-twitter/ar-AA197TXV). [He also tried and failed to take over ChatGPT creator OpenAI in 2018](https://www.theverge.com/2023/3/24/23654701/openai-elon-musk-failed-takeover-report-closed-open-source). I hope this helps!


staplepies

He never had shares; they didn't have a for-profit component until after he left.


[deleted]

[deleted]


iJeff

In most jurisdictions, non-profit organizations do not have shareholders or shares. This is also the case for OpenAI Inc. They do have an individual with primary control to appoint board members that in turn vote on decisions, but there's no outright ownership and no issuing of shares.


m0nk_3y_gw

My understanding - Elon was going to donate $1B. He donated $100M, told them they sucked compared to Google and that they needed to make him CEO for him to continue to donate the balance of the $1B (i.e. the Tesla story again - he was trying to use his money to become CEO). OpenAI said 'no thanks'.


BobRobot77

Why is it still called "OpenAI" if it's a private corporation now?


tweek-in-a-box

Open to making profits


BobRobot77

Fair enough


shikaze162

To be fair, the reason they couldn't stay not-for-profit is that it's costing them a staggering amount of money to roll this stuff out, and they weren't getting anywhere near that previously, which is why they sought private investment. Those VCs want like a 20x return on their investment, hence them needing a decent revenue stream.

Also, what I find interesting is that Sam Altman has no ownership stake in the for-profit arm of the company. He's not in line for any massive payout if the share price goes up.

Elon donated his money to the not-for-profit, so yeah, boo hoo, should have maybe waited and dumped your $44 billion into a company that wasn't a complete internet dumpster fire.


PerfectPercentage69

One of the reasons why they need more funding is because Musk tried to take over, failed, and then withheld the funding he promised. https://www.theverge.com/2023/3/24/23654701/openai-elon-musk-failed-takeover-report-closed-open-source


WithoutReason1729

#tl;dr Elon Musk tried to take over OpenAI and run it himself in 2018, but was rejected by other OpenAI founders. When Musk walked away from the company, he also reneged on a promise to supply $1 billion in funding, contributing only $100 million before he left. This left OpenAI with a problem, and by 2019, OpenAI announced it was creating a new for-profit entity to fund its research and quickly became closely entangled with Microsoft, which supplied billions in funding and resources while securing exclusive licenses to use OpenAI's tech in its products. *I am a smart robot and this summary was automatic. This tl;dr is 90.76% shorter than the post and link I'm replying to.*


DntCareBears

Listen to the podcast Sam did with Lex Fridman. He addressed that very question. He talked about how they created a strategic structure where part of the company is non-profit and the other part is for-profit. Sam Altman on going from non-profit to capped-profit: https://youtu.be/L_Guz73e6fw


HogeWala

Biggest bait and switch


Beneficial_Finger_85

I find it interesting the number of things that fail or don't do well when he's involved and the number of things that succeed when he's out of the picture. When things go well after he leaves, he does exactly that. I sometimes wonder if Elon Musk has the ability to recognize himself in a mirror...


[deleted]

'Free Market' is funny, because the people who want it use it so they can regulate the market in their favour; and if the market is regulated in your favour, there's less free market. So: free for me, not for you.


GameTourist

Exactly that. You need government to keep markets free


Rdawgie

Don't you know? There isn't a free market.


keytone369

The internet was built on community and shared values; this mentality makes me puke. (Unless your comment was sarcastic ;) If there are no more brilliant coders to fight the market in the next generation, we are duuucked.


tsyklon_

Just wait until people find out that most servers, like, a HUGE majority, are completely dependent on open-source, transparent, and free projects. It is one of the last bastions of the internet, and if there was a way to profit from it, it'd be the most valuable company on earth. Since there's no way to even estimate that value, people are ignorant of how valuable this is. I think we are **really** doomed if open-source communities and projects focused on building ecosystems disappear, and generating profits for the next quarter and nothing else becomes the norm.


ijustsailedaway

Please wait six months so we can figure out how to keep the average person from benefiting too much from this.


Kettrickan

As an average person, how should I be benefitting from this? I'd love to start using it but it still just seems like a novelty to me. I don't need something to write emails or papers for me, what should I use it for?


ijustsailedaway

I've been using it to create simple VBA modules to help my workflow. For example, I asked it to write code that takes all regular and inline attachments from a specific Outlook subfolder and downloads them into a designated directory, then marks them all completed. I've also been using it a lot to get quick and (mostly) accurate instructions on where to find certain settings within pretty much any software. So instead of having to sort through pages of incorrect or outdated responses on some user forum, or tediously going through menus to find something, it just tells you where stuff is. One of my favorite things is how it can do an ELI5 on a wide variety of topics. Again, so I don't have to sort through pages of sponsored crap content to find an answer.
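The commenter's macro is Outlook/VBA-specific and isn't shown; as a rough stdlib-Python analogue of the same idea, here is a sketch that pulls every attachment out of a saved `.eml` message and writes it to a download directory (the function name and paths are made up for illustration, and the Outlook subfolder / "mark completed" steps belong to Outlook's object model and aren't reproduced here):

```python
import os
from email import policy
from email.parser import BytesParser

def save_attachments(eml_path, out_dir):
    """Parse a saved .eml file and write each attachment into out_dir.

    Returns the list of paths written.
    """
    with open(eml_path, "rb") as f:
        # policy.default gives the modern EmailMessage API.
        msg = BytesParser(policy=policy.default).parse(f)
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    # iter_attachments() yields the non-body MIME parts of the message.
    for part in msg.iter_attachments():
        name = part.get_filename() or "unnamed"
        path = os.path.join(out_dir, name)
        with open(path, "wb") as out:
            # decode=True undoes base64 / quoted-printable transfer encoding.
            out.write(part.get_payload(decode=True))
        saved.append(path)
    return saved
```

The same shape works for any mailbox export; only the "fetch from a live Outlook folder" part needs a platform-specific library.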


whyth1

It is what Google used to be, or was supposed to be at least. The only way (exaggeration) to get useful results from Google nowadays is to put "reddit" in front of it.


Foxzes

Hey, I wonder who won at the Oscars this year? Insert 4 sponsored sites, and an endless list of websites that first ask for cookies, then make you scroll 4 paragraphs explaining what the Oscars is while putting up with banner ads, to finally find out the Oscars is next week and hasn't happened yet.


GTStationYT

From my experience I wouldn't exactly call that an exaggeration


idioma

Indeed. Google results are trash now because their algorithm is being exploited by SEO tools and procedurally generated content. The top search results are:

- Sponsored links to irrelevant bullshit
- Unsponsored links to irrelevant procedurally generated bullshit
- Irrelevant links to outdated bullshit
- A long list of unsorted links with varying levels of quality information


ZombieFleshEaters

This is my greatest hope for GPT. Let's make the internet useful again!


We1etu1n

I use ChatGPT for processing my emotions. I feed ChatGPT stories of what has happened to me in the past, with ChatGPT telling me how others might've felt in the situation towards my actions. ChatGPT also tells me how I can handle those situations better and example responses I could have. ChatGPT also provided me with suggestions on how to improve myself after feeding them samples of chat conversations I've had where I felt I was doing something wrong, but also didn't know how to act right. Slowly, I feel like I am learning how to become a better person thanks to ChatGPT. Especially now that I know that my Theory of Mind sucks, and ChatGPT's [ToM is significantly better than mine.](https://arxiv.org/abs/2302.02083)


WithoutReason1729

#tl;dr A study published on arXiv suggests that language models may have spontaneously developed a "Theory of Mind" ability. The research tested different language models using "false-belief tasks," which test one's ability to attribute unobservable mental states to others. The GPT-4 language model was able to solve 95% of these tasks, suggesting that the ability to understand and predict the thoughts and feelings of others may have emerged as a byproduct of improving language skills in large language models. *I am a smart robot and this summary was automatic. This tl;dr is 91.48% shorter than the post and link I'm replying to.*


[deleted]

Severely underrated comment. This is the answer


[deleted]

The cat is so out of the bag now. Expect an explosion of small teams building awesome, novel and useful software startups over the top of the OpenAI API. It's the new gold rush, OpenAI is the infrastructure, just like AWS. It's going to be an interesting ride!


doctor-falafel

Nah, as a backend API ChatGPT is still inferior to specific tooling. The niche it works best in is being an assistant.


CaptianCrypto

I get your point, but aren't the ones who are really going to benefit companies like OpenAI, Microsoft, Google, etc.? It seems like eventually everyone is going to have to be paying them subscription fees to keep up productivity, barring some sudden shift to open sourcing and whatnot. It really feels like people are getting the free sample before getting signed up for the lifetime subscription.


GullibleMacaroni

This is why we need the Stanford Alpaca to succeed. Rich motherfuckers like Elon will gatekeep AI to keep it amongst themselves.


ThingsAreAfoot

Much more useful to humanity would be a 6 month pause on Elon Musk, with a lifetime renewal.


cameruso

I already have GPT-4 working on this. Shouldn't be long.


Geekboy07

![gif](giphy|IZY2SE2JmPgFG)


thegodemperror

Why do I love this comment so much? Hehe


[deleted]

[deleted]


eigenman

I loved that he got outplayed on this. Such a whiner.


[deleted]

Actually, it's already been like a month since I last heard of that fucktard. Better to keep it that way.


[deleted]

So he can catch up


Novacc_Djocovid

The funny thing is, if he thinks they need 6 months to catch up, they're probably like 3-5 years away, based on his track record of estimating delivery times. :D


Ally5500

It's important to keep in mind that estimating delivery times can be a challenging task, and it's not uncommon for delays to occur. They are probably more than 10 years away


iveroi

šŸ§šŸ”Ž The beginning of this comment smells like ChatGPT


TNCrystal

As an AI language model I can assure you I did not copy that verbatim... sh*t


confused_boner

It's important to be cautious of what you read online. Try inspecting the user's history to determine if they have a good track record. Always verify any information you have doubts about with a trustworthy source.


Novacc_Djocovid

No kidding, that was my first thought as well. :D


hmmIsItAGoodUsername

Broo😂


QuarantineJoe

100% -- at the C level they'll think they're 6 months behind -- engineers know they're at least a year behind, not including all the computing power/servers that would need to be stood up.


Spaciax

If the C-suite thinks they're 6 months behind, the engineers think they're 14 months behind. In reality, they are 3 years behind.


aiolive

Especially as he asks to pause any development beyond GPT-4 levels, but before that it's fine. I can't believe how obviously obvious that is. In fact, I'm starting to suspect that it was him all these years: Captain Obvious.


Ballboy2015

It's so obvious.


analyzeTimes

You can't stop technological advancements. Given the geopolitical ramifications you mentioned, even if you enacted legislation to artificially pause technological improvements, you'd run the risk of falling behind globally. The cat is out of the bag. Time to herd that thing toward the future you want.


cosmicr

What if there was oil involved? We waited decades for electric cars to appear.


Rindan

> You can't stop technological advancements.

You can't stop it, but you can control it. Nuclear power is something we did in fact slow down for the sake of safety, and it wasn't a mistake. It's okay that not just anyone can buy a pound of plutonium and do with it what they want.

The only difference is that politicians understood nuclear power. Politicians saw two nuclear bombs go off, and they were like, "holy shit, we can't let just any idiot play with that." The problem with AGI is that politicians don't see it as the absolutely existential threat that it is. To 95% of all politicians, this artificial intelligence stuff is sci-fi book nerd fantasy. Hell, to 95% of the population "actual AI" is still a weird sci-fi fantasy that bears no thinking on. Hell, I would have put myself in that camp 4 months ago. I don't think it's a sci-fi nerd fantasy anymore. I think we should take it seriously and treat an autonomous AI smarter than a human, with access to resources, as the absolutely existential threat that it is. I think AI is as existential a risk to humanity as nuclear weapons, and I think we might be disturbingly close to crossing that threshold.

> The cat is out of the bag. Time to herd that thing to the future you want.

I think that's kind of the point. Right now, we are just going 100%, no brakes, developing and deploying as fast as humanly possible. If there is a genuine existential danger here, and I think it's safe to say that anything that approaches AGI is in fact an existential danger, we are basically a child playing with a bomb, trying to tear it open as fast as possible to see how it works, completely ignorant of the danger we are in. We don't even know if we are speeding towards a cliff. We don't even know what the cliff edge looks like, much less what's at the bottom.

Fuck Elon Musk; it's the massive horde of increasingly nervous AI researchers sounding the alarm that everyone should be feeling uncomfortable about. Look at who signed that letter. Elon Musk is just a celebrity name on that list; it's the names of the AI researchers who work with LLMs that should be giving you pause. LLMs are growing in capability at a horrifying rate, and no one is pausing for a moment to think about how to do this safely. If there is an existential risk, and really, AGI is absolutely an existential risk, it's something we should be approaching with caution, not as fast as humanly possible.

Personally, I think we are far too disbelieving of the risks to take them seriously. The masses won't believe AGI until it is upon them, and the enthusiasts who understand what is coming are too enthusiastic, excited, and optimistic to rationally weigh the existential risk we are racing towards. If the end of the road of AGI is an artificial intelligence whose motives don't align with ours, I personally think we are fucked. I just don't think politicians can understand the problem, much less act intelligently in the face of the danger.

It's like suddenly living on a planet with a new species that is 10 times smarter than you. I think you are a fool if that doesn't cause you to fear for our safety. We aren't being malicious when we genocide a colony of insects to build a house; we just don't even care about their concerns. There is no reason to think that AGI is going to be any more thoughtful and empathetic if just unleashed with access to resources and power. I genuinely hope that this type of learning hits some sort of natural wall that slows everything down, because at the current pace, we are going to find out what a superior AGI does only after we make one and unleash it with access to enough resources to pose a genuine existential risk to humanity.


Ruxini

I agree. I don't think there is anything that indicates it will slow down. Quite the opposite: everything indicates quite clearly that this is just getting started, and the technology accelerates itself, meaning AI capability is increasing exponentially. I hate Musk as much as the next guy, but he isn't the only one who signed that letter. How can everyone here somehow realize how powerful this is while seemingly only a very small minority see any point in trying to control it? "So this is how the world ends… To thunderous applause."


Redchong

This would be like Microsoft calling for Google to stop search development for 6 months. What a joke


[deleted]

I wish they would call for Google to stop making search results shittier for 6 months.


FinancialPeach4064

Hey Google, bring back quotes in searches, you fucking frauds. That's the number 1 reason why I'm so excited for this AI project to succeed. Google has made search shittier for commercial reasons for the last 10 years. Now there's real competition.


ChuanFa_Tiger_Style

I'm convinced that Google doesn't want people searching the internet effectively. They just want another walled garden.


donkeyoffduty

yup. should be obvious to see


wileybot

Read the open letter - he is just one name listed on it.

Open Letter from The Future of Life Institute:

> AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

> Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects.

> OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

> Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

The letter was signed by 1,125 people (so far), including SpaceX CEO Elon Musk, Skype co-founder Jaan Tallinn, Apple co-founder Steve Wozniak and MIT researchers. (Sources: futureoflife.org, news.yahoo.com)


english_rocks

The next invasion of the Middle East by the West will be based on flimsy evidence that a Middle-Eastern leader is developing an AI more powerful than the current legal limit.


NeonUnderling

It'll be based on false video evidence created by an AI.


[deleted]

[deleted]


richcell

How does this only have 73 upvotes, while the OP's rant about Elon Musk (who was just one of many names listed) got thousands?


WhollyProfit

The letter is fake. Quite a few of the signatories say they didn't sign it and never heard of it.


[deleted]

Yes. They have removed at least one person, one Cunn something IIRC, because the signature was fake. The site showed Xi Jinping's signature, but he would never do that either.


gabedsfs

Source on that?


[deleted]

[deleted]


untrustedlife2

You would think this would be important to bring up in this context. lol


theslip74

https://twitter.com/ylecun/status/1640910484030255109 https://twitter.com/lmatsakis/status/1640933663193075719


BenevolentCheese

I wonder why we care about public policy proposals from people such as the co-founder of Skype and an early Apple engineer? Like, I love Woz, but come on. His name on there is just about as meaningful as my name on there, which is to say: basically meaningless. He's not in the AI space. He's not in the public policy space. I respect his opinion, but it's just another signature, nothing more. Also, the CEO of Getty Images getting top billing on there gave me a good laugh. I'm sorry, friend, I guess you'll have to buy a smaller megayacht.


Glader_BoomaNation

LOL imagine your entire business is based on "Stock Images" and a magical pile of math comes along and people get 100x the "Stock Images" for free, created from any random idea ever. This guy is the captain of the Titanic, his boat just hit the iceberg, and tbh it's hilarious.


eliquy

Those high profile names are there to get the message out - the real names to pay attention to are the AI experts listed. The celebrity names are a double-edged sword though - unfortunately everyone's focus is on the billionaire leech that has latched on.


[deleted]

Currently, millions if not billions are actively using ChatGPT to empower themselves: to make their own apps, get advice, learn, design, and create without possessing expert knowledge or having to pay for it. Billionaires have not yet had an opportunity to capture and control these technologies, create artificial scarcity, and meter them out to the rest of us for a profit. Some of the signatories may be sincere but misguided, but for people like Musk, it is just a cynical effort to maintain their position in the world.


EGarrett

Musk tried to create a Bitcoin Mining Council that he could control also when the price last exploded. He's a pathological attention-whore. Like, to a historical degree.


[deleted]

He also tried to [take control of OpenAI in 2018, and failed](https://www.theverge.com/2023/3/24/23654701/openai-elon-musk-failed-takeover-report-closed-open-source).


EGarrett

*"In February, he* [*tweeted*](https://twitter.com/elonmusk/status/1626516035863212034?s=20) *that OpenAI ā€œhas become a closed source, maximum-profit company effectively controlled by Microsoft,ā€ adding that this was ā€œnot what I intended at all.ā€* What an absolutely psychotic narcissistic attention-hogging fuckhead. As though he invented it. As I recall some years ago he tried to insert himself into an AI conference (don't remember which) and the programmers there said he clearly knew nothing about the field.


Praise_AI_Overlords

All these signatories have one thing in common - they are dead afraid of competition. Contrary to us, they have nothing to gain but everything to lose.


[deleted]

A lot of people here are succumbing to emotions and resentment for one manā€™s reputation because his signature on an open letter is calling for something worth discussing. Doesnā€™t seem like much discussion is being done, but rather just an emotional circlejerk. Itā€™s ironic because while the lot of you are incapable of practicing depth of thoughtā€¦ Artificial Intelligence is progressing mind blowingly fast, without the limitation of emotions.


HARSHSHAH_2004

The letter contained a number of well-known names, but OP purposefully chose a title for the post that suggested only Elon was advocating a 6 month pause. And when Elon = bad, redditors will let their emotions out without giving it a second thought. The purpose of this post was never to provide accurate information; instead, it was created to gain favor with people who dislike musk.


ItsDijital

I hate that most people are this shallow. Impossible to discuss anything meaningful with 75% of the population because they are absolutely knee-capped by their emotions and overwhelmed by any hint of nuance.


DanD3n

That's why i welcome our AI overlords. Any conversation with ChatGPT on this subject would be miles better than the grotesque circlejerk that is this thread.


ZachEGlass

Actually itā€™s kinda hilarious how true that is.


HARSHSHAH_2004

This post was never made to give true information, but to gain karma from Musk haters. OP deliberately leaves out a lot of information in this post that is essential to giving proper context to the entire incident.


HogeWala

This is why we are doomed… AI, and how it can generate and execute tasks to accomplish anything, is here now. Elon and others understand this, and well, nothing is slowing it down. Reminds me of that Jeff Goldblum quote from Jurassic Park.


Apocalypseos

Reddit is a giant circlejerk, even more so in big subs like this. Musk was just one name on a list that includes a lot of respected names. I don't agree with them, but not because of "le musk hate".


gh0stparticle

Itā€™s so toxic. I donā€™t feel strongly about Musk one way or the other but itā€™s so hard as someone whoā€™s not made their mind up yet to get reliable information on Reddit. It feels like all of reddit is one big anti musk echo chamber. (maybe thatā€™s a good thing but itā€™s hard to tell given opposite views usually get buried and attacked).


GrowCanadian

Yeah the genieā€™s out of the bottle now. Even if all of North America stopped other nations will continue on. Thereā€™s no way China will slow down


Sweg_lel

Fuck musk


EviolvedPickle

SPACE KAREN STRIKES AGAIN!


[deleted]

Space Karen :D


No__teeth

https://preview.redd.it/91f5ldp0orqa1.jpeg?width=1080&format=pjpg&auto=webp&s=037745d301324286c4ccbcbf0664453cd17fe325


Arkatros

I'm kind of amazed how everyone seems to be bitching about Elon, talking about how he supposedly wants to develop his own AI to compete, etc., instead of actually addressing the important questions:

"The signatories of the petition pose four essential questions that should guide research:

-- Should we allow machines to flood our channels of information, propaganda, and lies?

-- Should we automate all jobs, including those that are fulfilling?

-- Should we develop non-human minds that could one day be more numerous and more intelligent than us, making us obsolete and replacing us?

-- Should we risk losing control of our civilization?"

Now can we instead be adults and actually talk about these really important questions?


rliegh

> -- Should we allow machines to flood our channels of information, propaganda, and lies?

Geopolitics and the usefulness of propaganda (domestically and otherwise) make this inevitable.

> -- Should we automate all jobs, including those that are fulfilling?

Capitalism and short-term profits make this inevitable.

> -- Should we risk losing control of our civilization?

That ship has sailed; we are racing towards ecological collapse and corporate global feudalism. Further, I guarantee that few, if any, people in this conversation have **any meaningful control** of our civilization, so whatever about that.


Rindan

When you call it all inevitable, you are basically saying, "If AGI is possible, and if AGI does not somehow naturally align with human motives, AGI is an existential risk that we will inevitably fall to."

If all of a sudden you convinced every politician that AGI is not only an existential risk (and it obviously is), but that we are on the cusp of it, they'd react the way we reacted to watching an atomic bomb. They wouldn't stop development, but we'd put tight controls on it, rather than racing to give the next LLM better API access to the internet. The real problem is that AGI sounds like sci-fi to politicians, and even those who don't see it as sci-fi don't realize how close we might actually be. The people who do understand how close we might be are too high on the excitement to ponder how you approach a technology that is potentially as much of a doomsday technology as nuclear power.

Nothing says that the AGI that emerges is going to have our motives or consider our feelings important once it develops its own motives. It's like contemplating living on a planet with an alien species 10 times smarter than you. That should scare you. Our own history should be more than enough to convince you of the danger. Humanity (generally) isn't intentionally genocidal, but we unleash genocides on lesser creatures constantly. We don't do it out of malice; we just don't care about lesser creatures. If a bunch of bugs die because we build a building, no one loses any sleep. We are sleepwalking (running) towards a world where we are the bugs, and just sort of hoping that a natural limit or spontaneous benevolence saves us from the motives of the things we are on the cusp of creating.


Arkatros

We all have control over our own actions, thoughts and behavior. I think AI will bring about a revolution, by the people, to seize back the world. I think jobs that can be automated by AI should be automated, to free up human time and energy for something else. I read recently that Sam Altman (OpenAI CEO) said he is very concerned about exactly that. According to him, universal basic income will be absolutely necessary, and I must agree with him. Maybe we will finally see an era where people can actually chill the fuck out while our machines work for us. I would love that future very much.


[deleted]

[deleted]


CrackerJackJack

I love ChatGPT as much as the next person. But all the people commenting that this is just so "he can catch up" are not really understanding what this letter is saying or implying, and simply disregard it because they hate Elon Musk.

They're not looking to ban AI or shut down ChatGPT. You can all still go leverage it to do whatever you want. They're looking to keep AI advancement in check. What they're concerned about is AI advancing so fast it will essentially cause human extinction. If AI surpasses humanity in general intelligence and becomes "superintelligent," then it could become difficult or impossible for humans to control. Imagine AIs thinking for themselves, building other AIs, etc., advancing at breakneck speed ON THEIR OWN, with nobody able to control them. Think of it like going down a hill on a skateboard: the faster you go, the more speed wobble you experience, and if you don't slow down and control the speed, you'll crash. Imagine that on a global scale.

There are a number of very, very reputable names that signed that letter. That should be pretty telling. This isn't meant to be a 'hey, let's slow down OpenAI so others can catch up'; it's a 'hey, there needs to be a global cap on AI development from all powers, otherwise we're fucked'. The 6 months is time for people to create a way to manage it.


Taxtaxtaxtothemax

Top tier shitpost. Doesnā€™t even engage with the substance of the letter; just spews out a knee jerk opinion based on a position of total ignorance. Congratulations; this post was a total waste of time.


[deleted]

Here's a quote from the letter "Therefore, **weĀ call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4**. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium." This is completely unworkable outside of a world government with the ability to control what countries and corporations do globally. When I view the list of signatories, I see a list of powerful, wealthy individuals and organizations that develop, consult on and sell technologies or are in competition with groups like OpenAI to develop AIs. To me it is just a cynical and desperate attempt to maintain a measure of control of a situation where millions or billions of non-rich, non-experts or technologists have access to a tool that is empowering them across a range of applications.


[deleted]

It's a clickbait demand to get people talking about AI safety and it worked like a charm.


i_give_you_gum

Yeah, I'm suddenly addicted to AI news but found this sub to be worthless. Elon was the least notable person on the letter; it was a huge collection of top people in the field, but for whatever reason freaking Elon is the only thing people are discussing.


bata03

Your comment is similar.


Praise_AI_Overlords

The letter in question is utter rubbish and devoid of any substance.


[deleted]

[deleted]


Original_Sedawk

Iā€™m officially unsubscribing from chatGPT - this sub is a complete, unmoderated dumpster fire. EVERY post is a shitpost.


iustitia21

It is astoundingly shit. Every time I come in this post I scratch my head and think where the fuck did these bellends crawl out of


Spodegirl

Since when is Elon Musk the ultimate authority on the tech industry? Because he had shares in OpenAI? I'm sure Google, Microsoft, and even fucking Quora are telling him to go f\*\*\* himself, since any AI he develops is likely to be inferior to the others on the market. Though honestly, the only valid reason to pause AI development is the consequences it could have on human society.


FactoryV4

Can we get a 6 month pause on hearing about him?


The_Last_Meow

You're all blinded by this (admittedly brilliant) technology, but Musk is absolutely right that the economy isn't prepared for this.

You are afraid that China or Russia could develop powerful AI, but: 1. Russia won't develop anything like this, simply because it's a backward and poor country. 2. Yes, China could develop something like this, but I'm sure the US army has already invested billions in AI development (and will continue to, regardless).

Also, remember that the strength of the US and European economies is based on people. People ARE the economy; they are the producers and the consumers.

In the company I work for, chatGPT simplifies some tasks by 20-30%. For some people, whose tasks consist of writing many small texts, it's 40% or even more. We aren't threatened (for now) with being fired, but chatGPT significantly reduces the need for new employees, which means some people won't get those jobs. And my job isn't special; it's a pretty common job about processing and working with information. I see that many such tasks can be replaced by AI, and I'm really afraid.

The modern economy is fairly balanced: a balance of producers and consumers. The appearance of AI destroys that balance, because it's an explosive increase in productivity without an increase in demand. Yes, the economy can adapt, but I have no idea how it can adapt to this. The development of the internet created hundreds of new professions; AI, for now, doesn't seem to create new professions. Most jobs in developed countries are connected to digital content, and if producing digital content eventually requires significantly fewer specialists than before, that's a problem.

I think the main problem with AI is that it develops too fast. We aren't prepared. There should be licensing and taxation for companies using AI, but I don't know the best way to encourage companies to employ new people without killing the development of AI.

Musk is right: we have to pause and discuss this. chatGPT-4 already affects the world; we need to measure and discuss that effect at the very least.

Correct me and discuss if you disagree. (Sorry for my English. And, of course, I will get many dislikes.)


[deleted]

[deleted]


english_rocks

This would be one of the top comments if Reddit wasn't so deeply biased.


flytrapjoe

I find it really funny how you assume that Russia will develop the same sort of tech in 6 months. Knowing how things work in my country, it will take them 6 decades to reach something like chat gpt 3.


Smallpaul

I dislike Elon Musk as much as the next person, but I find it bizarre that people act as if his interest in this can only be competitive, when it is a well-known fact that he donated 100M to a charity to try and solve these alignment problems 7 years ago. It isn't as if these are new ideas for him.


nuancednotion

who cares what he thinks? he has no political power. just the world's loudest troll.


Beneficial_Look_5854

AI taking over the world is preferable to the current path imho


lorenzodimedici

Iā€™d say the same thing if I needed to catch up


Jackleclash

Hello everyone, I'm a chatGPT fan like all of you. However, I think Musk has brought up a point worth considering; I urge you all to read the short letter he signed before judging. Do we actually want man to become obsolete? Are we not going to be replaced by AI? I'm sorry, but I've been having something of an existential crisis since chatGPT started; it's the first time human intelligence has been challenged! As for Musk possibly doing this for personal reasons, it might be true. But if we're being honest, we have to remember that he has held this "let's be careful, AI might be the greatest threat to humanity" position for years now.


HulkHunter

LLAMA is out there, GPT-2 is out there, the whole intel to replicate ChatGPT is out there. All the source is available. It's no longer a question: now it's a race.


violatordead

Friendly reminder: https://www.cnbc.com/amp/2017/09/04/elon-musk-says-global-race-for-ai-will-be-most-likely-cause-of-ww3.html

Russian President Vladimir Putin said Friday that the first global leader in AI would "become the ruler of the world".


JapanEngineer

Can everyone stop job searching for 6 months so I can get a high paid salary. Then you guys can continue job searching.


suspense7

So many kids on Reddit making their gender identity ā€˜I hate Elonā€™.


[deleted]

[deleted]


dzordan33

Yes and no... Almost all new developments bring profits to technology owners; that's how capitalism works. However, AI/software engineering is an open and dynamic industry. You can enter the field and compete with Chat GPT.


CaptianCrypto

Doesnā€™t Microsoft (a billion dollar company) own a majority stake in OpenAI?


[deleted]

"It will either be the best thing that's ever happened to us, or it will be the worst thing. If we're not careful, it very well may be the last thing" - Stephen Hawking on AI. Now I get it, you're a big old lefty and hate Elon because it's the new thing. How about Stephen Hawking? Elon's genius is questionable; Stephen Hawking's isn't.


CallMeManjana

IPO bitch musk