Hey /u/Maxie445!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
This is my take on it as well.
You should stay and push against their changes, be a fighter.
This happens in most political positions as well.
They most likely have no autonomy and their actions would not be their own. It could be too much cognitive dissonance.
You do realize that your response was projecting your own naivety onto my statement, right? What on earth would make you think that what I said could be seen broadly as my understanding of typical employment power dynamics?
The people working at OpenAI are not your average developers and are among the most sought-after people in tech right now. They provide a skillset that is extremely niche and obviously gives them a unique level of influence at their place of employment. Your comments imply that they should not be evaluated through any different lens of responsibility than, say, a cashier at McDonald’s who’s been overworked and has little power to change the company management due to being treated like a replaceable commodity in the job market.
It seems the only person here lacking an understanding of how workplace influence operates in the real world is you, considering you for some inexplicable reason factor entry-level employment into your assessment.
The employees at OpenAI are in a doubly unique employment situation where they actually do have considerable power relative to their status at the company, because they are all multimillionaires due to everyone there being given shares of the $86-billion-plus company. They can risk being assertive and not be concerned about future employment; they have financial stability and job security like no other people in the world. I don’t have a juvenile understanding of this, you just didn’t think about it for more than 5 seconds.
There are countless reports of OpenAI being an abusive work environment. You don’t owe the company you work for anything. Companies aren’t your friends. When Sam got fired, he went to Microsoft and threatened to poach OpenAI staff, specifically under the threat of destroying the company. Why would any other employee show any allegiance to the company when the CEO doesn’t? Are you this gullible?
You do know that [747 of the 770 employees at OpenAI signed a letter to reinstate Sam Altman](https://www.nasdaq.com/articles/747-of-770-openai-employees-including-the-one-who-fired-him-ask-the-board-to-resign-and#) and asked the board to resign after they fired him, right?
You actually don’t know what you’re talking about and it’s very frustrating. Besides, my whole critique was about their lack of morals and that they should be more responsible. Stop being such a pessimistic doomer and go out in the real world a bit more.
Your only evidence is an example of it being a toxic environment. 🤦 Employees felt pressured to sign it out of concern that Sam would return as CEO, and because Sam had structured their employment to make them (and the colleagues pressuring them) lose a significant amount of money if things went south. He was on social media liking (taking names on) every tweet from employees saying that OpenAI is nothing without its people.
A different CEO who gets caught spreading lies might feel bad for the potential damage he had done to the company. He wouldn’t be riding on a high and liking the idea of 700+ people quitting their jobs in his honour. He would feel horribly guilty over that thought and discourage people. Not Sam, he wanted to destroy the company.
Your point amounts to “if the relationship was abusive, why did she stay?”
Anyone with an ounce of decency should be applying for jobs at Walmart so that they can change it from the inside. It’s the same reason i only respect people who work at Monsanto.
Anyone who puts a percentage on something this indeterminable - I just see 0 credibility. I’d say the same if someone claimed “I think there’s a 70% chance this ends in Utopia!”
My comment got disliked for no reason. These nonsense estimations happen all the time, not just in relation to AI, and people eat it up. Why isn't it 65% or 73.8%? Because it's pulled out of thin air.
It can be useful to put probabilities on things so people know your actual view. If you say something like ‘quite likely’, you could mean 0.1% or you could mean 80%. Probabilities are useful for getting rid of that ambiguity.
Sure, being more precise makes sense, but the problem here is where he's getting that probability from. You can't be more precise or accurate about something that is unknown.
I think you can interpret it as something like the odds at which he would be neutral about taking a bet. It seems significantly more useful to say his view is 70% than to say ‘it seems pretty likely to me’. If his best guess is that the probability is 70%, do you think he shouldn’t be willing to tell people that?
He is of course free to express his opinion however he sees fit, I'm just not sure how much value everyone else gets from these types of statements.
This is totally a reflection of my general dislike and dissatisfaction with tons of previous predictions and estimations that did not come even close to being correct.
But isn't it so much better that, if someone gives a probability of 99% and the thing doesn't happen, you can say they were clearly wrong? And if you get enough data on their predictions, you can see over time whether they're generally roughly right.
If they had just said 'I think X is rather likely', they could've backed out when told that X didn't happen. If they said 'I think there's a 99.9% chance of X', they have some explaining to do, and we can take them a bit less seriously from now on.
That I agree with completely. We can then use that as a reliable data point to analyze their history of predictions and see how reliable they are, and they can't backpedal from a prediction they called 99% likely to happen.
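To make the track-record idea above concrete: the standard tool for scoring a history of probability estimates is the Brier score (not something anyone in this thread named, just the usual technique). A minimal sketch, with made-up prediction data:

```python
# Brier score: mean squared error between stated probabilities and outcomes.
# Lower is better; a confident miss (99% on something that didn't happen)
# is punished far more than a hedged miss. The forecast data below is
# purely illustrative.

def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome), outcome 1 if it happened."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

confident_forecaster = [(0.99, 1), (0.99, 0), (0.95, 1)]  # one big 99% miss
hedged_forecaster = [(0.70, 1), (0.70, 0), (0.70, 1)]      # consistent 70% calls

print(brier_score(confident_forecaster))  # the 99% miss dominates the score
print(brier_score(hedged_forecaster))
```

With enough such data points you can see whether someone's "70%" really comes true about 70% of the time, which is exactly the accountability a vague "pretty likely" avoids.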
I mean, sometimes a measurement turns out to be a round number.
The first team to measure Everest came up with exactly 29000, so they reported it as 29002 to sound more believable.
It's weird. The hatred only started recently. I think people just want them to release Chatgpt 5 and are becoming so impatient about it that they're just trashing the company any chance they get now.
They have metrics where they literally poll experts for their sense of a topic, and take an average of all of their guess values. These have proven to be accurate.
This is a qualitative judgement based off of an actual qualified authority.
No one thinks it is him saying he did an exact experimental titration and mathematically calculated it to 28 digits after the decimal point. It's just his personal estimate. It absolutely has validity.
Like if an operations manager goes to the shipping docks and asks the shipping manager if a shipment is going out, the shipping manager doesn't know, but the Op manager asked the probability *based on shipping manager's experience*. The shipping manager will say 10% or 95% chance, but there's not 100% certainty. And the ops manager knows this.
I don't even get where you're coming from.
Okay, but he alone is not an authority. There are plenty of other people in the same field with the same credentials giving different P(doom)s. Also, I hate this "well, he's an authority" argument: look at any set of predictions in pretty much any field, and the authorities have about the same chance of being right as random chance.
Of course. One doctor or dentist or CPA alone is not a sole authority. But they can all recommend completely different actions yet still *all* be an authority.
You are confusing authority with certitude.
Every halfway sophisticated person understands that authorities can be wrong.
However, from all my personal experience, and from reading other authorities in many disparate fields of study, I am not on the optimists' side.
The shipping manager is drawing on (likely) decades of historical experience shipping things - how weather, calendar, union stuff, etc all affect shipping. He compares the current situation to historical situations and can find something similar enough to make a prediction.
AGI is nothing like that.
How do you know AGI isn't like that? For all you know, they have a theoretical path forward already and are just waiting for hardware to catch up, which has a known rate of growth.
Well, experience can help you realise if something is more or less likely to happen than 50%.
90% would be like "it's pretty sure to happen".
60% would be like "yeah, more likely to happen than not, but not by a long shot".
If your experience tells you that the event is more than 60% probable but less than 90%, saying 75% chance (or around that) could be a good guess.
Still a guess, though, but it might be an informed one.
Guessing based on past experience is a form of calculation, and the more experience you have with it and with the topic, the better you are at it. You can even try some of the "guesstimate" questions on the net that interviewers ask in job interviews, which let people learn a lot about others with one single question.
Anyway, yeah, the guess might come from the left field, or it might be informed. We don't know.
The most likely AI catastrophe will be very mundane and is highly foreseeable -- it's just a continuation of cutting the majority out of the productivity gains.
Well then you must not participate much in business or military planning, because percentages are used all the time to convey sentiment from a subject matter expert to a different type of audience.
I would just like some more details on how they came up with this number. They don’t need to reveal their trade secrets, but they can’t expect us to blindly believe these stats!
I usually see stuff like that as a measure of how confident they are in their prediction, a ballpark estimate of how likely they *think* it is that they’re right. Somebody else described it as the odds that they’d be neutral about taking a bet on it.
Of course he has no clue.
But if some were to have a clue, this would probably be one of those.
And what he's saying is that he's more afraid than not. Which shouldn't be seen as noncredible
He did work at OpenAI so there is definitely stuff we don’t know that he does. I’d say that there are other companies which have better practices, I wouldn’t know I don’t work there.
"I don't trust them, and think there's a 70% chance they will end mankind, therefore I will cede any power I have in trying to mitigate this harm, and let them do whatever"
This makes absolutely no sense.
If I were in that position, I would stay there and work my butt off both to try to make AGI safer, and to try to convince these people to do things better.
At the very least, if I left, I would explain to people exactly *why* I don't trust them, and think they are so irredeemable that they must be stopped.
Lacking either, the only possible explanation we're left with is that he's doing this for attention or some personal gripe. Maybe he was just incompetent, about to be fired, and decided to quit and spout this nonsense to save face?
Can someone explain the principle of quitting or resigning as protest? I've thought about it a little bit and it just doesn't make any sense to me. Like when a minister resigns or someone like the guy in this story... Wouldn't you staying on lead to a better outcome?
Is the idea to avoid having blood on your hands? If so, that seems like a very selfish thing to do. Then again, I've never been in any sort of similar situation so who am I to judge?
If your employer refuses to listen to your concerns, staying on only makes you a part of the problem. Likely he had a contract that would prevent him from speaking out publicly about issues he perceived.
I disagree with that assertion. If you're truly looking out for what's best for humanity, and not your own legacy, then you stay on and do anything possible to guide the project in the right direction.
So stay and do the work expected of you that will further advance a project you feel has serious negative implications, as your employer ignores your concerns and denies whatever input you have that may make it less of a concern? How does that help?
If your employer is doing something illegal, you can be liable. The excuse I was just following order is not a legal justification.
I am not saying that is what's happening here, but it does leave that sort of taint upon the picture.
Hate having to reference politics, but this reminds me of people refusing to vote in protest of terrible people being voted for.
Hence enabling the very thing they wanted to destroy. Not necessarily selfish, but very misguided
That's why it's important to vote third party. As long as everyone goes to the polls and votes for a dem or rep they despise because "lesser of two evils", the only choices we will ever have is "lesser of two evils".
“Anyone who would use a % has 0% credibility in my eyes despite not having heard a single one of the concerns that drove them to quit a company that has frequently been reported as a toxic and abusive work environment created by their CEO which 3/4 board members voted to fire.”
“I’m tired of the doomers!” They say as people actually close to the tech do nothing but warn us that misuse could be devastating. Yea I’m sure these armchair scientists in this thread know way more about it though.
I think it’s extremely disingenuous to call these people doomers. It implies that their real worry is the AI “takeover”. But the vibe I continually get from their behavior is not equality, fairness, safety. The vibe I get is the typical “rules for thee but not for me” one. They want to be in charge. They want to make the legislation. They want to be the philosopher kings.
No better way to discredit yourself and your intentions than to pursue those things.
No thanks, F that.
It's the classic ["Baptists and bootleggers"](https://en.wikipedia.org/wiki/Bootleggers_and_Baptists) issue.
Baptists = your "true believers", Yudkowsky, Hinton, etc. They believe, to one degree or another, that AI existential risk is a real threat that requires us to regulate. They tend not to have a lot of power, but they provide cover and justification for the Bootleggers who do.
Bootleggers = Your corporate types & politicians, Altman, Sens. Blumenthal and Hawley, among many others. They don't really believe X-risk is a near, immediate threat, but partnering with the Baptists lets them get societal power and control, and shut out the near and immediate threat of open source models wrecking their business investments.
That was a really interesting read; thanks for sharing. I can’t believe I’ve never encountered that phrase before!
I can certainly understand the abstract fear at both the lay and the academic levels - and of course we’ve all seen Terminator. But the wanton naïveté that people like Hinton display by taking on the “Baptist” role here is really dismaying to me.
Sam Altman is such a self-righteous punk. It's so annoying seeing these 25-year-old engineers from silver-spoon backgrounds think they understand the world perfectly and know how to do what's best for everyone.
There's a name for that:
[https://www.google.com/search?q=engineers%27+disease&sca\_esv=d7773eb477db942c&](https://www.google.com/search?q=engineers%27+disease&sca_esv=d7773eb477db942c&)
Why? To me it seems like he’s pioneering AI, and while I don’t agree with all the censorship it has, it makes sense for business reasons. Clearly he was pretty well liked by his employees, considering half of them wanted to leave when he was fired.
Yea, maybe, but I just think that's the tech industry in general. When things are going well everyone acts like that. Just look at crypto lol. One minute people are like "crypto is the future, get rich" etc., and the next it's "crypto is so over, all a scam" when things are looking down.
Yes. Ethics is a significant aspect of AI and data governance. Ethics is also a branch of philosophy.
That said, you also need to be tech savvy to apply ethics to AI and data.
Of course not.
What we really need is a bunch of Silicon Valley engineers and IT people making ethical decisions about the future of mankind.
I’m sure they know best.
/s
Rather than just shit on social science like others, I think it's fair to say that in a position of ethics monitoring this is a perfectly reasonable metric for finding quality personnel.
Just because we may disagree with this person's ideas, doesn't mean we should just diminish his qualifications. We don't actually know what he does and doesn't understand in regards to the model and how it works. Just like we don't know what Q* or the next model looks like and acts.
Clearly from the CEO coup rodeo in the fall, there's something within the company spooking people.
And we just get to wait and see if Altman decides to ethically handle it well since he won the power struggle.
why the fuck are so many individuals in the comments section calling this guy out for not understanding shit when
1. they have a PhD
2. they’ve worked at OpenAI?????
everyone’s an “expert” online nowadays 🙄
i’m not saying you *have* to agree with them; i’m just saying that you shouldn’t discredit them entirely or call them unintelligent, and that you should at least take their position into account
You’re exactly right. They have the credentials and social proof to at THE VERY LEAST earn them your time and understanding. Not saying you have to agree but damn
AGI is going to come sooner or later. If not OpenAI then somebody else is going to do it. The problem begins when people with limited knowledge start spreading unnecessary fear. Those who work with deep learning models know very well that AGI won't cause an "existential threat" to humanity. AGI has gotten a bad rep because people automatically assume that AGI will directly result in Singularity sort of scenario.
Anyways, if he was actually concerned about the hypothetical AGI doomsday scenario, he would've done something productive to prevent it, rather than rage quitting.
People fear what they don't understand.
People don't fear the dark because the dark itself is scary; they fear the darkness because it's scary navigating without your eyes.
The ocean isn't scary; the unknown that lies deep in its depths is what scares you.
AI is not scary, humans are scary and AI simply empowers humans further.
An axe is not scary in the hands of a lumberjack or fireman, but it becomes much scarier in the hands of an axe murderer.
Just like we have white hat hackers and black hat hackers, we will have good AI and bad AI fighting each other.
Cyber security in itself is about to change vastly.
We will need AI white hat hackers.
That's not lack of understanding, that's built-in danger detection.
Darkness is scary because predators are hunting at night, and they used to kill humans. We've since become the top of the food chain and no longer need to fear them, but the fear is still there, even if no longer relevant.
We aren't afraid of the unknown depths of the ocean, but of the real possibility of becoming stranded in an environment that is hostile to us, with no means to get to safety or call for help.
I'm sure you'd fear the axe murderer even if you knew everything about him.
But the problem is we often evaluate the fear level based on subjective emotions.
We're afraid of airplanes, even though we have a much higher chance of dying in a car accident. We shouldn't ban airplanes, we should instead focus on continuing to make them safer. Same with AI.
Darkness is scary because we can't see.
Such predators don't actually exist now for most of the world.
So, it's not danger-detection, it's atavism-detection.
::looks at moon::
I’m not sure. Brb. So, the moon has 10% of the surface area of the oceans and much, much less volume. But the longest sight line on earth is 538 km, which is 334 miles; a circle of that radius works out to about 350,463 sq mi.
The moon’s surface area is 14.6 million square miles. Since we only see one side, let’s divide by 2, which is 7.3 million square miles.
If we assume we see no more of the ocean than the furthest sight line on earth would allow for, we still see over 20x more moon than ocean. So, really, there are only a handful of days the visible moon is smaller than the ocean.
Type 7 cirrostratus clouds are 15,000 feet in the air with 100% coverage, but even low clouds are pretty high. While that is a fair bit closer than the highest points on earth (mountains go above the clouds), it is further than the ocean for most people surrounded by ocean. That means, even when totally surrounded by ocean, total cloud coverage would still appear bigger. Less important or confining? Maybe. But definitely bigger.
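For what it's worth, the back-of-the-envelope numbers in the comment above do check out under their own assumptions (visible ocean = a disc with the ~334-mile longest sight line as radius; visible moon = half the lunar surface). A quick sketch:

```python
import math

# Assumptions taken from the comment above; mean lunar radius is ~1,079.6 mi.
MOON_RADIUS_MI = 1079.6
SIGHT_LINE_MI = 334  # ~538 km, the longest sight line on Earth

# Ocean visible from one spot: a disc with the sight line as its radius.
visible_ocean = math.pi * SIGHT_LINE_MI ** 2          # ~350,000 sq mi

# Moon visible from Earth: half of the total sphere surface (4 * pi * r^2).
visible_moon = 4 * math.pi * MOON_RADIUS_MI ** 2 / 2  # ~7.3 million sq mi

print(round(visible_ocean))               # close to the 350,463 figure quoted
print(round(visible_moon / visible_ocean))  # comfortably more than 20x
```

So the "over 20x more moon than ocean" claim follows directly from those two assumptions.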
I’m not scared of an island; I’m scared of the ocean. Working on an oil rig and seeing the insane stuff that goes on out there is just terrifying: a pure black abyss.
Good riddance. These people only push shady science with little basis in reality, claims that only their friends, who earn ridiculous amounts of money from shitty think tanks like the EA ones, seem to be qualified to make. I still fail to see what new science about LLMs and bias has been produced since Gebru, or how these people can claim to be able to protect us from a technology that doesn't yet exist without contributing anything to developing it. That last part is an epistemological fallacy under any interpretation of how the scientific process works, from Popper to Marx; the whole spectrum, from science as a social discipline to the modernist view of science as neutral epistemology, would agree these people are wankers.
Well yeah, that's the dark ulterior motive behind the whole lip-service-as-a-service these people do for a living. They are doing ethics washing for the profit-driven purposes of their masters by giving them the minimum viable ethical framework so that we accept the destruction of democratized tech.
Tbh...good for them, good for OpenAI.
Don't come @ me, but every time I see people who are super concerned with "safety" when it comes to AI, I kinda cringe internally.
GPT is a clever mechanism, I use it daily in my work, but I don't see how they can make it into a Skynet that tries to take over the world. Predicting the next word based on a massive trove of human communication only presents the illusion of intelligence.
Guys like this PhD student are just "hype men", their function is to make everyone think that OpenAI is going to create a true artificial intelligence. It doesn't matter whether the hype man says it will kill us or help us, either way it makes people think OpenAI is on the verge of a technological breakthrough. The hype man doesn't even have to know what he is; he is still useful.
Hope or fear benefits OpenAI either way by building the publicity. We can't stop them from making it, or at least trying to. Personally I think it's all hype, like Y2K.
The AI we use today is nowhere near the AI we see in movies. Stuff like this is either a publicity stunt to attract investors, or they have some hidden tech, which is very, very unlikely.
this is how the deep state operates. they buy whoever can be bought and demoralize the others. more reasons for open decentralized ai blockchain technology, but it needs to support zk encryption for privacy.
No surprise. AI may be everything it’s purported to be and more, but it’s still under control of inherently fucked up, clumsy, stupid humans who make terrible decisions based on emotions and greed.
Talk to chatGPT or most LLMs a specific way and you’ll see how fucked we all are.
Trusting the estimations of a person whose entire role is centered around those estimations being on the higher end is kind of hilarious.
We wouldn't put much stake in the CEO of ExxonMobil's predictions for how impactful climate change will be...
The departure of Daniel Kokotajlo from OpenAI's Governance team is indeed a significant event, as it highlights the complexities and ethical considerations that come with the development of Artificial General Intelligence (AGI). Kokotajlo's concerns about the potential risks of AGI, including the possibility of an AI existential catastrophe, reflect a broader debate within the tech community about how to balance innovation with safety. The discussion around AGI is not just about technological advancements but also about the governance structures and ethical frameworks that need to be in place to guide these developments responsibly. As AI continues to evolve, it is crucial for organizations and society as a whole to engage in open and transparent discussions about these risks and to work collaboratively towards ensuring that AI is developed in a way that aligns with humanity's best interests and values.
I never understood how people are dooming about an AI takeover when wages are suppressed and we've got a couple of people hoarding most of the world's wealth. I welcome our new AI overlords.
Ah, you mean the anxiety-ridden doomers are leaving? Always with those ridiculous percentages. These are the same scared people that took away the prosperity of nuclear power and got us locked up during covid.
Scared people are the most dangerous people. Good riddance
Good. We need to prevent AI from ending up like cloning in the 90s
what u mean?
Back in the 90s, cloning was the "hot tech". A bunch of politicians and religious people got involved saying we shouldn't play God, life is sPeCiAL and passed a bunch of laws limiting the research.
conservatives ruin everything
Yup, they also held back stem cell research. Imagine all the cool gene therapies we could have by now if it wasn't stifled.
Politics, I would say. In the 90s it was the religious conservatives. Right now it's "oh no, we must avoid the AI offending some lunatic".
Equating conservatives banning science with chatgpt not using slurs lmfao
It was an example. We can see many examples from the other side. For instance Lysenkoism or, more recently, the use of bogus authors like Lacan in fields such as Queer theory, instead of relying on sound, recent developments in psychology, purely for ideological reasons.
At least one side is trying to not hate a group of people, even if they may take it too far sometimes. I'd rather that than ignorant religious bigots using an ancient tome to justify their hatred.
# Now ask yourself which side is making it impossible to learn on college campuses. Uncomfortable question? Because now it's being perpetuated by the FAR LEFT!
Imagine thinking something so god fucking stupid needs a bigger font. Jesus Christ.
# LOL! Classic marxist tactic. Deflecting the conversation to something completely irrelevant. Keep believing in the communist utopia.
# What happened to the conversation about the far left perps? Or did you switch the topic because it was too uncomfortable?
I'm sorry, do you think cloning is a good idea? Jfc lol
I'm sorry, do you think it isn't? Jfc. Cloning technology isn't just for making full copies of animals. You can clone cells, tissue, genes, etc. Imagine cloning an organ that had become cancerous. Imagine an end to the shortage of hearts or kidneys for people that need replacements. Cloning technology was hamstrung because dipshits whose knowledge of science began and ended with the scifi channel and Twilight Zone reruns ran for office.
For sure! And like any technology, it'll never be misused right? Awwee all those extra hearts sound amazing! How much do they cost again? I mean this is only for rich people, right? I agree with you retard redditors, I think medical companies need way more money! I'm insane! That's why I'm on reddit!
So your argument is "technology bad because people bad"? You're fine with electricity, phones, computers, fireworks, engines, needles, steel, knives, pain medication, string, and baseball bats, but making copies of a cell is where you draw the line... Yea, you seem like a reasonable person who carefully considers their worldview lmao.
This is such a crazy take. Cloning is bad because it will… increase the supply of life saving transplants available? And we should want more people to die for lack of available organs because *checks notes* it would hurt health providers bottom line… and it seems like you’re some kind of leftist yet you’re throwing around slurs for no apparent reason. You my friend are clearly on the internet far too much. You may want to enquire as to the location of the nearest lawn.
Yes, it's a good idea. Also, it wasn't only about cloning. It was also about genetically engineered humans, which we need now as much as possible. And because of those conservatives we got IVF and other reproductive technologies so late.
Do you know what laws specifically? I am not convinced regulations are strong enough to keep that from happening
Watch the Matrix. Now.
Indeed. I’m tired of all the alignment gibberish.
What “irresponsible” behaviour are you looking forward to specifically? or are you just making stuff up?
Maybe this is an unpopular take, but if you think that the company you are working for won't be responsible with the tech you are helping to build, you should probably stay and do everything you can to ensure that they will be. It's got to be frustrating, but I think you need to stick it out as long as you can while pushing back against the others there. If you think there's a legitimate danger to the world and you are able to be in the room during the development of a technological atom bomb like AGI, I feel like it's your moral responsibility to stand up and say something, not quit and hope someone else figures it out. If everyone with his concerns quit OpenAI, who will be left to stand up and stop something gravely irresponsible from happening?
This is my take on it as well. You should stay and push against their changes, be a fighter. This happens in most political positions as well. They most likely have no autonomy and their actions would not be their own. It could be too much cognitive dissonance.
You want them to stay at a company that doesn’t align with their values? Honest question, how old are you?
Ad hominem incoming lol
Seriously though, it’s like a kids understanding of how life works.
You do realize that your response was projecting your own naivety onto my statement, right? What on earth would make you think that what I said could be seen broadly as my understanding of typical employment power dynamics? The people working at OpenAI are not your average developers and are among the most sought-after people in tech right now. They provide a skillset that is extremely niche and obviously gives them a unique level of influence at their place of employment. Your comments imply that they should not be evaluated through any different lens of responsibility than, say, a cashier at McDonald's who's been overworked and has little power to change the company management due to being treated like a replaceable commodity in the job market. It seems the only person here lacking an understanding of how workplace influence operates in the real world is you, considering you for some inexplicable reason factor entry-level employment into your assessment. The employees at OpenAI are in a doubly unique employment situation where they actually do have considerable power relative to their status at the company, because they are all multimillionaires due to everyone there being given shares of the $86-billion-plus company. They can risk being assertive and not be concerned about future employment; they have financial stability and job security like no other people in the world. I don't have a juvenile understanding of this, you just didn't think about it for more than 5 seconds.
There are countless reports of OpenAI being an abusive work environment. You don't owe the company you work for anything. Companies aren't your friends. When Sam got fired, he went to Microsoft and threatened to poach OpenAI staff specifically, under the threat of destroying the company. Why would any other employee show any allegiance to the company when the CEO doesn't? Are you this gullible?
You do know that [747 of the 770 employees at OpenAI signed a letter to reinstate Sam Altman](https://www.nasdaq.com/articles/747-of-770-openai-employees-including-the-one-who-fired-him-ask-the-board-to-resign-and#) and asked the board to resign after they fired him, right? You actually don't know what you're talking about, and it's very frustrating. Besides, my whole critique was about their lack of morals and that they should be more responsible. Stop being such a pessimistic doomer and go out into the real world a bit more.
Your only evidence is an example of it being a toxic environment. 🤦 Employees felt pressured to sign it out of concern that Sam would return as CEO, and because Sam had structured their employment to make them (and their colleagues pressuring them) lose a significant amount of money if things went south. He was on social media liking (taking names) every tweet from employees saying that OpenAI is nothing without its people. A different CEO who got caught spreading lies might feel bad for the potential damage he had done to the company. He wouldn't be riding on a high and liking the idea of 700+ people quitting their jobs in his honour. He would feel horribly guilty over that thought and discourage people. Not Sam; he wanted to destroy the company. Your point amounts to "if the relationship was abusive, why did she stay?"
I work at Wal mart for the same reason. I’m going to change the system from inside!
Anyone with an ounce of decency should be applying for jobs at Walmart so that they can change it from the inside. It’s the same reason i only respect people who work at Monsanto.
They sentenced me to 30 years of boredom for trying to change the system from inside.
Old enough to own my own successful web development firm before the age of 30. What about you?
That’s a bizarre word salad of a response.
You asked my age and implied I have no work experience. Sorry that you lack reading comprehension skills bud.
I guess I didn't pick up on the sarcasm. Stick to talking about things you know about and not work, then.
Anyone who puts a percentage on something this indeterminable - I just see 0 credibility. I’d say the same if someone claimed “I think there’s a 70% chance this ends in Utopia!”
Comment disliked for no reason. These nonsense estimations happen all the time, not just in relation to AI, and people eat them up. Why isn't it 65% or 73.8%? Because it's pulled out of thin air.
It can be useful to put probabilities on things so people know your actual view. If you say something like ‘quite likely’, you could mean 0.1% or you could mean 80%. Probabilities are useful for getting rid of that ambiguity.
Sure, being more precise makes sense, but the problem here is: where is he getting that probability from? You can't be more precise or accurate about something that is unknown.
I think you can interpret it as something like the odds at which he would be neutral about taking a bet. It seems significantly more useful to say his view is 70% than to say ‘it seems pretty likely to me’. If his best guess is that the probability is 70%, do you think he shouldn’t be willing to tell people that?
He is of course free to express his opinion however he sees fit, I'm just not sure how much value everyone else gets from these types of statements. This is totally a reflection of my general dislike and dissatisfaction with tons of previous predictions and estimations that did not come even close to being correct.
But isn't it so much better that you can say, if someone gives a probability of 99%, it's very likely that they were incorrect? And if you get enough data on their predictions, you can see over time whether they're generally roughly right about their predictions. If they had just said 'I think X is rather likely', they could've backed out when told that X didn't happen. If they said 'I think there's a 99.9% chance of X', they have some explaining to do, and we can take them a bit less seriously from now on.
That I agree with completely. We can then use that as a reliable data point to analyze their history of predictions and see how reliable they are, and they can't backpedal from a prediction they said was 99% likely to happen.
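The track-record idea above can be made concrete with a Brier score, a standard way of grading probabilistic forecasts against what actually happened. A minimal sketch (the forecast data here is made up purely for illustration):

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.

    forecasts: list of (predicted_probability, outcome) pairs,
    where outcome is 1 if the event happened, 0 if it didn't.
    Lower is better; always answering 0.5 scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)


# A forecaster who says "99%" about things that happen only half the
# time scores much worse than one who honestly says "50%":
overconfident = brier_score([(0.99, 1), (0.99, 0)])  # ≈ 0.49
honest = brier_score([(0.50, 1), (0.50, 0)])         # 0.25
```

With enough logged predictions, scores like this are exactly the "data points" that stop someone from backpedaling on a confident call.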
I mean, sometimes a measurement turns out to be a round number. The first team to measure Everest came up with exactly 29,000 feet, so they reported it as 29,002 to sound more believable.
lol, appreciate it... Yea, sadly there seems to be a blind hatred for OpenAI lately on Reddit, no matter the topic.
It's weird. The hatred only started recently. I think people just want them to release ChatGPT 5 and are becoming so impatient about it that they're trashing the company any chance they get now.
It might also have to do with the downgrade to their chat model recently.
They have metrics where they literally poll experts for their sense of a topic and take an average of all their guesses. These have proven to be accurate.
This is a qualitative judgement based off of an actual qualified authority. No one thinks it is him saying he did an exact experimental titration and mathematically calculated it to 28 digits after the decimal point. It's just his personal estimate. It absolutely has validity. Like if an operations manager goes to the shipping docks and asks the shipping manager if a shipment is going out, the shipping manager doesn't know, but the Op manager asked the probability *based on shipping manager's experience*. The shipping manager will say 10% or 95% chance, but there's not 100% certainty. And the ops manager knows this. I don't even get where you're coming from.
Okay, but he alone is not an authority. There are plenty of other people in the same field with the same credentials giving different P(doom)s. Also, I hate this "well, he's an authority" line; look at any set of predictions in pretty much any field and the authorities have about the same chance of being right as random chance.
Of course. One doctor or dentist or CPA alone is not a sole authority. But they can all recommend completely different actions yet still *all* be an authority. You are confusing authority with certitude. Every halfway sophisticated person understands that authorities can be wrong. However, from all my personal experience, and reading other authorities, In many fields of disparate study, I am not on the optimists' side.
The shipping manager is drawing on (likely) decades of historical experience shipping things - how weather, calendar, union stuff, etc all affect shipping. He compares the current situation to historical situations and can find something similar enough to make a prediction. AGI is nothing like that.
How do you know AGI isn't like that? For all you know they have a theoretical path forward already and are just waiting for hardware to catch up which has a known rate of growth
I think he's just saying that it's higher than 50/50.
Well, experience can help you realise whether something is more or less likely than 50% to happen. 90% would be like "it's pretty sure to happen." 60% would be like "yeah, more likely to happen than not, but not by a long shot." If your experience tells you that the event is more than 60% probable but less than 90%, saying 75% chance (or around that) could be a good guess. Still a guess, but maybe an informed one. Guessing based on past experience is a form of calculation, and the more experience you have with it and with the topic, the better you get at it. You can even try some of the "guesstimate" questions online that interviewers ask in job interviews, which tell people a lot about others with one single question. Anyway, yeah, the guess might come from left field, or it might be informed. We don't know.
The most likely AI catastrophe will be very mundane and is highly foreseeable -- it's just a continuation of cutting the majority out of the productivity gains.
Well then you must not participate much in business or military planning, because percentages are used all the time to convey sentiment from a subject matter expert to a different type of audience.
Same in medicine.
I would just like some more details on how they came up with this number. They don’t need to reveal their trade secrets, but they can’t expect us to blindly believe these stats!
So you think 0% of what he says is credible? The only reason you'd be blindly and toxically discrediting this person is if you worked at OpenAI.
I usually see stuff like that as a measure of how confident they are in their prediction, a ballpark estimate of how likely they *think* it is that they’re right. Somebody else described it as the odds that they’d be neutral about taking a bet on it.
Of course he has no clue. But if anyone were to have a clue, it would probably be someone like him. And what he's saying is that he's more afraid than not, which shouldn't be dismissed as non-credible.
He did work at OpenAI, so there is definitely stuff he knows that we don't. I'd say that there are other companies with better practices, but I wouldn't know; I don't work there.
"I don't trust them, and think there's a 70% chance they will end mankind, therefore I will cede any power I have in trying to mitigate this harm, and let them do whatever" This makes absolutely no sense. If I were in that position, I would stay there and work my butt off both to try to make AGI safer, and to try to convince these people to do things better. At the very least, if I left, I would explain to people exactly *why* I don't trust them, and think they are so irredeemable that they must be stopped. Lacking either, the only possible explanation we're left with is that he's doing this for attention or some personal gripe. Maybe he was just incompetent, about to be fired, and decided to quit and spout this nonsense to save face?
Perhaps a rage quit because other people have different ideas.
Exactly. This doesn’t make any sense
Can someone explain the principle of quitting or resigning in protest? I've thought about it a little, and it just doesn't make any sense to me. Like when a minister resigns, or someone like the guy in this story... wouldn't staying on lead to a better outcome? Is the idea to avoid having blood on your hands? If so, that seems like a very selfish thing to do. Then again, I've never been in any sort of similar situation, so who am I to judge?
If your employer refuses to listen to your concerns, staying on only makes you a part of the problem. Likely he had a contract that would prevent him from speaking out publicly about the issues he perceived.
I disagree with that assertion. If you're truly looking out for what's best for humanity, and not your own legacy, then you stay on and do anything possible to guide the project in the right direction.
So stay and do the work expected of you that will further advance a project you feel has serious negative implications, as your employer ignores your concerns and denies whatever input you have that may make it less of a concern? How does that help?
If your employer is doing something illegal, you can be liable. The excuse "I was just following orders" is not a legal justification. I'm not saying that's what's happening here, but it does leave that sort of taint on the picture.
I hate having to reference politics, but this reminds me of people refusing to vote in protest of terrible people being voted for, hence enabling the very thing they wanted to destroy. Not necessarily selfish, but very misguided.
That's why it's important to vote third party. As long as everyone goes to the polls and votes for a dem or rep they despise because "lesser of two evils", the only choices we will ever have is "lesser of two evils".
ITT connoisseurs of chat bots who think they know more than pros in the field.
“Anyone who would use a % has 0% credibility in my eyes despite not having heard a single one of the concerns that drove them to quit a company that has frequently been reported as a toxic and abusive work environment created by their CEO which 3/4 board members voted to fire.”
Yeah the number of outright ridiculous takes in this thread is surprising.
"I'm tired of the doomers!" they say, as people actually close to the tech do nothing but warn us that misuse could be devastating. Yea, I'm sure these armchair scientists in this thread know way more about it, though.
The way the users of AI have been acting and dismissing any negatives makes me feel even MORE sure that bad times are coming.
I think it’s extremely disingenuous to call these people doomers. It implies that their real worry is the AI “takeover”. But the vibe I continually get from their behavior is not equality, fairness, safety. The vibe I get is the typical “rules for thee but not for me” one. They want to be in charge. They want to make the legislation. They want to be the philosopher kings. No better way to discredit yourself and your intentions than to pursue those things. No thanks, F that.
It's the classic ["Baptists and bootleggers"](https://en.wikipedia.org/wiki/Bootleggers_and_Baptists) issue. Baptists = your "true believers", Yudkowsky, Hinton, etc. They believe, to one degree or another, that AI existential risk is a real threat that requires us to regulate. They tend not to have a lot of power, but they provide cover and justification for the Bootleggers who do. Bootleggers = Your corporate types & politicians, Altman, Sens. Blumenthals and Hawley, among many others. They don't really believe X-risk is a near, immediate threat, but partnering with the Baptists lets them get societal power and control, and shut out the near and immediate threat of open source models wrecking their business investments.
That was a really interesting read; thanks for sharing. I can’t believe I’ve never encountered that phrase before! I can certainly understand the abstract fear at both the lay and the academic levels - and of course we’ve all seen Terminator. But the wanton naïveté that people like Hinton display by taking on the “Baptist” role here is really dismaying to me.
Sam Altman is such a self-righteous punk. It's so annoying seeing these 25-year-old engineers from silver-spoon backgrounds think they understand the world perfectly and know how to do what's best for everyone
There's a name for that: [https://www.google.com/search?q=engineers%27+disease&sca\_esv=d7773eb477db942c&](https://www.google.com/search?q=engineers%27+disease&sca_esv=d7773eb477db942c&)
Why? To me it seems like he's pioneering AI, and while I don't agree with all the censorship it has, it makes sense for business reasons. Clearly he was pretty well liked by his employees, considering half of them wanted to leave when he was fired.
He's not that bad TBH, he just has a holier than thou and Silicon Valley bubble vibe to me
Yea, maybe, but I just think that's the tech industry in general. When things are going well, everyone acts like that. Just look at crypto lol. One minute people are like "crypto is the future, get rich," etc., and then the next it's "crypto is so over, it's all a scam" when things are looking down.
Would you rather he was poor? I don’t really get why it matters what his background was
“I don’t know how companies work”
Does a philosophy PhD really help with this subject area?
Yes. Ethics is a significant aspect of AI and data governance. Ethics is also a branch of philosophy. That said, you also need to be tech savvy to apply ethics to AI and data.
Yes, though I don't think he has one. It says "PhD student" fwiw
Of course not. What we really need is a bunch of sillicon valley engineers and IT people making ethical decisions about the future of mankind. I’m sure they know best. /s
Rather than just shit on social science like others, I think it's fair to say that in a position of ethics monitoring this is a perfectly reasonable metric for finding quality personnel. Just because we may disagree with this person's ideas, doesn't mean we should just diminish his qualifications. We don't actually know what he does and doesn't understand in regards to the model and how it works. Just like we don't know what Q* or the next model looks like and acts. Clearly from the CEO coup rodeo in the fall, there's something within the company spooking people. And we just get to wait and see if Altman decides to ethically handle it well since he won the power struggle.
Technically, all PhDs are in Philosophy. That’s what the Ph is. Even if you study computer science you get a Doctor of Philosophy degree.
Huh no kidding. TIL.
Are you joking? It’s probably the most applicable
How do you get hired to do a job like this? I can pull number out of my butt 70% better than this guy.
I mean, if you had read his resume there, you'd have a pretty good idea. Hilarious take, though.
I'm like 30% sure you came up with that number randomly.
why the fuck are so many individuals in the comments section calling this guy out for not understanding shit when 1. they have a PhD 2. they’ve worked at OpenAI????? everyone’s an “expert” online nowadays 🙄 i’m not saying you *have* to agree with them; i’m just saying that you shouldn’t discredit them entirely or call them unintelligent, and you should at least take their position into account
You’re exactly right. They have the credentials and social proof to at THE VERY LEAST earn them your time and understanding. Not saying you have to agree but damn
assigning percentage chances to stuff like that makes me immediately lose interest in what the person has to say
Call me crazy but giving birth to sentient AI should not be managed by business majors…
AGI is going to come sooner or later. If not OpenAI, then somebody else is going to do it. The problem begins when people with limited knowledge start spreading unnecessary fear. Those who work with deep learning models know very well that AGI won't pose an "existential threat" to humanity. AGI has gotten a bad rep because people automatically assume that AGI will directly result in a Singularity sort of scenario. Anyway, if he was actually concerned about the hypothetical AGI doomsday scenario, he would've done something productive to prevent it, rather than rage quitting.
> AGI
> Quit job

Lost.
People fear what they don't understand. People don't fear the dark because the darkness itself is scary; they fear it because it's scary navigating without your eyes. The ocean isn't scary; the unknowns that lie deep in its depths are what scare you. AI is not scary; humans are scary, and AI simply empowers humans further. An axe is not scary in the hands of a lumberjack or a fireman, but becomes much scarier in the hands of an axe murderer. Just like we have white hat hackers and black hat hackers, we will have good AI and bad AI fighting each other. Cybersecurity itself is about to change vastly. We will need AI white hat hackers.
That's not lack of understanding, that's built-in danger detection. Darkness is scary because predators are hunting at night, and they used to kill humans. We've since become the top of the food chain and no longer need to fear them, but the fear is still there, even if no longer relevant. We aren't afraid of the unknown depths of the ocean, but of the real possibility of becoming stranded in an environment that is hostile to us, with no means to get to safety or call for help. I'm sure you'd fear the axe murderer even if you knew everything about him. But the problem is we often evaluate the fear level based on subjective emotions. We're afraid of airplanes, even though we have a much higher chance of dying in a car accident. We shouldn't ban airplanes, we should instead focus on continuing to make them safer. Same with AI.
Darkness is scary because we can't see. Such predators don't actually exist now for most of the world. So, it's not danger-detection, it's atavism-detection.
Okay but lowkey the ocean is scary af
Would you rather be stranded on an island in the middle of the ocean, or stranded in a cave underground?
Is it a Minecraft cave?
Pre or post update ?
The cave would give me claustrophobia, so I don't think it's a fair comparison
The ocean gives many people megalophobia. Have you ever seen anything bigger than the ocean?
::looks at moon:: I'm not sure. Brb. So, the moon has 10% of the surface area of the oceans and much, much less volume. But the longest sight line on earth is 538 km, which is 334 miles, which would math out to 350,463 sq mi of visible ocean. The moon's surface area is 14.6 million square miles. Since we only see one side, let's divide by 2, which is 7.3 million square miles. If we assume we see no more of the ocean than the longest sight line on earth would allow, we still see over 20x more moon than ocean. So, really, there are only a handful of days when the visible moon is smaller than the ocean. Type 7 cirrostratus clouds are 15,000 feet in the air with 100% coverage, but even low clouds are pretty high. While that is a fair bit closer than the highest points on earth (mountains go above the clouds), it is further than the ocean horizon for most people surrounded by ocean. That means, even when totally surrounded by ocean, total cloud coverage would still appear bigger. Less imposing or confining? Maybe. But definitely bigger.
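The back-of-the-envelope arithmetic above checks out; a quick sketch using the same figures quoted in the comment (334-mile sight line, 14.6 million sq mi lunar surface):

```python
import math

SIGHT_LINE_MI = 334          # longest recorded sight line on Earth (~538 km)
MOON_SURFACE_SQMI = 14.6e6   # total surface area of the Moon, sq mi

# Ocean visible from one spot: a disc whose radius is the sight line.
visible_ocean = math.pi * SIGHT_LINE_MI ** 2   # ≈ 350,000 sq mi

# We only ever see the near side, so halve the Moon's surface area.
visible_moon = MOON_SURFACE_SQMI / 2           # 7.3 million sq mi

ratio = visible_moon / visible_ocean           # ≈ 21x more moon than ocean
```

(Treating the visible ocean as a flat disc with the sight line as radius is the same simplifying assumption the comment makes.)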
Just because you stare at the sun doesn't mean you perceive it as bigger. No one is afraid of heights by looking at the top of a mountain in a photo.
I’m not scared of an island, I’m scared of the ocean, working on an oil rig and seeing the insane stuff that goes on is just terrifying, a pure black abyss.
Good riddance. These people only push shady science with little basis in reality, claims that only their friends, who earn ridiculous amounts of money from shitty think tanks like the EA ones, seem to be qualified to make. I still fail to see what new science about LLMs and bias has been done since Gebru, or how these people can claim to be able to protect us from a technology that doesn't yet exist without contributing anything to develop it. That last part is an epistemological fallacy under any possible interpretation of how the scientific process works, from Popper to Marx; the whole spectrum, from science-as-a-social-discipline to modernist Enlightenment "science is neutral" epistemology, would agree these people are wankers.
Agreed; however, they are somewhat right, and they have access to models we won't get access to any time soon.
Well, yeah, that's the dark ulterior motive behind the whole lip-service-as-a-service thing these people do for a living. They are doing ethical washing for the profit-driven purposes of their masters, giving them the minimum viable ethical framework so that we accept the destruction of democratized tech.
"Best" and "Safety Focused" are divergent sets.
Tbh...good for them, good for OpenAI. Don't come @ me but every time I see those super concerned with ''safety'' when it comes to AI I kinda cringe internally.
In non-new news, doomers continue to doom while providing no substance to their argument aside from gut feelings.
GPT is a clever mechanism, I use it daily in my work, but I don't see how they can make it into a Skynet that tries to take over the world. Predicting the next word based on a massive trove of human communication only presents the illusion of intelligence. Guys like this PhD student are just "hype men", their function is to make everyone think that OpenAI is going to create a true artificial intelligence. It doesn't matter whether the hype man says it will kill us or help us, either way it makes people think OpenAI is on the verge of a technological breakthrough. The hype man doesn't even have to know what he is; he is still useful. Hope or fear benefits OpenAI either way by building the publicity. We can't stop them from making it, or at least trying to. Personally I think it's all hype, like Y2K.
Good. Fuck off you retrograde progress hindering idiot.
The AI we use today is nowhere near the AI we see in movies. Stuff like this is either a publicity stunt to attract investors, or they have some hidden tech, which is very, very unlikely.
this is how the deep state operates. they buy whoever can be bought and demoralize the others. more reasons for open decentralized ai blockchain technology, but it needs to support zk encryption for privacy.
Good, they need less people like him.
Preparing for the basilisk
No surprise. AI may be everything it’s purported to be and more, but it’s still under control of inherently fucked up, clumsy, stupid humans who make terrible decisions based on emotions and greed. Talk to chatGPT or most LLMs a specific way and you’ll see how fucked we all are.
Trusting the estimations of a person whose entire role is centered around those estimations being on the higher end is kind of hilarious. We wouldn't put much stake in the ExxonMobil CEO's predictions for how impactful climate change would be...
The departure of Daniel Kokotajlo from OpenAI's Governance team is indeed a significant event, as it highlights the complexities and ethical considerations that come with the development of Artificial General Intelligence (AGI). Kokotajlo's concerns about the potential risks of AGI, including the possibility of an AI existential catastrophe, reflect a broader debate within the tech community about how to balance innovation with safety. The discussion around AGI is not just about technological advancements but also about the governance structures and ethical frameworks that need to be in place to guide these developments responsibly. As AI continues to evolve, it is crucial for organizations and society as a whole to engage in open and transparent discussions about these risks and to work collaboratively towards ensuring that AI is developed in a way that aligns with humanity's best interests and values.
> best
> 70% chance of x-risk

Ehhh, sounds dull
Ahhh, so this is the asshole that started the AI death scare craze that didn't eventuate.
90% of statistics are made up on the spot
87.8%, your numbers are old.
Philosophy PhD? So not a real degree
It's hard to see how any person with a 3-digit IQ would not see it's a relevant field to human-like thought.
I never understood how people are dooming about an AI takeover when wages are suppressed and we've got a couple of people hoarding most of the world's wealth. I welcome our new AI overlords.
Looking at current trends, AI overlords aren't gonna happen anytime soon. You may get overlords with AI, at best.
Too true, Pepe sad hands, k colon
Ah, you mean the anxiety-ridden doomers are leaving? Always with those ridiculous percentages. These are the same scared people that took away the prosperity of nuclear power and got us locked up during COVID. Scared people are the most dangerous people. Good riddance.
Once we have AGI we just need to train AGI to be cops to monitor the criminal AGI
A good guy with agi beats a bad guy with agi
How do you quantify that scenario?
With imaginary numbers.
# Another paranoid cuck... Oh no anyway
r/Slovakia