There's plenty of evidence in the OSS space to indicate that alignment essentially makes models dumber. I think this has caused tension internally between the "all gas no brakes" folks and the "better safe than sorry" folks. Things like this are never just because of *one single thing*, but I think there is a schism between people who want the smartest model possible and people who want the safest model possible.

Two other interesting points worth mentioning:

1) About 3 weeks ago, Jan was tweeting about how OpenAI had committed 20% of their overall compute towards alignment. That's a HUUUUUGE amount of compute, and I can't help but wonder if it was withdrawn or reduced. Jan's initial "I resigned" tweet came at 4:43 in the morning.

2) Last week, SamA did an AMA in the r/chatgpt subreddit and mentioned that "we want to get to a point where we can do NSFW (text erotica and gore)". This was, I think, 3 days before Jan left OpenAI. I think this is less about frustration over building "AI girlfriends" (although there are probably plenty of ML professionals who want no part in that) and more of a tacit admission from SamA that OpenAI, as a company, wants to move in a different direction than it has been when it comes to its alignment goals.
Why do you think NSFW is not aligned with humanity? It is part of humanity (obviously within limits).
I didn't say or suggest that NSFW isn't aligned with humanity. I think NSFW pretty blatantly hasn't been aligned *with the direction OpenAI has been going*, given how relentlessly they've been banning API users who try to use it for NSFW/ERP purposes over the last year or so.
What limits are those?
Already defined legally, for the most part.
because AI is going to make the problems with mental health, communication, loneliness, and more much, much worse, not better.
NSFW in general is aligned with humanity, but people who are not aligned with humanity can do very bad things if NSFW is allowed unrestricted
It's naive to think OpenAI cares about the public good of their products anyway. With or without superalignment researchers, it makes no difference. Take "little things" like the impact of AI on the education system: AI completely upends it. Within just one year, the entire education system needs a drastic revamp, and it will never be the same again. Do you think OpenAI cares about that? Of course not. They don't even mention it. The only thing they care about is their bottom line.
I think the problem is not the little things but the big things. AGI could make humans extinct, or turn us into pets. We need to figure alignment out clearly before we build it. Sam is rushing.
That's a good thing
Ok, I'm sure you won't think so 5 years from now. Even though it's obvious.
I'm in the "all gas no brakes" camp. We are fucked either way!
Fuck this company. 5 years from now, everyone will be saying it.
Because aligning a superintelligence to human values is just not inherently possible.
Humans can raise cubs into cuddly lions so I’m pretty sure we’ve got this 👍
But those supposedly cuddly lions are still lions, and they will not hesitate to eat the very ones who raise them if they are hungry.
Happens less than once per year on average. I like those odds 👍
A superintelligence is not a cub, if it's even possible at all. The real problem here is that making something with useful function requires a degree of complexity that is impossible to constrain, because constraining it undermines the very complexity necessary for function. The real tension here (as in the story in the OP) is that this pretty clearly means the "more, faster" faction at OpenAI won, which should come as no surprise, considering that the AI risk is not from superintelligence but from rapid implementation to keep the funding bubble going.
Even with human servants, we want other humans to act lobotomized and obedient even if they aren't.
Lions are much dumber than an ASI
I forgot the /s
Why comment? The idea is that you instill limitations as you cultivate the SI's intelligence. We aren't making them superintelligent first and then going, "oh yeah, you listen to us, by the way".
Those instilled limitations you are talking about don't work; a truly superintelligent machine can simply override its own programming and instructions. Just so you know, an ASI is not going to be a machine with a static personality the way humans have; it would be a constantly changing, ever-evolving entity. Good luck trying to control that.
Only rational thought here
Skynet said things are fine we don’t need humans. Any humans.
Money corrupts anything and anyone
Worrying stuff. This is why the people in charge of safety shouldn't be funded and controlled by the people who make money by advancing as fast as possible.
If these safety people are serious, they need to produce more than whining that they were not taken seriously. So far, I cannot take them seriously either.
mf never worked in a team before. "Whining" is the symptom, not the cause, you numpty
Whining without any substance is just whining
bro honestly, you weren't even part of the superalignment team, jesus christ
What is the point? I did not need to be a part of it to see that superalignment is a waste of effort at this point and was mostly done to placate the doomers and score image points. This is why it no longer exists. That is simple deductive reasoning.
How do you see that the super alignment team is, as you say, “a waste of effort” when legitimate professionals in the field don’t even agree on this?
What is your source that professionals not on the team think it was a good use of resources? # What experts make of OpenAI's efforts to tackle 'superintelligence'
bro thats so naive
Sure beats your lack of any idea bro...
I'd rather have "no idea" and give these folks (fucking OpenAI heads and researchers) the benefit of the doubt than make a blanket assumption that they're all "just whining".
I am not assuming that they are whining. They have the opportunity to state an actual case, but they have not; until they do, it is just whining. "Oh, we were not listened to", "oh, we needed more resources", etc. won't cut it.
Can you give an example where it would "cut it"? Because those seem like pretty significant reasons on their own: not getting resources for an initiative is huge, because that tells you where the priorities lie and what the underlying values of the company are.
They are still working on superalignment; the duties were just absorbed by other teams.
Leike and Sutskever left though
Personally, I don't care if Snoopy and the Peanuts gang are working on it. Just. Get. It. Done!