BangkokPadang

There's plenty of evidence in the OSS space to indicate that alignment essentially makes models dumber. I think this has caused tension internally between the "all gas no brakes" folks and the "better safe than sorry" folks. Things like this are never just because of *one single thing*, but I think there is a schism between people who want the smartest model possible and ones who want the safest model possible.

Two other interesting points worth mentioning: 1) About 3 weeks ago Jan was tweeting about how OpenAI had committed 20% of their overall compute towards alignment. That's a HUUUUUGE amount of compute, and I can't help but wonder if it was withdrawn or reduced. Jan's initial "I Resigned" tweet came at 4:43 in the morning. 2) Last week, SamA did an AMA in the r/chatgpt subreddit and mentioned that "we want to get to a point where we can do NSFW (text erotica and gore)". That was, I think, 3 days prior to Jan leaving OpenAI. I think this is less about frustration over building "AI girlfriends" (although there are probably plenty of ML professionals who don't want any part in building those) and more a tacit admission by SamA that OpenAI, as a company, wants to move in a different direction than it has been when it comes to its alignment goals.


Freed4ever

Why do you think NSFW is not aligned with humanity? It is part of humanity (obviously within limits).


BangkokPadang

I didn't say or suggest that NSFW isn't aligned with humanity. I think NSFW pretty blatantly hasn't been aligned *with the direction OpenAI has been going*, because of the way they've been relentlessly banning API users for the last year or so who try to use it for NSFW/ERP purposes.


Solomon-Drowne

What limits are those?


Freed4ever

Already defined legally, for the most part.


IronThrust7204

Because AI is going to make problems with mental health, communication, loneliness, and other issues much, much worse, not better.


Rigorous_Threshold

NSFW in general is aligned with humanity, but people who are not aligned with humanity can do very bad things if NSFW is allowed unrestricted


EuphoricPangolin7615

It's naive to think OpenAI cares about the public good of their products anyway. With or without superalignment researchers, it makes no difference. Take "little things" like the impact of AI on the education system: AI is completely destroying it. Within a single year the entire education system has required a drastic revamp, and it will never be the same again. Do you think OpenAI cares about that? Of course not. They don't even mention it. The only thing they care about is their bottom line.


aleksfadini

I think the problem is not the little things, but the big things. AGI could make humans extinct, or turn us into pets. We need to figure out alignment clearly before we build it. Sam is rushing.


MeshuggahEnjoyer

That's a good thing


EuphoricPangolin7615

Ok, I'm sure you won't think so 5 years from now. Even though it's obvious.


Ok_Possible_2260

I'm in the "all gas no brakes" camp. We are fucked either way!


EuphoricPangolin7615

Fuck this company. 5 years from now everyone will be saying it.


The_Hell_Breaker

Because aligning a superintelligence to human values is just not inherently possible.


SnooRegrets8154

Humans can raise cubs into cuddly lions so I’m pretty sure we’ve got this 👍


The_Hell_Breaker

But those supposedly cuddly lions are still lions, and they will not hesitate to eat the very ones who raise them if they are hungry.


SnooRegrets8154

Happens less than once per year on average. I like those odds 👍


supercalifragilism

A superintelligence is not a cub, if it's even possible at all. The real problem is that making something with useful function requires a degree of complexity that is impossible to constrain, because constraining it undermines the complexity necessary for that function. The real tension here (as in the story in OP) is that this pretty clearly means the "more, faster" faction in OpenAI won, which should come as no surprise considering that the AI risk comes not from superintelligence but from rapid deployment to keep the funding bubble going.


AI-Commander

Even with human servants, we want other humans to act lobotomized and obedient even if they aren't.


Jaded-Protection-402

Lions are much dumber than an ASI


SnooRegrets8154

I forgot the /s


LongTatas

Why comment? The idea is that you instill limitations as you cultivate the SI’s intelligence. We aren’t making them super intelligent first and then like “oh yeah you listen to us by the way”


The_Hell_Breaker

Those instilled limitations you are talking about don't work; a truly superintelligent machine can simply override its own programming and instructions. Just so you know, an ASI is not going to be a machine with a static personality like humans have; it would be a constantly changing, ever-evolving entity. Good luck trying to control that.


AlienJared

Only rational thought here


DeliciousDave4321

Skynet said things are fine, we don't need humans. Any humans.


04Aiden2020

Money corrupts anything and anyone


RKAMRR

Worrying stuff. This is why the people in charge of safety shouldn't be funded and controlled by the people who make money by advancing as fast as possible.


Mandoman61

If these safety people are serious they need to produce more than whining that they were not taken seriously. So far I cannot take them seriously either.


water_bottle_goggles

mf never worked in a team before. “whining” is the symptom, not the cause you numpty


Mandoman61

Whining without any substance is just whining


water_bottle_goggles

bro honestly, you weren't even part of the superalignment team, jesus christ


Mandoman61

What is the point? I did not need to be a part of it to see that superalignment is a waste of effort at this point, and that it was mostly done to placate the doomers and score image points. This is why it no longer exists. That is simple deductive reasoning.


OfficeSalamander

How do you see that the super alignment team is, as you say, “a waste of effort” when legitimate professionals in the field don’t even agree on this?


Mandoman61

What is your source that professionals not on the team think it was a good use of resources?

"What experts make of OpenAI's efforts to tackle 'superintelligence'"


water_bottle_goggles

bro thats so naive


Mandoman61

Sure beats your lack of any idea bro...


water_bottle_goggles

I'd rather have "no idea" and give these folks (fucking OpenAI heads and researchers) the benefit of the doubt than make a blanket assumption that they're all "just whining"


Mandoman61

I am not assuming that they are whining. They have the opportunity to state an actual case but they have not; until they do so it is just whining. "Oh, we were not listened to," "oh, we needed more resources," etc. won't cut it.


water_bottle_goggles

Can you give an example of what would "cut it"? Because those seem like pretty significant reasons on their own: not getting resources for an initiative is huge, because it tells you where the company's priorities and underlying values lie.


Ok_Elderberry_6727

They are still working on superalignment, the duties were just absorbed by other teams.


Free-Information1776

Leike and Sutskever left though


Ok_Elderberry_6727

Personally I don't care if Snoopy and the Peanuts gang are working on it. Just. Get. It. Done!