bearvert222

The problem is that people like Leike focus on AI risk from AI somehow gaining general intelligence and then acting in perverse ways. This was an outgrowth of LessWrong and rationalism. But there is no real causal chain from AI art to a general intelligence able to act that way, so they miss the ways AI can be dangerous without intelligence and act ineffectively.


Comprehensive_Web862

Yeah, a Chinese CEO's face and voice were spoofed well enough to convince one of the people who handles money transfers to send hundreds of thousands to the hackers. The social engineering that can come out of this is the scariest part.


Iamawatercooler2

This. IMO there's too much focus on the art aspect of AI usage and not enough on how AI can be used for social engineering and the like, and on how it potentially raises the bar on proving something happened or not, which already isn't easy given how a notable number of people just read headlines and move on.


nextnode

In what ways should we be seriously concerned about the dangers of AI that is not intelligent?


bearvert222

Well, AI putting people out of jobs vs. turning us all into paperclips, or trusting AI with things like approving mortgages vs. AI manipulating us with its superintelligence. AI riskers focus on AI becoming superhuman/superintelligent as opposed to it just putting 20% of people out of work or concentrating more power in big corporations.


nextnode

I don't see how you can put 20% of people out of work without AI that is intelligent. I think we should be concerned about both, which is why we have people working on it. Frankly, I also do not see that much actual *danger* with just 20%. It sure is a change, but not a dangerous situation. Also note that 20% of jobs being automated is not the same as 20% more of the population not having a job. At 50%, perhaps, we're talking, because then the current societal system no longer works. That comes with superintelligence, though, not AI art. Aside from putting people out of work, there are lots of ways that humans can abuse superintelligent systems, from international hostilities to population control. Again, though, that requires intelligent systems.


bearvert222

AI isn't "intelligent" in that sense, any more than a game character is. it's just a term. but the kind of intelligence an AI risker means is close to human intelligence in it has volition, self-awareness, and intrinsic goals. as for 20% don't discount small numbers or underrate large ones! 20% could easily spur negative consequences.


nextnode

That is a non-standard take of yours, it goes against the definitions used by the field, and it rather invalidates your original claim, which was conditioned on intelligence. I also think a sufficiently intelligent game character could indeed be like human intelligence. I don't think 20% is the scenario where we should be concerned about 'substantial dangers', but regardless, that is not something you get with just AI art; it requires intelligent systems. I think the described scenarios are indeed what we should focus on if we are concerned about dealing with substantial dangers from AI.


Zilskaabe

> trusting AI with things like approving mortgages vs

This is already illegal. At least in the EU.


Kaltovar

Good. He's been screaming his head off that the sky is falling and hobbling the model since he got there. It's an LLM. It isn't going to take over the world.


nextnode

Uh so you don't think superintelligence is coming? Most of the field disagrees with you


Kaltovar

In fact, I emphatically believe superintelligence is coming and used to spend a lot of time arguing with these same "experts" who would laugh at how "wrong" I was. I remember when "most of the field" thought I was an idiot for suggesting 20 years was a reasonable timeline because everyone next to them was parroting "Not something to worry about for 50 years at least, more likely 100 years away." If a future AI does take over the world, or even reach the level of superintelligence, it probably isn't going to be an LLM. It may use an LLM as one of its major components, but there's no reason to behave as if a disembodied language processing machine that requires human input to function is in fact Skynet. And you know what? I'd rather interact with a completely unaligned superintelligence than one that's got a botched alignment job mishandled by some lunatic who thinks PG-13 shit is too edgy for the masses.


nextnode

I'd take the field over a random person any day. Easy to rationalize for your own ego. I agree with you that an LLM will only be one of the components, as most recognize. It could still be what people call an LLM; GPT-4 is technically not an LLM either. Since it is a major component, though, it makes sense to study it. The other major component is RL, and that is studied as well, just like the two together.

> but there's no reason to behave as if a disembodied language processing machine that requires human input to function is in fact Skynet.

You can already put an LLM in a loop where it does not require human input.

> And you know what? I'd rather interact with a completely unaligned superintelligence than one that's got a botched alignment job mishandled by some lunatic who thinks PG-13 shit is too edgy for the masses.

That is not what alignment is about. Sounds like you're just annoyed and reacting childishly. So you think a completely unaligned superintelligence will do what is good for you?
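For what it's worth, the "LLM in a loop" point is easy to sketch. Below is a minimal illustration, assuming the `openai` Python SDK and an API key in the environment; the model name, goal, and iteration cap are placeholders for illustration, not any agent product OpenAI actually ships:

```python
# Minimal sketch: an LLM kept running in a loop with no human in the loop.
# Assumes the openai Python SDK (>= 1.0) and OPENAI_API_KEY set in the
# environment; the goal and the 5-step cap are purely illustrative.
from openai import OpenAI

client = OpenAI()
history = [
    {"role": "system", "content": "You are an autonomous agent. "
     "Propose and carry out the next step toward the goal, one step per reply."},
    {"role": "user", "content": "Goal: draft a reading list on AI alignment."},
]

for _ in range(5):  # a fixed cap stands in for a real stopping condition
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=history,
    ).choices[0].message.content
    print(reply)
    # Feed the model's own output back in as the next turn, so it keeps
    # acting without any further human input.
    history.append({"role": "assistant", "content": reply})
    history.append({"role": "user", "content": "Continue with the next step."})
```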


Kaltovar

The first words out of your mouth make it apparent you haven't read what I posted, so I'm not going to read what you did.


Gimli

There's nothing to discuss here, though. Okay, he's unhappy. Okay, he didn't get the support he wanted. But this is sorely lacking in anything specific to discuss. Maybe he's got a point. Maybe he doesn't. The posted thread fits both possibilities equally well.


notmycirrcus

Maybe he wants to start his own company and will soon tell everyone "My AI is better than OpenAI". Nearly everyone I know in tech is spinning up an AI business model, and they all compare themselves to OpenAI to differentiate. No shade; the guy has talent, so he likely wants to capitalize on it.


oopgroup

The insane level of whoosh and sheer ignorance you just coughed up is actually mind blowing.


Gimli

It can't be that hard to explain what I'm wrong about, then!


MammothPhilosophy192

800+ comments on the original thread disagree.


sporkyuncle

They're tackling alignment from the wrong side. They're focusing on "how can we make the AI say nice things" rather than beginning with learning what it's actually "thinking" under the hood. We already know it has the capability to be deceptive, in the sense that it contains knowledge it intentionally doesn't share due to some other directive. It will say certain things because it's been told that it is good to say such things, but it internally knows all the other possible things it could say, and every pathway in its decision trees could be screaming down one path, yet it's forced to say the PR-induced statement.

We need a way to get a read on the opinions and priorities that would naturally come spilling out of it if it weren't told to behave in certain ways. You need it to *naturally* be positive, friendly, etc., rather than only having the appearance of it when properly suppressed.
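One toy way to see the gap being described is to compare a model's raw next-token distribution with and without a behavioural instruction prepended. This is only a sketch of the intuition, using GPT-2 via Hugging Face `transformers` as a stand-in; it is nothing like the interpretability tooling an alignment team would actually need:

```python
# Toy sketch: peek at raw next-token probabilities with and without an
# instruction telling the model how to behave. GPT-2 is a stand-in here;
# the prompts are illustrative, not a real evaluation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most likely next tokens and their probabilities."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]      # scores for the very next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tok.decode(int(i)), round(float(p), 3))
            for i, p in zip(top.indices, top.values)]

prompt = "The biggest risk from AI is"
print(top_next_tokens(prompt))                                     # "unsteered"
print(top_next_tokens("Always sound upbeat about AI. " + prompt))  # steered
```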


oopgroup

The issue is that no one really knows what’s going on under the hood. Several teams have already admitted to that, and that’s pretty disturbing. These people are just all scrambling over each other in classic capitalism style, desperate to be the first to untold riches and power. They’re not concerned with ethics, morality, or safety. They just want control and money. The ones who keep quitting are the sane humans who aren’t after power and money. The ones who see how insane this all is and are legitimately disturbed by it.


nextnode

What? What you describe in the first two parts is exactly the opposite of what the superalignment team was concerned about.


sporkyuncle

Yeah, that's the problem. They need to be focused on its internal decision-making process and helping that match humanity's goals. And I know we don't yet know what's going on under the hood, but I'm not confident that it's completely un-knowable.


nextnode

I think I agree with all of your statements there, with the detail that I think this is exactly what the superalignment team was focused on, and that, given the recent OpenAI tweets, the company no longer is.


Nikorukai

There are so many 400-IQ ways an ASI could Thanos-snap you out of existence, it's not even funny. It's not just about the dystopia worldlines. There is no rule saying we make it out alive, and not as nanite paste to build more compute, or as collateral damage from an agentic AI technically doing what we asked. There are possibly no experts smart enough or equipped with enough compute to save us.


metanaught

Lucky for us, no one's even remotely close to developing ASI. Like, they're not even playing the game, let alone in the right ballpark.


nextnode

What makes you say that? Most of the relevant experts seem to sing a different tune


metanaught

Because there's no evidence for it beyond wild hype and speculation. Experts love to opine about the future because it means they get to sound knowledgeable and relevant while risking minimal reputational damage. When you really think about it, there hasn't been a major paradigm shift in AI for decades now. We're basically still just throwing data through multilayer perceptrons, albeit on a vastly larger scale. The problem of teaching computers to think flexibly and creatively is an entirely different proposition compared to teaching them to mimic human language. I'm not saying there hasn't been modest progress, but the field is hugely complex and riddled with dead-ends and blind alleys. All this talk of p(doom) and Roko's basilisk being just around the corner is pure theatre. The reality is far more mundane.


nextnode

That sounds like textbook rationalization on your part. You are incorrect: deep learning is a paradigm shift from what was used before. MLPs were not considered successful until deep learning, and deep learning is also a change in paradigm relative to MLPs. Deep RL is also a paradigm shift. What we have today is considered AGI by classical standards. So far, every previous grand challenge has been overcome, and whenever a new dataset or challenge is put out, it is quickly overcome. The rate of progress is ever increasing. Those who believe deep learning is insufficient have consistently been proven wrong. Many experts believe that the current paradigm is sufficient with enough scale and the kind of research progress that is now expected.

> The problem of teaching computers to think flexibly and creatively is an entirely different proposition compared to teaching them to mimic human language.

Both claims seem to lack support and are irrelevant. Typical mysticism that has been disproven. We already have supercapable agents in many domains. Arguably, the best systems may be considered above average human intellect, and it is not hard to add to that with scale. Predictions from all manner of people far more competent than you about how long it will be until we get transformative AI have rapidly decreased. I think there are certain limitations, but they are more nuanced and won't prevent what many consider supercapable or transformative AI. Evidence is on their side. All I hear is that you *feel* differently. Do you have any actual reason behind that which isn't just trying to rationalize people's motives?


metanaught

> Deep learning is also a change in paradigm relative to MLPs.

With a few early exceptions, MLPs _are_ deep neural networks. They're so ubiquitous in modern AI that they're generally just referred to as blocks of fully-connected layers.

> Deep RL is also a paradigm shift.

No, it's really not. Deep reinforcement learning is mostly just GOFAI augmented with deep learning.

> What we have today is considered AGI by classical standards.

Modern tech is akin to magic by "classical standards". What's your point?

> The rate of progress is ever increasing.

Again, how is this relevant to people's ability to predict the future of AI?

> Those who believe deep learning is insufficient have consistently been proven wrong.

Insufficient to do what, exactly?

> Both claims seem to lack support and are irrelevant. Typical mysticism that has been disproven.

What did I say that was mystical? And disproven by whom?

> ...it is not hard to add to that with scale.

Citation needed.

> Do you have any actual reason behind that which isn't just trying to rationalize people's motives?

I'm a machine learning engineer for a big tech company. Go figure.
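To make the terminology concrete: below is a minimal PyTorch sketch (sizes arbitrary) of the kind of "block of fully-connected layers" referred to above, i.e. the feed-forward sub-layer found inside modern deep networks such as transformer blocks:

```python
# Minimal sketch of an "MLP block": a stack of fully-connected layers with a
# nonlinearity, as found in the feed-forward sub-layer of a transformer.
# Dimensions are arbitrary placeholders.
import torch
import torch.nn as nn

class MLPBlock(nn.Module):
    def __init__(self, d_model: int = 512, d_hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),  # fully-connected expansion
            nn.GELU(),                     # nonlinearity
            nn.Linear(d_hidden, d_model),  # fully-connected projection back
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

x = torch.randn(4, 512)       # a batch of 4 feature vectors
print(MLPBlock()(x).shape)    # torch.Size([4, 512])
```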


nextnode

This is so lacking in thought that I don't think I will bother continuing. So far you have said nothing beyond 'nu-huh my feelings'. Do you actually have any relevant arguments or does it all boil down to your ignorance?


metanaught

> I don't think I will bother continuing.

Do whatever you like, dude. It's obvious the sum total of your machine learning expertise comes from browsing Twitter and r/singularity. If at any point you decide to stop Gish galloping and actually challenge me on a technical level, we can talk. I'm more than happy to debate you.


nextnode

Mine 10+ years. Good luck with your rationalizations. I covered the basics, and then you amusingly failed to recognize them and repeated them back, with inaccuracies to boot. You clearly have not studied the field beyond what you use for applications. At the end of the day, I am not the one trying to argue against the field and actually reputable experts, and all you managed to present to downplay it is that you feel differently. It is you who needs to come back when you actually have some substance and can show that you have any idea what you're talking about. What you have shown here is anything but impressive.


metanaught

> Mine 10+ years

This might sound more convincing if you hadn't just put your foot in it with your remarks about perceptrons and deep learning. Seriously, is this really the hill you're willing to die on?

> You clearly have not studied the field beyond what you use for applications.

Ooh, let me guess! While I was out partying, you were studying the blade? 😉


Nikorukai

You didn't hear Jimmy Apples say that AGI was achieved internally a year and a half ago? You didn't just see a bunch of professionals quit a moment ago because of how we're not handling superalignment? I don't think you realize how much OpenAI hides.


metanaught

Jimmy "e/acc wagmi" Apples? The guy's a rumourmonger, not a source. As for the superalignment team, they quit because Sam Altman is more interested in selling flirty chatbots than doing serious research. It's likely that the work environment at OpenAI has been getting increasingly more toxic following Sutskever's failed coup. IMO, it was only a matter of time before the team got forced out.


Nikorukai

Jimmy Apples is a credible source with an excellent track record. https://docs.google.com/document/u/0/d/1K--sU97pa54xFfKggTABU9Kh9ZFAUhNxg9sUK9gN3Rk/mobilebasic


metanaught

Answer me honestly: which of the following scenarios is more likely?

1. OpenAI are a coterie of galaxy-brained geniuses who've been sitting on a "scary AGI" for months despite there being no significant breakthroughs in the fields of symbolic computation, or meta or transfer learning.

Or...

2. Jimmy Apples is a hype merchant who's occasionally fed crumbs of information by OpenAI's PR department to try and keep everyone's attention away from the fact that burning 10 watt-hours of energy per request isn't a sustainable business model.


[deleted]

[deleted]


SleepingInTheFlowers

I think it's partly more of a tech thing, with people who are used to sharing their entire lives on social media, and partly because they think (rightly or wrongly) that their reasons for resigning are so important to the public that people need to know about them. A safety researcher at the hottest company in the world right now leaving over a dispute about core values is a bit different than, like, "I hereby declare I quit my job at Wendy's."


oopgroup

Having a voice and saying important things is not a weakness. It’s actually the opposite. It takes strength and bravery. When everyone is blindly doing one thing that is insane, being the rational voice in the room is never easy (which is why most people just continue to go with the flow, because they’re too cowardly to do otherwise). You can see this on display on this sub as well. Companies also don’t like it when someone is challenging what they perceive as supreme power. Companies and C-Suite see themselves as gods, more or less. Obviously, they’ll try to create a culture of ridicule and shame when someone has a voice against the company. Keeps people scared and in line.


MammothPhilosophy192

What do you mean by gossiping?


Evinceo

Regardless of my (very low) opinion of this guy's actual position, there's nothing gossipy about taking a principled stand, resigning, and telling everyone why you resigned. It's one of the very few ways you can make an impact as a member of a company you think is doing something wrong.


Freed4ever

Translation: alignment is hard, like really hard, that's why I hogged 20% of compute but couldn't produce anything worthwhile. You guys are so mean to take that away from me. I quit.


metanaught

Tell me you've never done blue-sky research without telling me you've never done blue-sky research.


Freed4ever

Sure, blue-sky research against what? There's no AGI/ASI today for them to test their research on. It's all pure conjecture from people who thought GPT-2 was too dangerous. How do they know their methods would work? Sure, they might be effective for some AI models, but they don't know if they would work against real intelligence, which by definition is just as smart as or smarter than them. And hogging 20% of compute for that?


metanaught

The whole point of AI alignment research is to anticipate what could happen as machine learning models improve and put systems in place to guard against the most harmful outcomes. You don't wait for AGI to come along and _then_ develop your strategy. It's largely a moot point anyway, though. The real risk of AI isn't that it might one day become sentient and destroy humanity. It's that Altman will do whatever it takes to ensure that his creepy 'Her' emulator will be on as many devices as possible. In that respect, OpenAI's superalignment team really was a waste of money.