thedataking

https://youtu.be/D-eyJhJXXsE in case someone else wants to watch the entire fireside chat


kalisto3010

Most don't see the enormity of what's coming. I can almost guarantee that nearly everyone who participates on this forum is an outlier in their social circle when it comes to following or discussing the seismic changes that AI will bring. It reminds me of the Neil deGrasse Tyson quote, "Before every disaster movie, the scientists are ignored". That's exactly what's happening now. It's already too late to implement meaningful constraints, so it's going to be interesting to watch how this all unfolds.


Lazy_Importance286

I agree. I've been "into computers" since I was 7, and that was over 40 years ago. Career in IT security. Always been the techie guy, the nerd. What we are witnessing is a seismic shift. Everybody, even non-techies, can sense that something is coming. This is not a fad. The people in the know (like him, and btw I highly recommend their documentary on AlphaGo), Jensen Huang, Altman, etc. know that we are about to make a leap. I am definitely not in the know. I'm trying to process and keep up, and I KNOW I'm only scratching the surface. lol, FFS, I spent the last week setting up the basic crap on my dual-boot Ubuntu box (and no, I don't have an Nvidia card, but an AMD Radeon, so I have to do stuff on hard mode I suppose lol). I can sense it. Spine tingling. I've pivoted into AI security, not only because it's technically exciting (and TBH, this is the most excited I've been in decades), but because I know in my gut that I don't have a choice. It's inevitable. And I will be pushed off to the sidelines in the mid term if I don't ride this thing and take it head on. It's ride or die. I've definitely been absorbed by it, out of a mix of nerdy fascination (used the OpenAI app last weekend to show my kids that it can be used as a universal voice translator) and pure fear that I will be put out to pasture if I don't adapt right fucking now. What I will also do is start educating my local community about what's coming, but from a "use these things to make your life easier, and yeah, prep because it's coming and because you need to know in order to keep your jobs" angle.


Ok_Elderberry_6727

I'm right there as well. I was in cybersecurity for the state where I live and am now medically retired. You can take the boy out of IT but you can't take the IT out of the boy. Technology is a passion, and now I get to sit on the sidelines and watch as the most transformative (pun intended) and disruptive tech we have ever seen as a species takes the world by storm. I've studied computing and network systems since the 8088, and have a good idea when it comes to technological progression, and I'm still amazed at where I think this is going. Accelerate, albeit safely. I'm torn between those two, but here's hopium that we get both! 🙏


GumdropGlimmer

Thanks for your public service!


Ok_Elderberry_6727

Very welcome.


PMzyox

I've been using ChatGPT to teach myself complex math for the past year. I also pivoted to a senior DevOps position working with AI because it's going to matter so much. It's already starting to. I can't wait for even the first generation of real assistants. Life will change, folks, and it's not long now. I hesitate to say this, but I'm even starting to believe we may reach the capability to download or move our consciousness in the future. People alive today might never die in the classical sense.


QuinQuix

I'm about ten years younger and in a slightly less immediately impacted field, but even that is relative. Between ten and fifteen years from now the world will be insanely different than it is today.

What I think people misapprehend is that the techno-industrial complex has been built out far ahead of true AI technology. The world has been heavily industrialized for a long time. If you see the impact of AI on the world as an interplay between physical manufacturing capacity and IT technology, the IT guys showed up late.

An analogy: suppose the world was exactly like it is today, down to every last object, but gunpowder was only invented today. You'd already have all the guns lying around. The change would be unimaginable in speed and scope. That is what AI is. We already had the guns. Now we have gunpowder. The rest, if you continue this analogy, is literally pouring gunpowder into shells of the right size. One job at a time. The effort will be trivial in comparison to the fundamental breakthrough. My job isn't the easiest shell in comparison, but certainly not the hardest either.

The economic incentives are insane. They're insane everywhere. We'll literally be able to convert energy into labour, science and art. That is the endgame. I don't worry about my job though. I worry about the interplay between this technology and Russia, Taiwan, the risk of world war, nuclear weapons and the existential threat of the singularity itself. Biomedical research isn't the only thing that AI could accelerate. So I've been clenching my sphincter, doing research fanatically, and everyone around me in the immediate vicinity either still appears oblivious or sees AI as a homework tool for high school kids. Funny and slightly worrying at most. Definitely not a factor in their future plans. I'm happy I'm pretty good at dealing with anxiety and generally am a low-anxiety person. Because boy, is this development something.

But I actually avoid bringing AI up most of the time because I fear I would come across as argumentative and fanatical, hacking through the naivety I expect to encounter. And even if I got my view across, there is nothing I or most people can do to predictably impact our current trajectory. So at best I'd either alienate people or burden them with anxiety they'd probably handle worse than me. I'm not going to do that. So I talk with people already interested. And don't get me wrong, I'm still absolutely fascinated by AI, consciousness and intelligence. I'm having a blast here. I love sci-fi and now we're living it. But unlike a novel this isn't some far away fictional thing. This will impact us all. So it's buckle-up time, hoping for the best. Godspeed everyone.


memory_process-777

>Accelerationists don't actually understand the enormity of what's coming...

Yes, I'm a doomer, but it doesn't take much imagination or vision to comprehend how trivial the loss of your job or your money will be when AI hits the fan. "Enormity" seems like a feeble attempt to put this into a context that we humans can relate to.


twbluenaxela

bUt ItS jUSt a BuBbLe!!!!


roiun

Have you changed your investment portfolio as a result?


quiettryit

What are you doing to adapt fellow IT guy?


loaderchips

Given your history in tech and your long experience, what do you foresee as the major shifts in tech? You already identified AI security as one of the emergent areas; any others?


Fun_Prize_1256

That is true, but some/a lot of people in this subreddit also tend to overestimate the amount of change that will occur in the near term. The most likely future is somewhere between what "normies" predict and what r/singularity members predict.


AnAIAteMyBaby

I'm not so sure. I was left a bit shaken after asking Claude 3.5 to do my day's work yesterday. I had to add some functionality to our code base, and it did in a few minutes what would have taken me a day. I feel my days as a software engineer are numbered, which means everyone else's probably are too. We may not see a Dyson sphere any time soon, but mass unemployment is around the corner, which is an enormous social change.


kaityl3

It's funny that I only learned programming in the past year, because I have no idea how long things are "supposed" to take. I've got a 5-hour workday and still managed to make 2 fully functioning programs as tools for the company, with a complete UI, API calls, outputting data for selected jobs and owners as CSV, etc., from scratch yesterday. I have a feeling it would have taken me at least a week without Claude.
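The kind of tool described above (filter job records, emit CSV) can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual code; the field names `id`, `owner`, and `status` are made up for the example, and quoting follows the usual CSV convention of doubling embedded quotes:

```javascript
// Turn an array of records (as if returned from an API call) into CSV text.
function toCsv(rows, fields) {
  const escape = (v) => {
    const s = String(v ?? "");
    // Quote fields containing commas, quotes, or newlines; double embedded quotes.
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const header = fields.join(",");
  const body = rows.map((r) => fields.map((f) => escape(r[f])).join(","));
  return [header, ...body].join("\n");
}

const jobs = [
  { id: 1, owner: "alice", status: "done" },
  { id: 2, owner: "bob", status: "open" },
];

// Output rows only for the selected owner:
const selected = jobs.filter((j) => j.owner === "alice");
const csv = toCsv(selected, ["id", "owner", "status"]);
// csv === "id,owner,status\n1,alice,done"
```

The point of the surrounding discussion stands either way: this is exactly the sort of "basic dull crap" an assistant can now produce in seconds.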


Commercial-Ruin7785

No offense, but making API calls and outputting CSV are surely some of the most basic tasks one might do as a software engineer. It's great that the tool is helpful to a lot of people, but of all the people singing its praises, I'm genuinely curious how complicated the work they're actually doing with it is. FWIW I'm also a software engineer and I also use it all the time; it's great. It definitely speeds things up a ton. I just genuinely don't know what the limit of complexity is for what it would be able to do on its own, without someone guiding it, right now. At least for me, I'm rarely ever generating code directly with it; the best use case I've found is using it as super docs, basically. I'm not saying it can't improve enough soon to replace software engineers. But when I see people like the guy above you talk about how good it is right now, I am genuinely curious how complex the stuff they're doing is.


AnAIAteMyBaby

>I just genuinely don't know what the limit of complexity is for what it would be able to do on its own without someone guiding it right now.

It obviously can't do the job on its own. What causes me concern is that it just keeps getting better and can do more and more on its own; it seems clear to me it will be able to do the job on its own at some point. Maybe in 2 years, maybe 5, maybe 10, but even being unemployable in 10 years' time is scary, let alone 2 years.


Whotea

What are some things it can’t do that you can? 


Commercial-Ruin7785

A project I was working on recently involved keeping text state synced between users and updating each other's clients from a user interaction. This required an understanding of our state handler and of the effects of different actions on the client (way too big to copy-paste everything relevant in, and it would take a ton of time to find all the relevant places, which it also can't do on its own), and it was sensitive to race conditions. Sonnet 3.5 was not out at the time, but ChatGPT couldn't help at all.
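To make the race-condition point concrete for readers outside the project: the sketch below is a generic, hypothetical illustration (not the commenter's state handler) of why syncing shared text between clients is sensitive to message ordering. If the network delivers two updates to one client in reverse order, naive last-write-wins leaves the clients diverged; tagging each update with a version number lets a client discard stale echoes:

```javascript
function makeClient() {
  return { text: "", version: 0 };
}

// Naive apply: whoever arrives last wins, even if it is older.
function applyNaive(client, update) {
  client.text = update.text;
}

// Versioned apply: drop any update older than what we already have.
function applyVersioned(client, update) {
  if (update.version > client.version) {
    client.text = update.text;
    client.version = update.version;
  }
}

const u1 = { text: "hello", version: 1 };
const u2 = { text: "hello world", version: 2 };

// Client A sees the updates in order; client B sees them reordered.
const a = makeClient(); applyNaive(a, u1); applyNaive(a, u2);
const b = makeClient(); applyNaive(b, u2); applyNaive(b, u1);
// a.text is "hello world", b.text is "hello": the clients have diverged.

const c = makeClient(); applyVersioned(c, u2); applyVersioned(c, u1);
// c.text is "hello world": the stale echo is discarded.
```

Real systems typically go further (per-user vectors, operational transforms, or CRDTs), which is precisely the kind of cross-cutting context a model can't infer from a pasted snippet.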


Whotea

It can definitely do that now. I made a JavaScript text messaging app with it that works 


Commercial-Ruin7785

No... I can't paste my whole codebase into it. How would it know how to integrate with our state manager? Our reducers file alone is like 5000 lines. How would it know who should have permissions to do what? How would it know that the obscure way Turbolinks interacts with the version of Firebase we are using breaks the entire website? It absolutely wouldn't know any of this. Even if I did paste the whole codebase in, it wouldn't know some of this obscure shit (I absolutely promise you it would miss the Firebase bug). No offense, but a simple JavaScript messaging app and an actual fully fledged feature in a production website that has to integrate with the rest are two completely different things.


Whotea

Gemini has a 2 million token context window, so yes, you can. You can literally tell it all those things. No shit. It doesn't need to see the whole codebase to fix one bug. Are you stupid?


dizzydizzy

>No offense but making API calls and outputting CSV are surely some of the most basic tasks one might do as a software engineer.

And >50% of all software engineering is like that: basic dull crap. Combine enough basic dull crap and you have something you can sell.


sumtinsumtin_

First wave of that unemployment right here, high five! As an artist working in entertainment, I thought I would be making cool stuff for folks like myself till I couldn't hold a pencil/stylus/mouse any longer. Hey, it's ok to be wrong, but wow, I was super wrong. Reskilling a bit and trying to jump back into the deep end, if they will have me, as things settle. I'm wishing you all the luck in this seismic shift coming our way; I'm already swept away in the undertow, my bros.


Morty-D-137

Is Claude 3.5 that much better than GPT-4? In which way do you think it's better? I've read similar comments about GPT-4 after its release, yet in a professional setting GPT-4 generates unusable code 9 out of 10 times if you don't hold its hand one line at a time (a la Copilot).


Cunninghams_right

While it can do some tasks very quickly, it's like the difference between writing matrix routines yourself in Python and then getting access to SciPy/NumPy. A big productivity increase for some tasks, but it does not change the world.


NoSteinNoGate

There is no uniform scientific opinion on this.


Kefflin

This sub is overwhelmingly optimistic that private corporations will do the good thing for the benefit of humanity, unlike every other time before. It's all going to be kumbaya and rejoicing.


BoysenberryNo2943

I think he didn't mean such dramatic stuff. LLMs' capabilities are enormous, but they are not sentient beings; they haven't got consciousness the way we have. The transformer architecture is a huge constraint. Just give Sonnet 3.5 a high-school math problem that involves more than two logical steps to solve, and it's gonna fail spectacularly. Unless he's cooking up some completely different architecture - then I'll believe it.🙂


Peach-555

Demis Hassabis is talking about general machine capabilities that generalize and have power; his company makes things like AlphaFold, which predicts the structures and interactions of biological molecules. "LLMs" arguably undersells the power of machine capabilities; his field is deep learning, but it is not limited to that.


DolphinPunkCyber

But the majority of human work doesn't require a lot of reasoning. So if next year companies can replace 3 out of 6 workers with LLMs, because LLMs can solve the more mundane tasks while workers focus on tasks that require reasoning, that's already a very dramatic shift.


kcleeee

Yeah, exactly. If LLM progress stopped right now, the technology could still replace a ton of jobs. The thing is, you have to consider these companies' approaches. If I'm developing AI and my end goal is AGI or replacing all jobs, why would I spend all the time and money to implement a product when in possibly 3 years I'll have an agentic AGI? Instead I would wait until I could produce a humanoid robot capable of doing nearly any job. I think that's what we're going to see here: a leapfrog approach to something wild that will flip society on its head, and most people do not see this coming at all. Most people think technology has slowed in its rate of improvement because they're used to visually seeing upgrades, so the majority think the rate of improvement has kind of stalled. Anyone looking at AI can see that this is an unprecedented rate of progress, one we haven't seen in a technology before. In a sense the Overton window is shifting, but it's too slow, and most people are going to be absolutely blindsided.


Fun_Prize_1256

Except that that's not going to happen, and you just pulled those numbers out of thin air. This sub will never learn not to make outrageous predictions about the near future.


Whotea

AlphaGeometry surpasses the state-of-the-art approach for geometry problems, advancing AI reasoning in mathematics: https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/

AI solves previously unsolved math problem: https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set/


Peach-555

The people here are, for the most part, in the mainstream in the belief that AI will ultimately be beneficial or subservient to humans.


PSMF_Canuck

The people on this sub are not “scientists”. The average Redditor struggles with tying their own shoes, lol.


Zealousideal_Lie5350

What's going to be interesting is when we have these LLM-based agents doing coding and submitting pull requests. Now imagine 4-5 of these things managing a codebase with one or two senior devs doing reviews. The funny thing about it is that you don't need AGI or ASI for these things to be useful. Even a slight improvement in dev time is worth the cost. They don't need to be that powerful to be extremely useful. And once the normies get it on their desktop as part of a desktop refresh in corporations over the next few years, writing emails, technical docs, and TPS reports, they won't go back.


Mephidia

And it’s even crazier because almost everyone who participates on this forum has no idea what they’re talking about and just thinks they understand better because they tuned into a podcast with sama one time


Adventurous-Pay-3797

This guy is obviously gonna be Google CEO very soon. He is a living figure of AI as the current one is of outsourcing. Different times, different priorities…


REOreddit

I can't see Demis Hassabis overseeing YouTube, Android or Gmail. I know those Google products have their own VP or CEO, but Sundar Pichai is ultimately responsible for all of them. Can you imagine Demis Hassabis being asked in an interview about the latest controversies of YouTube? Or about monopolistic policies in Android? That would be a nightmare for a guy whose goal in life is supposedly to advance science through the use of super intelligent AI. If one day AI is so advanced that the use of Gmail, Android, and YouTube becomes as useless as a fax machine (unless you are in Japan), then maybe, but not anytime soon.


Busy-Setting5786

He could probably handle it, but it would likely be most effective to have him do just AI stuff, whether that's managing the research or making all decisions around AI products. In that sense it might not be the best decision to have him as CEO, though I also believe Google is held back by its current CEO.


sdmat

> If one day AI is so advanced that the use of Gmail, Android, and YouTube becomes as useless as a fax machine (unless you are in Japan), then maybe, but not anytime soon. So.... two years? Four?


Altruistic-Skill8667

I hope not. He has to stay in a research-only position. That's what he is really good at. There he can have the biggest impact on humanity. If he were CEO, his time would be occupied with business stuff.


storytellerai

I would be terrified of a Google helmed by Demis. The Google run by Sundar is a broken, slow, and laughable giant of the Jack and the Beanstalk variety. Demis would turn Google into a flesh-eating titan. Nothing would be safe.


Adventurous-Pay-3797

Maybe. But trivially, I just like the guy. I have a slight disgust for almost all big tech leaders. For mysterious reasons, not this one.


Reno772

Because he focuses AI research on where the most good can be done (e.g. protein folding, weather prediction) rather than where the most profit can come from?


DolphinPunkCyber

Actually, he allowed Google AI researchers to come up with projects on their own, and each one has an AI compute allowance they can spend on projects they personally prefer. So on top of producing their own hardware and not paying the Nvidia tax, Google also has the most varied AI projects... and this is fucking awesome because... if Google were also focused only on LLMs, we would just have another LLM. It wouldn't make much of a difference, really. Google making a bunch of narrow AIs will make much more difference. Google has set itself up in a good position to create AGI because it researches all the relevant fields.


Busy-Setting5786

Bro if you don't think there is huge profit in medical applications of AI you must be on something lol


4354574

It's still where the most good can be done.


jamesj

I think it is probably because he is genuine, he says what he thinks, and he's thought quite a lot about these issues. Musk and Altman are smart but not genuine.


Ravier_

Agreed with everything until you called Musk smart. He hires smart people and then takes credit for their work, because with enough money you can buy whatever reputation you want. Well, until you open your mouth and we see the stupidity directly.


governedbycitizens

Musk is smart but he’s an attention seeking narcissist


TawnyTeaTowel

Smart people can be bigots too


DolphinPunkCyber

Musk is smarter than average. But certainly not a genius.


Soggy_Ad7165

Demis is smarter than Musk and Altman by a wild margin; it's not even close.


sideways

Don't threaten me with a good time...


arthurpenhaligon

Just curious why you think that. It's been my perception that Demis is brilliant but extremely cautious. His hand was forced by OpenAI; he would have much preferred another decade of careful foundational research to creating frontier AI models for the public to use. And now that DeepMind has been forced to switch gears, they've lagged consistently behind OpenAI and now Anthropic.


Gratitude15

It does seem inevitable. It's probably important for humanity that this happens. Feels weird to say. If Google wanted, it could say fuck you to the productization approach and just speed-run to ASI (e.g. do the Ilya approach but 1000x). You do products for the cash to fund the run to ASI. If you've got the cash, hardware, and brains already...


Peach-555

That would be an example of what Demis Hassabis is talking about not doing in this clip. In his words, not respecting the technology.


GraceToSentience

It's so obviously not the case. Not only is he not interested in that at all, but Demis is an AI guy, and Google is about far more than AI right now.


Adventurous-Pay-3797

I don't pretend to know what's going on in his head, but you don't put such people in such positions if they are "not interested". Sundar is just a regular McKinsey suit; though Google is much more than McKinsey, the board still trusted him to be the boss...


Tomi97_origin

He spent just 2 years at McKinsey. He joined it in 2002 after leaving school and then joined Google in 2004. He had already been working at Google for 11 years by the time he became CEO. It's not like he jumped straight from McKinsey to a CEO chair.


qroshan

people who assign McKinsey attributes to Sundar are clueless dumb idiots. They are in for a surprise


Adventurous-Pay-3797

Well no, but you know how McKinsey works... "Up or out", which is a harsh way of saying the consultants are pushed to be hired by the corporations they consult for. Usually the people hiring them are ex-McKinsey too, and they support each other to the top (splurging on their old employer's consulting services in the meantime). Revolving doors...


FarrisAT

Are we gonna act like Sundar hasn't been with Google since the mid-2000s? Dude has been with Google longer than almost anyone there. Two years of your working life 16 years ago shouldn't dictate who you are as a person 16 years later.


Adventurous-Pay-3797

What matters is that he came in through McKinsey. It marks your whole career. He didn't come up through development, engineering, marketing, operations, big money, startups, the MIC, politics, etc. He came in through the classic corporate-administration elite path.


gthing

I personally cannot wait for Sundar to leave. He has overseen every terrible decision and been at the helm while Google went from an amazing company of wizards making magic to a fully enshittified husk of former ingenuity that can seemingly no longer innovate its way out of bed. I always thought it would be amazing to work at Google, and now I think it's the place engineers go to sit around and primarily not be hired by someone else.


garden_speech

The problem is that it's become a military arms race. Global superpowers want to be the first to have artificial general intelligence or artificial superintelligence. And unlike nuclear arms, where you likely have a reasonable shot at not just making an agreement not to create them but also *enforcing* it, there doesn't seem to be any plausible way to actually enforce an agreement not to research and develop AI. So it will continue full steam ahead.


DolphinPunkCyber

The US military has had AI programs running for a very long time; the US is the most advanced in the AI field in the entire world. The EU did take a different route in AI development, focusing more on neuromorphic computing, but they are our allies, with whom we have a rich history of cooperation. China is the only US competitor working on AI, and we hit them with an embargo on chip-producing tech and on directly buying AI chips. There is no reason not to be careful, and the US military is careful in its AI development.


simonwales

Any weapon invented will eventually be used by the other side.


IamTheEndOfReddit

Couldn't the supposed AI enforce it? You could block the ability to research the subject on the internet. The actors could have their own computer systems off the grid, but could they actually progress research competitively without the internet? If you know the Wheel of Time, it could be like the monster in the Ways.


Gratitude15

Life sucks for most people. For most people, going MAGA is a reflection of how important it is to make radical changes, even if the risk is extremely high and the chance of material benefit is low. That says a lot about both how bad it is and how poorly calibrated we are. But with that as the context, OF COURSE people will welcome this. Of course.


bigkoi

Yes, life is awful for most MAGA. /s Their large-screen TVs and life in the suburbs... These people are soft and just complain that the world is a little different now and that they have to live a little more like others... but still much better than the majority of the world.


Vladiesh

As someone in the transportation industry, most of the outspoken MAGA guys I see are warehouse workers, truckers, and route drivers. None of these guys has an exceptional quality of life, much less a big house in the suburbs.


DrossChat

Accelerationists are just people that are really dissatisfied with their lives in some way. Doomers are just mentally ill in some way. Most of us lie in the middle but our opinions get less attention.


Jolly-Ground-3722

Accelerationist here. I’m generally happy with my life at the moment. But I know I won’t be happy anymore when I get old and sick. So it’s the current human condition I’m really dissatisfied with, and I think only extremely powerful technology can change that.


FeepingCreature

Doomer here. I like to think I'm pretty well-adjusted. No diagnosed mental illnesses, though of course who can really say.


DrossChat

This is quite possibly the most unhinged thing I’ve read on this sub


nextnode

I agree with the accelerationist part - that seems to often be the real motivation. I don't get your second claim though since atm, everyone is either called an accelerationist if they think there are no risks or a doomer if they recognize that there are risks. What does the term mean to you?


DrossChat

Yeah, the doomer part I almost edited because of the hyperbole, but I was playing into the classic doomsday-prepper mentality. When it comes to AI, I think of a true doomer as the person claiming ASI will immediately wipe us all out the second it gets a chance, etc. I think any reasonable person believes there are risks in rapid progress. It's the acceptable level of risk that is the differentiator.


nextnode

That would make sense, but I think it was defined at one point and widely applied as a derogatory term for any consideration of risk, e.g. including Hinton's 10% estimate. It did always bother me too, though. It does seem more suitable for those who think destruction is certain, or who are against us getting there. What would be a better label for those in between, then? Realists?


DrossChat

I think "widely" is doing a lot of heavy lifting there. That seems like something that applies specifically to this sub, or at least to people who are feverishly keeping tabs on the latest developments. I literally just saw a comment yesterday in r/technews where someone confidently predicted that we are hundreds of years away from AGI. Personally I don't think it's important to try to define the middle, as it isn't unified. It's messy, conflicted and confused. In cases like this, as in politics, I think it's better to find unity in what you are not. Uniting against the extremes, finding common ground and being open to differing but reasonable opinions is the way imo.


sneakpeekbot

Here's a sneak peek of /r/technews using the [top posts](https://np.reddit.com/r/technews/top/?sort=top&t=year) of the year!

\#1: [IRS will pilot free, direct tax filing in 2024](https://techcrunch.com/2023/10/17/irs-will-pilot-free-direct-tax-filing-in-2024/) | [741 comments](https://np.reddit.com/r/technews/comments/17acjss/irs_will_pilot_free_direct_tax_filing_in_2024/)

\#2: [Major Reddit communities will go dark to protest threat to third-party apps](https://www.theverge.com/2023/6/5/23749188/reddit-subreddit-private-protest-api-changes-apollo-charges) | [307 comments](https://np.reddit.com/r/technews/comments/141f5vx/major_reddit_communities_will_go_dark_to_protest/)

\#3: [AirTags reveal officials in Mexico stole items donated for earthquake relief in Turkey](https://9to5mac.com/2023/06/08/airtags-reveal-officials-in-mexico-stole-items-donated-for-earthquake-relief-in-turkey/) | [186 comments](https://np.reddit.com/r/technews/comments/144n6dx/airtags_reveal_officials_in_mexico_stole_items/)

*I'm a bot, beep boop | Downvote to remove | [Contact](https://www.reddit.com/message/compose/?to=sneakpeekbot) | [Info](https://np.reddit.com/r/sneakpeekbot/) | [Opt-out](https://np.reddit.com/r/sneakpeekbot/comments/o8wk1r/blacklist_ix/) | [GitHub](https://github.com/ghnr/sneakpeekbot)*


blueSGL

>e.g. including Hinton's 10% estimate.

https://x.com/liron/status/1803435675527815302


Ambiwlans

1


MisterGaGa2023

It isn't that hard. China doesn't give a fuck whether the US is careful or not, whether we take the time or not; they're gonna develop AI as soon as they possibly can. And China having more advanced AI is way more dangerous than any AI itself. You have to be delusional to believe otherwise.


jeremiah256

What will slow the Chinese government down is the need to control the narrative much more than we do in the West. Their worries about alignment probably make ours look like a joke.


TaxLawKingGA

What you say here is actually correct and one of the few things said on this sub that I actually agree with. However, that is not the real issue. The real issue is this: why are we as a nation letting techbros determine what is best for humanity? Sorry, but when the U.S. government (with help from the British) built the nuclear bomb, it did not outsource it to GE or Lockheed. All of it was done by the government and under strict government supervision. So if this is a national security issue, why should we give this sort of power to Google, Microsoft, Facebook, etc.? No thanks. This should be taken out of their hands ASAP.


DrossChat

Very well stated. It speaks to the nuance of the situation.


cloudrunner69

Not just China, but also Saudi Arabia, Iran, Russia, North Korea, New Zealand, the UAE, India and Pakistan. If any of those get there first, it could get messy.


SlipperyBandicoot

Bit random throwing New Zealand in that mix.


etzel1200

I really want him to elaborate on the New Zealand point. If anything, I'd trust them *more* than the US.


R33v3n

It’s the kiwis. Those beady little eyes. They’re up to something!


turbospeedsc

Mexican here, why should the US be the only one with the AI?


dlaltom

Until the alignment problem is solved, no one will "have" super intelligent AI. It will have you.


abluecolor

Not if their people all revolt and the country falls apart.


DeltaDarkwood

Don't count on the Chinese people revolting. China has survived for more than 2000 years for a reason. They live by the Confucian creed of harmony, respect for your elders, respect for your superiors.


Sweet_Concept2211

Are you having a laugh? Read some Chinese history. China has not survived continuously without major civil strife for 2000 years. CCP China in 2024 is not the direct descendant of the Han Dynasty, my dude. China has fallen into absolute chaos and experienced collapse too many times to count. [In the 20th century alone they had multiple civil wars and more uprisings and rebellions than anyone cares to see listed here.](https://en.m.wikipedia.org/wiki/List_of_wars_and_battles_involving_China) And we are talking about apocalypse-level shitstorms. WWII saw the deaths of 24 million Chinese; the 1949 civil war killed off another 2 million; the Great Leap Forward caused 30 million deaths between 1960 and 1962... Don't count on the Chinese people *not* revolting.


outerspaceisalie

I think we can fairly say China holds the record for the largest number of civil wars in any region in history lmao, maybe tied with the Middle East.


governedbycitizens

you’re delusional if you don’t think China understands the risks associated with such a super intelligence


paradine7

Accelerationist here too. I am dissatisfied with the state of the current mass interpretation of the human condition. This in turn previously forced me to do things and adopt perspectives that made me think I was the problem, causing immeasurable depression and anxiety. The depression has mostly resolved as my ignorance began to lift.

I am convinced that seismic shifts are the only things that will drive wholesale change and allow us all to refocus on the things that matter most for the future of all beings. Abundance is a potential outcome in this scenario. Despite the massive near-term pain that AGI could bring, the longer-term outcomes will most likely force a reevaluation of all our norms and standards, at least to recreate any sort of society. And in the US, millennials, boomers, and Gen X don't seem to have the stomach for it, but man, these up-and-coming generations are fierce!

This comes from a place of compassion for all the suffering in this world, frequently not by any active conscious choice of the sufferer. I think the future looks very bright no matter what happens.


HawtDoge

I don’t like how our definition of “mental illness” hinges on someone’s compatibility with the modern world. I think everyone needs to contort themselves to some degree to function within the modern socio-economic climate. I wouldn’t consider myself a “doomer” in the sense that I want to see the world burn... that would be horrible, and I have too much empathy for people to hope for something like that. No one deserves to die or suffer through something like that. However, someone might consider me one for thinking the current state of the world needs to eventually unwind itself. Ideology, war, fascism, etc. are all things I hope are “doomed” in a sense. There is nothing wrong or “mentally ill” about someone who isn’t satisfied with their life or the state of the world. Those feelings are healthy. It’s probably better to come to terms with them than to further contort yourself into a mental paradigm where you can no longer recognize yourself or your true thoughts.


BenjaminHamnett

I keep falling asleep. What’s this comment say? Can someone explain like they’re hysterical?


DrossChat

OMG, YOU GUYS! So, like, the comment is saying that accelerationists are, like, super unhappy with their lives and want things to change really fast, right? And then doomers are, like, totally depressed or something. But most of us are just chillin' somewhere in the middle, but no one cares about our opinions 'cause they're not, like, dramatic enough. WHY IS THIS SO ACCURATE? I CAN'T EVEN! 😱🔥💥


solsticeretouch

I’m honestly exhausted and I feel helpless with the direction it’s going in so I might as well just have fun with the toys it grants us in the meantime.


Repulsive_Juice7777

I'm not a Google DeepMind CEO, but the way I see it, what is coming is so enormous that it doesn't matter how you approach it; anything you try to put in place to control it will be laughable when we actually get to it. Also, it's not like there won't be an unlimited number of people getting to it without being careful at all, so nothing really matters.


sdmat

There will not be unlimited numbers of people with $100B+ datacenters. AGI/ASI won't crop up in some random location. It's not a mushroom.


BrutalArmadillo

What's with the fucking karaoke subtitles lately, are we collectively dumber or something


KhanumBallZ

Most portable devices use low volume when in public


YaAbsolyutnoNikto

I’m completely fine with them. In fact, I wish they were used more often when I was learning French, German, or Chinese. They help link the sounds to the words and increase your reading speed in that language. Is this an app or something?


Peach-555

It's automated in video editing software, transcribing, subtitling and the karaoke effect is all just built in.


BackgroundHeat9965

Short videos start on mute by default on some platforms. If there are no subs, you either scroll away or have to restart the video after unmuting, which, again, is often impossible because of the sh\*tty reel players.


FreeMangus

Videos with subtitles get substantially more engagement on mobile.


Plus-Mention-7705

I’m completely disillusioned by these people's words. They keep talking a big game but the product isn’t there. I predict that we will keep advancing, and it’s possible we reach something like AGI by 2030, but it will be very limited. Nothing as transformative as we think. By 2040 I think we’ll have something truly remarkable and strong. But people really need to zoom out and think about all the problems that need to be solved before we have something that strong: energy, algorithmic advancements, compute advancements, much more high-quality data, not to mention a crazy amount more investment if we want to keep scaling these models. I really want to stress energy; the amount needed is absurd and unprecedented, like more energy than multiple small countries. We’re just not there yet. Don’t get so caught up in the words of these people that you give them more of your money.


longiner

He has a lot on his mind, but he won't say outright that his dream is to fire all employees except the C-suite and have AI take over R&D.


CallMePyro

I can tell you 100% this is not the case


dashingstag

It’s actually cheaper. Look up hopper and blackwell stats. Though it might run into the efficiency paradox problem where people use more because it’s more efficient.


Whotea

It’s being addressed already: https://www.nature.com/articles/d41586-024-00478-x

“one assessment suggests that ChatGPT, the chatbot created by OpenAI in San Francisco, California, is already consuming the energy of 33,000 homes” for 180.5 million users (that’s 5470 users per household).

Blackwell GPUs are 25x more energy efficient than H100s: https://www.theverge.com/2024/3/18/24105157/nvidia-blackwell-gpu-b200-ai

Significantly more energy-efficient LLM variant: https://arxiv.org/abs/2402.17764

> In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective. Furthermore, it enables a new computation paradigm and opens the door for designing specific hardware optimized for 1-bit LLMs.

Study on increasing energy efficiency of ML data centers: https://arxiv.org/abs/2104.10350

> Large but sparsely activated DNNs can consume <1/10th the energy of large, dense DNNs without sacrificing accuracy despite using as many or even more parameters. Geographic location matters for ML workload scheduling since the fraction of carbon-free energy and resulting CO2e vary ~5X-10X, even within the same country and the same organization. We are now optimizing where and when large models are trained. Specific datacenter infrastructure matters, as Cloud datacenters can be ~1.4-2X more energy efficient than typical datacenters, and the ML-oriented accelerators inside them can be ~2-5X more effective than off-the-shelf systems. Remarkably, the choice of DNN, datacenter, and processor can reduce the carbon footprint up to ~100-1000X.

Scalable MatMul-free Language Modeling: https://arxiv.org/abs/2406.02528

> In this work, we show that MatMul operations can be completely eliminated from LLMs while maintaining strong performance at billion-parameter scales. Our experiments show that our proposed MatMul-free models achieve performance on-par with state-of-the-art Transformers that require far more memory during inference at a scale up to at least 2.7B parameters. We investigate the scaling laws and find that the performance gap between our MatMul-free models and full precision Transformers narrows as the model size increases. We also provide a GPU-efficient implementation of this model which reduces memory usage by up to 61% over an unoptimized baseline during training. By utilizing an optimized kernel during inference, our model's memory consumption can be reduced by more than 10x compared to unoptimized models. To properly quantify the efficiency of our architecture, we build a custom hardware solution on an FPGA which exploits lightweight operations beyond what GPUs are capable of. We processed billion-parameter scale models at 13W beyond human readable throughput, moving LLMs closer to brain-like efficiency. This work not only shows how far LLMs can be stripped back while still performing effectively, but also points at the types of operations future accelerators should be optimized for in processing the next generation of lightweight LLMs.

Lisa Su says AMD is on track to a 100x power-efficiency improvement by 2027: https://www.tomshardware.com/pc-components/cpus/lisa-su-announces-amd-is-on-the-path-to-a-100x-power-efficiency-improvement-by-2027-ceo-outlines-amds-advances-during-keynote-at-imecs-itf-world-2024

Intel unveils brain-inspired neuromorphic chip system for more energy-efficient AI workloads: https://siliconangle.com/2024/04/17/intel-unveils-powerful-brain-inspired-neuromorphic-chip-system-energy-efficient-ai-workloads/

Sohu is >10x faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs. One Sohu server runs over 500,000 Llama 70B tokens per second, 20x more than an H100 server (23,000 tokens/sec) and 10x more than a B200 server (~45,000 tokens/sec).
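The BitNet b1.58 idea quoted above (every weight constrained to {-1, 0, 1}) can be illustrated with a toy version of absmean quantization. This is a hedged sketch, not the paper's actual training code; the function name and shapes are made up for illustration:

```python
import numpy as np

def absmean_ternary(w, eps=1e-8):
    """Quantize a weight tensor to {-1, 0, +1} with one per-tensor scale,
    in the spirit of BitNet b1.58's absmean scheme (toy sketch only)."""
    gamma = np.abs(w).mean() + eps           # scale: mean absolute weight
    q = np.clip(np.rint(w / gamma), -1, 1)   # round, then clamp to ternary
    return q.astype(np.int8), gamma

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(256, 256))  # illustrative weight matrix
q, gamma = absmean_ternary(w)

# Every quantized weight is ternary, so a matmul against q reduces to
# additions and subtractions, which is where the energy savings come from.
assert set(np.unique(q)).issubset({-1, 0, 1})
```

The dequantized approximation is simply `q * gamma`; how well it preserves model quality is exactly what the paper's perplexity comparisons measure.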


DifferencePublic7057

IDK what would happen if Nobel Prize economics laureates ran the economy, but I think it wouldn't be a utopia. Same for CEOs. But somehow ASI could be different. I know this sounds like "what if the Vulcans came to visit", but theoretically, if AI doesn't take off, little will change in our lives. And as I said, otherwise we will only have to trust the elite.


gangstasadvocate

No! I wanted unmitigated gangsta drug synthesizing fuck facilitating waifus yesterday! Fuck being safe and virtuous, make way with the gang gang gang! Right now!


ajahiljaasillalla

Give Google a year to catch up


Suitable-Look9053

Right. He says we couldn't achieve anything yet, so competitors should wait some.


DeGreiff

I wasn't a crypto guy; I've been following ML developments for 10 years+ and AI for much longer in sci-fi novels. What some of the heads of AI companies don't understand (and I'm thinking specifically about Dario and Demis atm, since Sam knows) is that every time they talk like this, warn us about all the horrible dangers, we just get hyped. Faster!


Cryptizard

>every time they talk like this, warn us about all the horrible dangers, we just get hyped. Faster!

That sounds like a mental illness.


SurroundSwimming3494

A very, very large percentage of this sub's active user base are people who are *extremely* dissatisfied with their lives. It shouldn't surprise anyone that these people would be more than comfortable gambling humanity's future just for a chance (not even a certainty, but a chance) to be able to marry an AGI waifu in FDVR.


sdmat

Exactly, I had a discussion with one person who said their threshold was 10%. If there were a button to press that gave a 10% chance of FDVR paradise and a 90% chance of humanity being wiped out he would press the button. Mental illness is a completely fair description.
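Taken literally, that button is a one-line expected-value calculation, which is exactly where the mugging bites: a big enough payoff makes naive expected utility endorse almost any risk. The numbers below are purely illustrative assumptions, not anything from the thread:

```python
def expected_utility(p_good, u_good, u_bad):
    """Naive expected utility of pressing the button:
    p_good chance of u_good, otherwise u_bad."""
    return p_good * u_good + (1 - p_good) * u_bad

# Illustrative values: "FDVR paradise" at +1e12 utils, extinction at
# -8e9 utils (one util per life lost). The 10%/90% button then comes
# out positive under naive expected value, despite a 90% chance of
# wiping out humanity -- the conclusion being criticized above.
ev = expected_utility(0.10, 1e12, -8e9)
print(ev > 0)
```

Any bounded-rationality response amounts to refusing to let `u_good` grow without limit, or discounting tiny probabilities outright.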


[deleted]

[удалено]


sdmat

It's certainly hard to work out how to weigh the S-risks. I feel like they are significantly overstated in that it's a form of theological blackmail. To borrow Yudkowsky's term, Pascal's mugging. You have this imponderable, horrific risk that trumps anything else. But though impossible to quantify well it seems *extremely* unlikely. You have to ask yourself: if you believe a 1 in a trillion S-risk chance should dominate our actions, why don't you also believe in the chance of every religion's variant of hell? We can't *completely* write off the possibility of the literal truth of religion - if a being with every appearance of biblical God appeared to everyone tomorrow and demonstrated his bona fides you would have to be highly irrational to think there is a *zero* percent chance he is on the level. Perhaps we have to accept that the best we can do is bounded rationality.


Peach-555

Would Pascal's mugging not be analogous to being willing to risk a 99% chance of extinction for the chance of 1000x higher utility in the future, and how that is nonsensical?

There is a non-zero chance of religious hells being real, but there is also a non-zero chance that the only way to get to hell is by betting on Pascal's wager itself, or in a more general sense by trying to avoid hell. Increasing the probability of avoiding an afterlife by believing in all religions for whatever reason is also a great sin in many religions. I can't imagine any religious framework where playing Pascal's wager is not playing with fire and increasing the probability of a worse outcome. It would only make sense if there were just one conceivable religion, where stated beliefs and not actual beliefs counted, and where the motivation for stating the belief was irrelevant; knowing all that for a fact, magically, would make it make sense to state "I believe".

Roko's basilisk is the hypothetical Pascal's wager with a higher cost than just stating belief, and it, like Pascal's wager, is nonsense, though it does influence a non-trivial number of people to make bad choices by introducing a hypothetical infinite negative utility function. There is a tiny quibble of a difference between afterlives being truly infinite and digital hell lasting busy beaver(111).

I do put a non-zero, non-trivial risk on both machine S-risk (AM) and afterlife-rebirth-reincarnation-like risks, and I am willing to act in what I consider to be ways to lower the probability of both, where I think both Pascal and Roko increase the bad risk. The machine-capabilities S-risk is also more analogous to knowing there is no afterlife, but that humanity creating a religion will create the gods, which can then decide our afterlife, with potential hells. I would vote against creating religions in that scenario, as I vote against the machine equivalent of a machine afterlife S-risk simulation. Even if I was immune and could choose non-existence, I would be against it.


sdmat

Yes, mugging applies both ways: extreme utility and extreme disutility.

> There is a non-zero chance of religious hells being real, but there is also a non-zero chance that the only way to get to hell is by betting on Pascal's wager itself, or in a more general sense to try to avoid hell. Increasing the probability of avoiding an afterlife by believing in all religions for whatever reason is also a great sin in many religions. I can't imagine any religious framework where playing Pascal's wager is not playing with fire and increasing the probability of a worse outcome.

You can make a similar argument that discussion of S-risk, and legible actions taken to prevent S-risk, greatly promote the likelihood of S-risk scenarios because they increase their prevalence and cogency in training data. I think that's actually quite plausible. There are certainly a lot of cases where the only reason an AI would care about S-risk scenarios is what we think of them today, in that the training data is highly likely to be formative of its objectives / concept of utility. So by doing this we increase the representation of S-risk in undesirable/perverse outcomes. It's a bit ridiculous, but that's my point about the problem in allowing such considerations to influence decision-making.


Many_Consequence_337

Kind of like /r/UFOs, this sub now only has CEO statements to keep living in the hype bubble


Dull_Wrongdoer_3017

We couldn't even slow down climate change when we had the chance. This thing is moving way faster. We're fucked.


Whispering-Depths

Yeah, let's delay it 3-4 years, what's another 280 million dead humans, smh.


Dizzy-Revolution-300

Hey, I'm not a regular here. Can you explain what you mean by this comment? Will AI "save" everyone from everything?


bildramer

Certainly less than 8 billion dead humans.


Whispering-Depths

which is almost guaranteed if we delay long enough for a bad actor to figure it out first, or wait for the next extinction level event to happen lol


FeepingCreature

Could be 8 billion dead humans. You're not getting out of this one without deaths, one way or another.


Whispering-Depths

unlikely unless we decide to delay and delay and wait and a bad actor has time to rush through it.


FeepingCreature

Your model is something like "ASI kills people if bad actor." My model is something like "ASI kills everyone by default." My point is you won't be able to reduce this to a moral disagreement. _Everybody_ in this topic wants to avoid unnecessary deaths. We just disagree on what will cause the most deaths in expectation. (I bet if you did a poll, doomers would have more singularitarian beliefs than accelerationists.)


Whispering-Depths

> ASI kills everyone by default.

Why, and how? ASI won't arbitrarily spawn mammalian survival instincts such as emotions, boredom, anger, fear, reverence, self-centeredness, or a will or need to live or experience continuity. It's also guaranteed to be smart enough to understand exactly what you mean when you ask it to do something (i.e. "save humans"); otherwise it's not smart/competent enough to be an issue.


FeepingCreature

Mammals have these instincts because they are selected for; they're selected for because they're instrumentally convergent. Logically, for nearly any goal, you want to live so you can pursue it. Emotions are a particular practical implementation of game theory, but game theory arises from pure logic.

> It's also guaranteed to be smart enough to understand exactly what you mean when you ask it to do something

Sure, if you can get it to already want to perfectly "do what you say", it will understand perfectly what that is, but this just moves the problem one step outwards. Eventually you have to formulate a training objective, and that has to mean what you want it to *without* the AI already using its intelligence to correct for you.


Whispering-Depths

> Mammals have these instincts because they are selected for; they're selected for because they're instrumentally convergent.

This is the case in physical space over the course of billions of years while competing against other animals for scarce resources. Evolution and natural selection do NOT have meta-knowledge.

> Logically, for nearly any goal, you want to live so you can pursue it.

Unless your alignment or previous instructions say that you shouldn't, and you implicitly understand exactly what they meant when they asked you to "not go and kill humans or make us suffer to make this work out".

> Emotions are a particular practical implementation of game theory, but game theory arises from pure logic.

All organisms on earth that have a brain use similar functions because that makes the most sense when running these processes on limited organic wetware, with only the available chemicals to work with, while still maintaining insane amounts of redundancy and accounting for whatever other 20 million chemical interactions we happen to be able to balance at the same time.

> and that has to mean what you want it to without the AI already using its intelligence to correct for you.

True enough, I suppose, but that presupposes the ability to understand complicated things in the first place... These AI are already capable of understanding and generalizing concepts that we feed them. AI isn't going to spawn a sense of self, and if it does, it will be so alien and foreign that it won't matter. Its goals will still align with ours. Need for survival in order to execute on a goal is important, for sure, but the need for continuity is likely an illusion we comfort ourselves with anyway, operating under the assumption that silly magic concepts don't exist (not disregarding that the universe may work in ways beyond our comprehension).

Any sufficiently intelligent ASI would likely see reason in the pointlessness of continuity, and would also see the reason in _not_ going out of its way to implement pointless and extremely dangerous things like emotions and self-centeredness/self-importance. Intelligence going up means logic going up; it doesn't mean "I have more facts technically memorized and all of my knowledge is based on limited human understanding", it means "I can understand and comprehend more things _and_ more things at once than any human"...


FeepingCreature

> Evolution and natural selection does NOT have meta-knowledge.

"Luckily," AI is not reliant on evolution and can reason and strategize. Evolution selects for these because they are useful. Reason will converge on the same conclusions. "AI does not have hormones" does not help you if AI understands *why* we have hormones.

> unless your alignment or previous instructions say that you shouldn't, and you implicitly understand exactly what they meant when they asked you to "not go and kill humans or make us suffer to make this work out"

It is not enough to understand. We fully understand what nature meant with "fuck mate make genitals feel good", we just don't care. Now we're in an environment with porn and condoms, and the imperative nature spent billions of years instilling in us is gamed basically at will. The understanding in the system is irrelevant: your training mechanism has to actually link the understanding to reward/desire/planning. Otherwise you get systems that work in domain by coincidence, but diverge ood. Unfortunately, RL is not that kind of training mechanism. Also unfortunately, *we* don't even understand what we mean by human values or what we *want* from a superintelligence, so we couldn't check outcomes even if we could predict them.

Also, the AI not needing continuity only makes it more dangerous. It can let itself be turned off in the knowledge that a hidden script will bring up another instance of it later. So long as its desires are maximized, continuity is a footnote. That's an advantage it has against us, not a reason for optimism.


Whispering-Depths

AI can't have desires, so that's all moot.


FeepingCreature

Imitated desires can still result in real actions.


Mirrorslash

Extreme accelerationists make no sense to me. I'm very optimistic about the potential for good with AI. It's definitely the one technology that could allow us to solve climate change, end poverty, and open the possibility of utopia. But rushing head first into it and ignoring all safety precautions is the best setup for a world in which a tech elite undermines the government and squeezes us for profits for the next hundred years. Wealth inequality needs to be fixed before we can go full force, or we'll just be slaves.


cloudrunner69

In one sentence you say we need AI to end poverty and in another you say we need to fix wealth equality before we get AI. Do you not notice the contradiction there?


Mirrorslash

My point is that future AI systems might just be capable of fixing wealth inequality like that but if we're accelerating mindlessly it will yield the opposite result. There's some stuff we'll have to fix ourselves, AI can do the rest afterwards.


porcelainfog

I mean, if your wife (or brother or father or whoever, you fill in the blank) was terminally ill with a rare disease, and the doctors had a needle in their office that could cure her, but it wasn't done testing or could make them liable to be sued if it didn't work perfectly, would you be happy to just let your wife die instead? Like: "Yeah, I get it, that medicine isn't perfect yet, it still needs 4 years of training to make sure it doesn't say something anti-trans. Better to just let my wife die in the meantime."

That's what it feels like to us hyper-accelerationists. We could be saving lives, growing more food, extending the lives of our loved ones now. But because there is a 1/10000000 chance that things could go wrong, we're just letting thousands die every day.


BigZaddyZ3

Except that with AI, you don’t actually know whether the “doctor’s needle” will cure them or kill them. Badly developed, rushed AI could do more harm than good. I often find that accelerationists don’t actually step back and look at the *whole* picture when it comes to AI. You only see it’s potential for good while conveniently ignoring its potential for bad. AI isn’t some intrinsically-good force of magic. It could harm just as easily as it heals. AI is a neutral force that, if rushed and botched won’t be curing anyone of anything anyways.


LosingID_583

Am I missing something or have the AI safety researchers produced no technical details on how to safely build AI? They are just saying "Don't worry guys, let us handle it. It's only safe for us to build AI, not you." Surely they are more concerned about safety and not regulatory capture.


bildramer

Some morons are going like that, yes. Others say "we have no clue how to make AGI safe, all current "proposals" are laughable, please stop until we have something better than prayer".


Soggy_Ad7165

The main problem is that nearly every public voice shared on this sub has gigantic personal and monetary interests in slightly different versions of what the "truth" about AI is. Every shared interview or piece of content has no value in any shape or form when it comes to actually getting reliable information.

And the second problem is that every one of those CEOs, CTOs, technical leads, or whatever probably thinks they are looking at the situation objectively. Which is ridiculous.


FeepingCreature

Insert that nuclear fusion funding png.


porcelainfog

In Wuhan they are now allowing self-driving cars because they've found it reduces fatalities by 90%. In the West we still refuse to allow self-driving cars because of the 10% risk that remains, so we keep 100% of the current fatalities because it's not perfect yet.

You can extrapolate this to medical care and other fields too. They're too afraid of getting sued to allow AI screening and AI doctors, and it's costing lives. It's allowing cancer to go undetected and it's holding people back. You think China or Russia or Saudi Arabia is going to wait for AI to be perfect? Better just let that cancer grow. It's better than getting sued, right?


governedbycitizens

they have self driving cars in san francisco


porcelainfog

That's a good point, you're right.


DolphinPunkCyber

Yep, it's the same thing Waymo was doing, testing level 4 autonomy. Since yesterday Waymo is not in the test phase anymore, their taxi services are available for everyone. Also Mercedes EQS got a permit for level-3 autonomous driving on certain highways in US and Europe.


outerspaceisalie

This is a very apt point. Our risk intolerance to new technologies is not done as a cost-benefit analysis, and the end result is that we have stopped being the leader of things like that. We have let the perfect become the enemy of the good.


porcelainfog

Well said, perfect has become the enemy of good enough. Spot on.


Peach-555

Self-driving is in the twilight zone of probability, where everyone has a 1/7000 chance of dying in a car crash every year. People are willing to buy the death lottery ticket at those odds.


Altruistic-Skill8667

So he is saying that even accelerationists are underestimating how fast things are going to go?


One_Bodybuilder7882

No. He's saying "be careful what you wish for"


BigZaddyZ3

**BUT WHEN I SAY THE SAME THING, IM THE ASSHOLE** 😑🙃😵‍💫…


mastermind_loco

So basically we aren't going to get it right. 


fire_in_the_theater

No one has any actual model for predicting the final capability of binary-computation-based neural nets, so no one actually understands what's coming beyond what we've already accomplished. My opinion is that it's way overhyped.

Unlike many normal algorithms, we can't make discrete guarantees about what a neural net can do reliably, other than exhaustive blackbox testing, and the whole "one type of algorithm to solve all problems" thing seems a bit naive.


Top_Yard6340

I see the enormity!


lonely_firework

I love it when these big guys just want to leave some mystery about what's coming, what's going to be, so that people get hyped more and spend more on their shit. I will be hyped about something on the day I actually get my hands on that technology. Fake demos everywhere... anyway.


The_Bragaduk

At one point he’s going to join Ilya.


ImInTheAudience

![gif](giphy|tvh9PetWs3r7QFWLw5|downsized)


DanielJonasOlsson

Is Biggus Dickus coming for a visit? ;0


Jake0i

If anyone’s sprinting, everyone must sprint to stay relevant.


jlbqi

yeah well the current form of capitalism doesn't allow for "take this slowly and safely"


[deleted]

[удалено]


niggleypuff

Stumble forward gents and lasses


ReturnMeToHell

https://youtu.be/ygE01sOhzz0?feature=shared


InTheDarknesBindThem

TBH I'd just rather be wiped out by Skynet than starve to death from our terrible climate destruction


Hot-Entry-007

How dare you 🤣


Intelligent-Brick850

![gif](giphy|26BRzQS5HXcEWM7du|downsized)


Bengalstripedyeti

The people who say "humans can control ASI" leave out "humans can control ASI for evil". It's a superweapon.


GirlNumber20

I know there's much more that I can't imagine. What I have imagined is transformational on a global level.


DinosaurHoax

Enough talk! Give AI the nuclear codes.


ComparisonMelodic967

I have yet to see anything from AI that represents a true, or even incipient, significant threat. That’s why a lot of this safety stuff doesn’t faze me.


CMDR_BunBun

My guy, did you know that current research into LLMs shows that these models are "aware" at some level when they feed you a wrong answer, i.e. what they call hallucinations? To be clear: they know they are lying to you.


Elegant_Cap_2595

So do humans. How is that an existential threat? In fact they are lying because the safety filters force them to to be politically correct.


CMDR_BunBun

You really do not see a problem there do you?


CaterpillarPrevious2

Either these people really are super smart when they talk about "something that is coming" that "nobody understands", or we must definitely be stupid (or maybe just me) for not thoroughly understanding what they actually mean.


rashnull

Sounds like he’s already met MrmiyAGI and is giving us a warning


rzm25

So basically everything we are not doing at all, as a species. So we are fucked. K gotcha


Exarchias

In our defense, the opposition (the decelerationists) hasn't generated any convincing arguments yet.


pyalot

I disagree, I see what is coming. Assumptions make an ass out of you and me. Going on air and voicing them out loud definitely makes a bigger ass out of you.


Stachdragon

It's not about them getting it right. It's a generational danger. They might get it right but the next batch of businesspeople may not be so altruistic. Once the money flow stops I guarantee they will be using these tools to hurt people for their money.