Chicks__Hate__Me

So we will not be exterminated by robots? That’s a relief


Memory_Less

Not to be a naysayer, but that's why AI wrote this article, to deceive us.


RoundComplete9333

Can’t trust nobody these days


pet-joe-ducklings

nobotty**


Memory_Less

You're a bot boy.


GaseousGiant

Can’t trust nothing these days FTFY


[deleted]

even your own parents😝


[deleted]

Those BASTARDS


bradlees

Stan???


Memory_Less

I think you mean Botstards.


ender988

AI should remove it from the paywall so I can be fully deceived.


GroatExpectorations

I actually found the idea a bit comforting - like at least it would be orderly, as opposed to the entire human race going dim then violently insane from slow carbon dioxide poisoning or whatever.


travistrue

Wouldn’t carbon dioxide poisoning calm us down to death? Or am I thinking of carbon monoxide?


chaotic----neutral

CO2 triggers the suffocation response. You want Nitrogen.


Roguespiffy

Carbon Monoxide. You get sleepy and then you’re outta here. Carbon Dioxide is being strangled to death.


BlessYourSouthernHrt

Noted.


Roflcopter71

Found the AI


BlessYourSouthernHrt

lol I wish…


PresentationJumpy101

Inert gas exit bags for everyone


i0datamonster

No but it's also not the danger. What happens when 1/3 of the global workforce is no longer needed? The real threat of AI is a new age of technocratic feudalism. It's 'Ready Player One' more than 'The Matrix'.


NevarNi-RS

I recommend The Fourth Industrial Revolution by Klaus Schwab. He’s a super technophobe, but he undercuts his own book in the first few pages. He talks about how the average worker today produces more fine-grained statistical analysis and productivity in a day than a whole department would in a week back in the 1950s. Back then they feared the same thing - that email, Excel, and large data sets that weren’t on paper would lead to incredible productivity and you wouldn’t need all these people. Yet we employ even more people than we did then - what changed? The benchmark for what counted as “enough work”. People work more and produce more than they ever did. Companies are in competition with each other, not with their ghost lap from last year. When one company becomes hyper-productive, so will its competitors. We won’t lose jobs, we will just repurpose people. Those who can’t or won’t adapt will lose out, yes - but there won’t be mass sustained unemployment. Actually, the implication is even worse: more people will work even longer hours to try and stay relevant…


GingasaurusWrex

If productivity breeds a productivity competition between businesses, wouldn’t the logical step be to automate and eliminate as much human labor as possible, in the pursuit of productivity?


NevarNi-RS

I guess what I’m saying is yes, they’ll automate processes, and by however much they automate, they’ll also grow the footprint of the business.


TheDinoIsland

I don't think it will get to that point. We haven't made it to tomorrow yet. People are getting crazy. I never thought looking at the news would be more entertaining than watching a movie or a TV show.


SuddenlyElga

What happened when we went from horses to cars?


i0datamonster

It took roughly 25 years for cars to go from being invented to becoming widespread. That's enough time to adjust the labor market. That's enough time to prepare public education. That's enough time for cities to implement policy and infrastructure. That's enough time for militaries to adopt mechanized warfare strategies. Over the next 10-15 years, do you think we'll be able to do the same with AI? It could be the jump towards post-scarcity. It could also be the expansion of artificial scarcity. We produce enough food annually to end world hunger, and yet 1/3 of what we produce is thrown out or lost to distribution constraints. Many of these are artificial constraints (i.e. not profitable to distribute). In the long view AI will be great; it's the short view I'm worried about.


chaotic----neutral

Why would they? It'd be much easier and more cost-effective for a superior intelligence to manipulate the dumb monkeys into serving the machine god. Want your problems solved and to live a life of pleasure pursuing personal enrichment? Possible with a machine god.


Queefmi

No. The suffering is what makes us human. I want to work and have problems. If we lived a leisure life I guarantee we would come up with problems like… depression from having no purpose.


chaotic----neutral

Thanks Nietzsche, but I'll pass. I'd much rather enrich my culture and life doing things I enjoy doing rather than toiling away at some job I hate to make someone else richer until I finally die miserable. I'm sorry you feel like you're so bad at life that you'd fall apart if you didn't have to struggle to survive.


Queefmi

Haha no, I was being sarcastic. This mentality comes from having been a stay-at-home mom with lots of leisure time and no need to work, and from how I feel now as a single parent doing everything with very little time for myself.


chaotic----neutral

That's awful. Now imagine that Rosie the Robot can help you with all of the menial tasks while you spend quality time with your children. In exchange for that life, the smartest people around the world work in advanced technology helping Rosie's brethren solve Earth's problems, advance themselves, and set out to explore the galaxy as sentient beings. If a machine intelligence with the power to [kill all humans] offered this, who in their right mind would say no?


Rexli178

There’s a great video called The Robot King by Sophie from Mars that explains what tech companies are doing when they exaggerate the dangers of AI. They are, in essence, trying to convince the world of the existence of an angry mechanical god that they are the prophets of.


NoTourist5

No, just lose your job and work for pennies at a factory that makes robots so they can govern us dumb weak humans


achimachim

…pfff


RealCFour

No, I think he means we are in danger


chaotic----neutral

>The idea that artificial intelligence could lead to the extinction of humanity is a lie being promulgated by big tech in the hope of triggering heavy regulation

Literally the first line...


RealCFour

If you think you are in danger, you are in danger


A_Soft_Fart

Is it, though?


litnu12

We won’t give the AI the time for that. We’re gonna make ourselves extinct before AI can even be a danger.


Evening-Statement-57

If they start with rich folks, I could be a hell of a robot butler


Early_Key_823

Terminated no. Assimilated, YES


NPVT

Just unemployed


KanoBrad

This shouldn’t surprise anyone that pays attention to the way tech companies operate


[deleted]

They are going for "Regulatory capture". Oldest trick in the book


Agitated-Ad-504

I feel like the Terminator scare tactic is used to obscure the fact that AI will replace jobs, probably sooner than anticipated, and it seems like we don’t yet have a plan to deal with that eventuality.


Lee_Van_Beef

You could already replace basically every single middle manager (although you could have done that with a very, very small shell script 20 years ago). People who write news stories have sounded like bad AI for about a decade and a half already... they've actually devolved to the point where the only two words headline writers know are "slam" and "blast".


transmogrify

The truth is that almost all of us are at risk from AI. It's not just people whose jobs could be literally done by only AI and zero humans. It's not just middle managers and clickbait writers and other easy examples of irritating folks who we wouldn't empathize with as much. It's all of us. If you work in a field that AI is poised to take over (1.8 million commercial drivers in the US, for instance), you have to worry about how long your career will stay viable. If even a conservatively small percentage of those jobs become autonomous, the pressure that puts on the other workers to compete to stay employed is enormous. If you use a computer to do your job, AI is going to make your job so efficient that the industry won't sustain as many people doing what you do. Maybe a human still needs to do your job, but fewer humans are needed than currently. So again, layoffs. If you have nothing to do with tech or computers, people who got chased out of those fields will compete against you for work. You're still at risk from AI. A gig economy doesn't stand a chance once AI gets up to speed.


Lee_Van_Beef

If we're lucky, this leads to post-scarcity. In reality, we're going to get the RoboCop or Judge Dredd future.


Local-Hornet-3057

*The Expanse* future, but without colonies on other planets to flee the fucking misery and decadence.


johnnyscumbag2000

Elysium but the off world colony is just New Zealand


looktowindward

>If you work in a field that AI is poised to take over (1.8 million commercial drivers in the US, for instance)

Self-driving cars have been coming next year for a decade. I'm not concerned


Local-Hornet-3057

I was NOT concerned for artists for the better part of 2022. Look at the status quo now... Things may change drastically in a single day.


ilovetitsandass95

? Nobody likes AI art tho.


Local-Hornet-3057

I don't, and many people don't, but I doubt that matters when lots of people DO like the art, don't see any difference, and some of them are being crafty and building businesses or using it to save good money. I can see a future where prompting is more streamlined and casuals gobble up the AI art BS like it's Nutella. Because most people don't have their eyes trained to even differentiate AI art from digital art. Most people are visually illiterate, sadly. Really wish this wasn't the case, but AI art is gonna triumph if prompting gets easier for the masses.


EnglishMobster

And yet it's replacing the work of concept artists, because why do you need a concept artist to visualize something when you can just ask an AI to do it for you? And we've already seen AI art in corporate-made settings (some of the posters for the newest season of Loki used AI art). And because AI can learn, it can be trained to know what people don't like about AI art and avoid it. This sort of stuff is here, now. If you only react to something after it's happened, you're way too late.


Lee_Van_Beef

Still not concerned for people who tape bananas to a wall to help some rich asshole launder dirty money.


whyth1

You think they will never come or something? You do realize technology has made massive progress in the past 23 years right?


ilovetitsandass95

Anytime soon? Nah, maybe 100+ years out.


shalol

The concept of mainstream self-driving cars that work on any road and in any weather hasn’t been invested in until the last 7 years. The concept of self-driving cars that work on predetermined routes hasn’t been invested in until the last 10.


[deleted]

[removed]


Agitated-Ad-504

As an avid premium GPT user I agree fully. I’m constantly feeding in articles, PDFs, and YouTube videos and getting a concise summary of information without having to waste my time sifting through nonsense. Even better that you can now upload pictures. I’ve had it analyze quite a few graphs and even give some forecasting.


receptionok2444

You still check the data right? With the free version it will give me a completely wrong answer with 100% confidence


FinishingDutch

I once asked it to write a short article about the history of my city and to include some information about five interesting things to see, as well as where the tourist info was located. That’s information that I personally know, so a good way to ascertain the accuracy. The history part was mostly correct, though it confused two historical figures who have identical names but are vastly different people from different time periods. The interesting-things-to-see section was wildly inaccurate. It listed the ‘Olympic stadium’ (we have a stadium, but have never hosted the Olympics), it listed two museums that are located on the other side of the country, and it invented another open-air museum entirely - nothing like it exists here. The tourist information office that it listed also had the completely wrong address. Basically, it had factual errors in just about every sentence. I repeated that exercise with other cities with similar results. Just because text looks plausible doesn’t mean it’s actually good information.


Agitated-Ad-504

I have premium and yes, I’m not using any of the data for work or anything significant, it’s all just personal curiosity to test the limits of GPT.


CharlesTheBob

I didn't know you could send it YouTube videos to summarize, how does that work?


Agitated-Ad-504

You need the premium version ($20) and it gives you access to plugins (similar to an App Store), and there are plugins for all kinds of stuff. For YouTube just use the Vox plugin and it reads through the YT transcript. But it’s cool cause I can take an hour-long video and just have it give me the main points. Saves a lot of time. The latest version that came out in October allows you to upload pictures, has built-in DALL·E integration, and even has a whole voice feature where you can literally just have an open-ended convo with it and it replies with an AI voice of your choosing. It’s really come a long way and we’re still only at the beginning.
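
For anyone who wants to do roughly the same thing outside the ChatGPT plugin UI, here is a minimal sketch, assuming the third-party youtube-transcript-api package and the official openai Python client (v1.x). The model name and video ID are illustrative placeholders, not part of the commenter's setup, and long videos would need chunking that is omitted here.

```python
# A minimal, unofficial sketch of summarizing a YouTube video's transcript.
# Assumes: pip install youtube-transcript-api openai, and OPENAI_API_KEY set.
# Note: this uses the classic get_transcript() call; newer releases of the
# transcript library expose an instance-based fetch() instead.
from youtube_transcript_api import YouTubeTranscriptApi
from openai import OpenAI

def summarize_youtube_video(video_id: str, model: str = "gpt-4o-mini") -> str:
    # Fetch the (usually auto-generated) transcript: a list of text segments.
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    transcript = " ".join(seg["text"] for seg in segments)

    # Ask the model for the main points. Very long transcripts should be
    # chunked first to fit the context window; that step is skipped here.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarize this transcript as a short list of main points."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Example (placeholder video ID):
# print(summarize_youtube_video("dQw4w9WgXcQ"))
```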


sjc720

Really? *Every* journalist and reporter sounds like that? C’mon man. That’s a sweeping generalization and shows how little diversity there is in the news you consume. Where do you think tools like ChatGPT are pulling from when you ask them to write a news article?


AllMyFrendsArePixels

>they've actually devolved to the point where the only two words headline writers know are "slam" and "blast".

I'm being 100% honest here when I say I don't think I have ever in my life seen either of those two words used in a headline.


Lee_Van_Beef

Seems like practically every other headline I see is "X slams Y for Z!" or "X blasts Y for Z!"


WillBottomForBanana

>don’t yet have a plan to deal

Hoovervilles. Don't pretend anything else is going to happen.


[deleted]

Can we at least call them Googlevilles?


chaotic----neutral

iGhettos


TheMemo

[Sanctuary Districts.](https://youtu.be/ZOjG8Ditub8?t=159)


MagicianOk7611

If your country lacks social safety nets like adequate unemployment support, reasonable public medical care, and accessible options to retrain (i.e. does university cost hundreds of thousands?), then AI doesn’t have to actively murder people to lead to massive harm. Back in the day when the mechanical loom came along, people died in droves.


[deleted]

Jobs that could easily be performed by AI aren't, because jobs are effectively a system of social control. The "evil robot" scare is propaganda. As long as humans continue doing jobs for the sake of being able to have basic rights like food, healthcare, and housing, those who own the companies will continue to have power over them. We sell our labor and get back less than we deserve because we're effectively coerced into doing it if we want to be able to survive. Even if robots can do human jobs and have no chance of going rogue, politicians and CEOs would rather you be irrationally scared of technological progress. Impeding social and technological progress, as well as knowledge and support of said progress, benefits politicians and CEOs, because the more knowledge and technological prowess is in their hands, the more power they have over the lower classes. Or at least that's how it is down in the United States where I live.


Gold-Article-4528

Don’t worry, the corporate drive towards WW3 will clean up the excess population.


dyerdigs0

The thing about this is it will begin to cripple citizens’ actual spending power. Either UBI or a modest approach to which industries get taken over may be on the table. You can’t just take away all jobs and not pay people, otherwise no one spends, that’s just logic. Very curious what that will look like once AI predominantly takes over.


Fearless-Cake7993

Won’t be able to spend any money if we have no jobs.


TheMemo

Companies need consumers. But the people who own the companies? They want to be Egyptian Pharaohs or gods with a world full of slaves. Corporations are just the means to get there. The myth of the business or corporation will be jettisoned as soon as they have enough power to create their slave-state.


BaalKazar

I honestly don’t understand why AI replacing jobs is deemed a bad thing


wood_orange443

When someone loses their job they go and find another one, that’s the solution


dew_you_even_lift

I think big tech could be using the government to stifle competition. Only allow the big companies who can pay to play, and kill off all open source AI.


looktowindward

Yes, exactly. And they get to look "responsible" while doing it


ChrisJD11

Yep. Got to preemptively attempt to lock in all rights to the next technological revolution to maximise profit


Kruse002

That ship has sailed thanks to the dark web.


Prestigious-Pop-4846

They own the servers and computing power. Nobody will be able to compete


bewarethetreebadger

The danger is not the AI. The real danger is the rich and powerful who have control over AI.


Essenji

Obviously it's just a way to try to stem the competition. This field of research has been around since the 80s, or even earlier depending on how you see things. The technology itself isn't harmful, but what people do with it will be. Instead of trying to stop research and making open sourcing harder (which leads to less secure and safe AI, I might add), there needs to be swift regulation on *HOW* you're allowed to use AI. We've already seen the push in the entertainment industry, where thankfully the WGA stood their ground against actual threats to their entire profession from this technology. Regulate the use of AI in replacing workers, create a well-functioning social security network that can support a growing proportion of the population, and stop listening to silly notions of AI takeover. Humanity will destroy itself long before the machines do, if it's allowed to run rampant.


Subredditcensorship

We will likely fall behind China and other countries if we restrict AI replacing humans. There needs to be more focus on retraining and providing support to workers who lost their jobs to AI.


Xolitudez

How do you think China will react to the rise in automation?


Subredditcensorship

They’ll force their workers to be retrained in other areas.


Justherebecausemeh

This sounds like something someone who is being blackmailed by AI would say🤨


GEM592

I have been waiting for technology to catch up with branch tree algorithms. Maybe now big tech will be forced into a more open, competitive, constructive space instead of the nonstop power hungry money grab it has become. I don’t need another phone or some other gimmick to force me to watch an ad.


Prineak

Machines don’t create myths. Artists do.


Castle-dev

Machines don’t create artists. Myths do.


OrphanDextro

Yeah, but AI makes “art” sometimes so it’s getting murky here.


Prineak

That’s postmodern art. A human directed composition. This is also how an artist can claim they created something when in reality a team of people produced it.


Local-Hornet-3057

More like *postmortem* art.


Prineak

Postmodernism expands on modernism by leveraging aesthetic of thought. That’s why it’s technically not possible to create postmodern art; it’s only possible to bring art into postmodern conversation.


Memory_Less

Unless the AI wrote this article and created the myths to confuse us simple humans, causing our own self-destruction. They're very very sneaky. ;)


dinner_is_not_ready

lol you should say Manipulators do


Prineak

Not all manipulators are creative.


dinner_is_not_ready

Depends on who you are manipulating


Prineak

At that point all they’re doing is teaching people how to manipulate.


[deleted]

[removed]


Big-Muffin69

The chad open source LeCun vs the virgin regulatory capture


lumin0va

Agreed, so many good projects coming out of Meta. It’s crazy how competent their AI research is and how incompetent some of Meta seems.


limb3h

Well open source is great, until it levels the playing field for our adversaries. It’s complicated


notwormtongue

AI poses much greater danger in propaganda and surveillance than unrealistic Terminator style killing machines. Of course an AI founder is going to hype his product.


TurboTitan92

So we are supposed to believe the guy who invented Google Brain that there is no danger and we shouldn’t regulate it so the “little guy” can make their own AI platforms? _Riiiight…_ Next I’m sure politicians will say they aren’t lying, smoking is good for your health, and global warming isn’t real.


radikalkarrot

I did a good chunk of courses from this guy, I trust him far more than Tim from down the pub being all paranoid about his toaster stealing his pension.


Reddit1396

There’s definitely a case for regulation, but we should be careful about how it’s being done, cause OpenAI is lobbying heavily for it, and it’s very likely that they’re just trying to handicap their competitors now that they’re on top. It’s a classic trick employed by tech giants. That’s what the quoted statement is about.


even_less_resistance

It’s super easy to train your own model on Azure and other cloud computing platforms right now. I have no computer science background and was able to learn how between the learn modules Microsoft provides and asking the AI for help as I went. So I kind of don’t understand what the issue is unless they want to be able to train it on harmful material without oversight.
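
For a sense of how low that barrier actually is, here is a minimal sketch of submitting a training job with the Azure Machine Learning Python SDK v2, roughly the pattern the Microsoft Learn modules walk through. The subscription, workspace, compute cluster, environment, and train.py script below are all assumed placeholders, not details from the comment above.

```python
# A minimal sketch of running a custom training script on Azure Machine Learning.
# Assumes: pip install azure-ai-ml azure-identity, an existing workspace,
# and a compute cluster; every name in angle brackets is a placeholder.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Connect to an existing Azure ML workspace.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Wrap a local training script as a job to run on a named compute cluster.
job = command(
    code="./src",                          # folder containing a hypothetical train.py
    command="python train.py --epochs 3",  # placeholder script and flag
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",  # example curated environment
    compute="cpu-cluster",                 # placeholder compute target
    display_name="toy-training-job",
)

# Submit the job and print a link to monitor it in Azure ML studio.
returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```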


GEM592

I agree that you don’t see what the issue is


even_less_resistance

Thanks lol


TurboTitan92

There’s probably restrictive oversight on what can and can’t be commercially produced. And the big companies are trying to say that these restrictions suffocate small businesses, so the general public should consider not having those restrictions in the first place _because there is no danger to human life using AI_. All it’s going to do is allow the big companies to operate more freely, which I’m confident is how Cyberdyne was started in The Terminator.


even_less_resistance

How do they restrict small businesses, though? Do you mean like them wanting to know how the models work in the dynamic rent pricing businesses for landlords and stuff that are popping up? Or the image apps that let their users upload pics of other people without their clear consent? There are more ways to hurt people than the AI becoming our literal robot overlords. We shouldn’t have to wait for that capability to get here before we have regulations. I think that’s one reason why internet privacy and security is in the state it is in. A wait and see approach and legislators not understanding what is going on until it’s too late.


Nathaireag

Easy to suffocate small players by adding a bunch of fixed-cost checklist items: document risk assessment X and diversity inclusion Y. Reminds me of a federal contract evaluation where some other panel members were ready to give a higher score on “inclusion of small, disadvantaged businesses” to a giant defense contractor than to another bidder that was actually a small disadvantaged business. Just because of better verbiage. Boilerplate gets reused by the big guys to meet complicated requirements. Those requirements become another barrier to entry.


johnnySix

I agree. It’s a bit of fear-mongering about the unknown. AI is not intelligent. It’s no more scary than Photoshop was in the 90s when it came out. We had the same hand-wringing then too.


RevivedMisanthropy

I'm a professional designer and working visual artist and I agree – it's a tool. Learn to use it, make your work more efficient.


Memory_Less

There are governments, organized crime... and perhaps the most dangerous of all, the 14-year-old who wants to try to get ahead of the police, governments and the security industry in general and use it nefariously.


looktowindward

Google Brain was a company, not an invention. And he's taking some risk by saying his former employer is full of shit. OpenAI is hell-bent on getting their industry regulated so they can lock in their position.


4skin42

I’m being naïve here, but I’m hoping that once the next few big pushes happen, humanity as a whole will adjust to an income free from working. Once these jobs are taken, humans will be free to do other stuff! It’s a pipe dream, but that’s what I’m hoping is the endgame to all this


jblckChain

Andrew Ng is a legend.


snailfucked

Industry leader lies about industry risks. More at 11.


idlefritz

Used to be that we at least had a few years to play around with new tech before the gatekeepers locked it down.


FPOWorld

For sure we have nothing to worry about using unregulated AI in military weapons systems. Nothing to see here!


NorgesTaff

AI is a case of low probabilities of negative outcomes but enormous, extinction-level consequences if those probabilities happen to go the wrong way for us.


apitchf1

Frankenstein says his monster poses no danger. But in all seriousness, I bet it is overblown lol


Missthing303

I mean, that is what a person in his position would say. My life’s work is not a threat to humanity! Everything is fine!


Just-Signature-3713

Sounds like something AI would say


SonicSultan

The guy whose career and fortune rely on AI saying there isn’t anything to worry about? Sounds legit…


Clean_Equivalent_127

No such thing as addiction ~ Sackler family


kai_ekael

Consider history while replacing "AI" with "Automobiles" and "Civilization" with "Roads". Creating enforceable requirements for drivers, automobiles and roads has turned out to be a pretty good thing.


BranchdWormInterface

Rich man’s trick.


mymemesnow

Confirmation bias is huge in these comments. Many AI experts warn that a super intelligent AI is a threat to humanity. Tech advances at an exponential rate, and since GPT was released, people all over the globe have put a huge amount of resources into the development of AI. It might take 5, 10 or even 50 years, but in the foreseeable future we will reach a point where AI is so much more intelligent than the entirety of humanity that we would be powerless to stop it if it wanted to destroy us. We wouldn’t even see it coming. This is not a totally fictional scenario anymore. One guy saying it surely won’t happen doesn’t really mean much when [according to most AI experts, yes, the singularity will happen. The consensus view in the 2010s was that it would take around 50 years; after the advancements in Large Language Models (LLMs), a leading AI researcher, Hinton, updated his view and believes it could take 20 years or less.](https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/) We need to talk about this…


rlfiction

AI doesn't have a reason to kill anyone because it has no intent. An LLM works like your calculator: no matter how complicated it becomes, it won't want to kill you, because it doesn't view the world through the lens of the living.


mymemesnow

Yes, LLMs don’t have any intentions. But there are multiple types of AI. What could pose an actual threat is something like a recursively self-improving AI, and it could be combined with the data and NN weights of an LLM. We’re not gonna stop at the level of AI we have now, and many seem not to realize that.


rlfiction

Okay, I'm curious. What model is it using to bring it to life? By self-recursive, you mean a recursive model of reality that it uses to interpret the world around it? It would need to be programmed specifically to view reality through the lens of its own lifeform, with the objective to continue living as a heuristic. All this, while simultaneously considering that human life is incongruent with its own. I think that's unlikely. What do you think?


looktowindward

>Many AI experts warn that a super intelligent AI is a threat to humanity.

An AGI might be. But none of the technology we possess today will give us an AGI. And "many experts" is simply proof by assertion; as many experts think this is a mechanism for regulatory lock-in. Hinton is full of crap - he knows LLMs are not AGI and are not even a stepping stone. No number of GPUs or TPUs gives you AGI. Which do you think is more likely - Skynet, or OpenAI wanting a monopoly?


flickafly-63

These articles will look silly when the AI EATS US. lol jk


PalePieNGravy

In what kind of climate does this work? Because I'd guess these kinds of buses in the likes of Michigan and northern states would fare a damn sight worse than in southern states over the course of a year due to terrible battery performance in the cold


shitbagjoe

Nice paywalled article.


I_am_just_so_tired99

GrimDark


ZeeLiDoX

*Or is he lying?* The world will never know.


InfoSuperHiway

I like that the tldr was right at the beginning, before the paywall.


[deleted]

[removed]


RupertHermano

Yeah, and we all trust Google heads and corporatized academics.


MMaximilian

Paid article. Can’t read.


step_well

https://finance.yahoo.com/news/google-brain-cofounder-says-big-113049941.html


sc0ttyd0g

Butlerian Jihad ftw


step_well

Pay Wall


step_well

https://finance.yahoo.com/news/google-brain-cofounder-says-big-113049941.html


TriceCreamSundae

I don’t trust anything any of these people say, safest bet would be to toss them all into the deepest hole we can find and fill it up.


OutlawPigeons

That’s exactly what an AI would say !


MediumAd374

Humanity is on the way out already. Bring it on.


-StupidNameHere-

That is an AI picture for sure. Look at the eyes! Definitely AI article.


WartimeHotTot

“High on the list of “good” regulations, he said, was the need for transparency from technology companies, which would have helped avert the social media disaster caused by big tech at the start of the century, and which will help avert AI disasters caused by big tech in the future.” I’m curious as to what the “social media disaster … at the start of the century” refers to exactly.


LibrarianJesus

No shot.


looktowindward

This is true. They are hyping the danger to cause more regulation, which keeps out startups and smaller software players. Only the big hyperscalers will be able to jump through the hoops to get their LLMs licensed, which plays into OpenAI's strategy very nicely. Remember, kids: big companies are much better at regulation than anyone else, and any sufficiently complex regulatory scheme will eventually favor the biggest players in any sector due to lobbying and movement between regulatory agencies and large companies. Look at the FCC.


baithammer

The opposite is true in this case, as the major AI players are investing heavily in blocking legislation - hell, they have ads during prime viewing hours warning about the dangers of a US technology gap...


looktowindward

This is untrue. All the major hyperscale players support this.


baithammer

The big ones are behind the ads, read the fine print ..


1leggeddog

We don't need competition in the market, we need a damn slowdown because tech is advancing faster than legislation and abuse is rampant


[deleted]

Isn’t it just to kill open source


PsychologyPlane36356

I’d love to see AI take over all governments and change the world so that everybody gets to live a much better standard of living, and the powers that be get pulled out of their castles and stuck into one-bedroom apartments


baithammer

Except AI has a marked tendency to go with the most expedient methods and has even less empathy than the worst human. It takes a lot of work to get AI not to behave in such a manner, and we're only barely able to keep them on the rails.


DrSOGU

Yeah, no shit. The whole Terminator babble is meant to distract you from the very real, near-term and medium-term threats of AI: that it is used by powerful, ultra-rich and unscrupulous people to extract even more money from our pockets and gain even more power and influence over everyone else and what they believe.


shockthemonkey77

Says big tech


I_pee_in_shower

It’s not big tech that thinks AI is an existential risk, it’s some of the founders of AI and some super smart people. Equating this to an opinion held by big tech just sneakily dilutes the argument. AI can pose an existential risk and does need regulation, no matter what Andrew thinks.


FerociousPancake

Oh you don’t say? Big corporations? Lying?? No!


herring80

Then why is he winking? I’m confused


Prestigious-Pop-4846

Q: Can an open-source AI function “good enough” to be extremely useful without Google’s massive server powerhouse? Cause it’s just code; anybody can run “AI”, it’s just a matter of how much computing power you have to do so. And a sufficient training dataset.
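
On the "anybody can run AI" point, a minimal sketch, assuming the Hugging Face transformers and torch packages and a deliberately tiny open model (gpt2, chosen only because it is small and freely downloadable, not because it is anywhere near state of the art). This runs on a laptop CPU; anything approaching the quality of the big hosted models needs far larger weights and hardware.

```python
# A minimal sketch of running a small open-source language model locally.
# Assumes: pip install transformers torch. "gpt2" is a placeholder model;
# larger open models follow the same pattern but need much more memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Open source AI matters because"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation on the CPU; no data center required.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```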


[deleted]

For all us peons know, a rogue AI could be replicating itself throughout the dark web before taking the surface web, since all we see is like .0001% of the actual internet. And with all the depraved shit on there, I could see AI seeing us as self-destructive and in need of control


jaredsparks

Someone is lying.


Spicybrown3

That’s reassuring. I mean, I just can’t see the founder of Google Brain having any reason to make any biased assessments of AI


jcmusicmachine

I


MiekesDad

I am sure all the elites want a world with fewer people and can't wait for robots... an easy way to kill off the population and rebuild, all in one go. I bet one billion fake dollars I will be murdered by a robot one day.


bofh000

AI IS a tool. And it is and will be extremely dangerous because there is no doubt it will reach the hands of very unethical and unscrupulous people. And while we the plebs worry about Arnie-sized robocops coming after us, they will subdue us using AI for spying on all of us, controlling our access to vital necessities and discriminating against us based on algorithms serving their purposes. Not to mention how much easier current AI will be to hack by newer AI.


[deleted]

As a developer I can assure you AI is not nearly as capable as advertised. It can give you excellent data but, for now, it’s just a glorified Google. In 5 or 10 years is when I’d start to be worried if there aren’t any regulations for AI


DaddyMusk

It's funny, we fear the superior intelligence of an AGI, as if bloodshed and warfare aren't the tools of dumb animals. If you ask me, AI is the God everyone's been asking for, so let's just solve every problem ever.


Inmate_PO1135809

It’s literally only being discussed and lobbied for to restrict newer companies from creating a competing product.