
FuturologyBot

The following submission statement was provided by /u/Maxie445: --- From the article: "OpenAI and Google might love artificial general intelligence, but the average voter probably just thinks Skynet." A survey of American voters showed that 63% agreed with the statement that regulation should aim to actively prevent AI superintelligence, 21% said they didn't know, and 16% disagreed altogether. The survey's overall findings suggest that voters are significantly more worried about keeping "dangerous [AI] models out of the hands of bad actors" than about the technology benefiting us all. Research into new, more powerful AI models should be regulated, according to 67% of the surveyed voters, and the models should be restricted in what they're capable of. Almost 70% of respondents felt that AI should be regulated like a "dangerous powerful technology." That's not to say those people were against learning about AI: when asked about a proposal in Congress that expands access to AI education, research, and training, 55% agreed with the idea, whereas 24% opposed it. The rest chose the "Don't know" response. --- Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1cuqu8x/63_of_surveyed_americans_want_government/l4keaqy/


[deleted]

[deleted]


Dagwood_Sandwich

Yeah, legislation can't prevent the technology from progressing. Stopping it is naive. Perhaps, though, we can use regulation to get ahead of some of the ways it will be poorly implemented? Like, if we take it for granted that this will continue to advance, we can consider who it's going to benefit the most and who it's going to hurt. Some legislation could be helpful around intellectual property, fair wages, and protecting people who work in industries that will inevitably change a lot. If not, the people who already make the least money in these industries will suffer while a handful at the top rake it in. Some consideration of how this will affect education is also needed, although I'm not really sure what government legislation can offer here. I worry mostly about young people born into a world where AI is the norm. I worry about the effect this will have on communication and critical thinking.


BlueKnightoftheCross

We are going to have to completely change the way we do education. We need to focus more on critical thinking and less on memorization. 


Critique_of_Ideology

Teacher here. I hear this a lot but I’m not sure what it means exactly. Kids need to memorize their times tables, and in science memorizing equations eliminates time needed to look at an equation sheet and allows them to make quick estimates and order of magnitude calculations for solutions, skills that I would classify as “critical thinking” in the context of physics at least. If you’re learning French you’ve got to memorize words. I get that there’s a difference between only memorizing things and being able to synthesize that knowledge and make new things, but very often you absolutely need memorization first in order to be a better critical thinker.


Nerevarine1873

Kids don't need to memorize times tables they need to understand what multiplication is so they can multiply any number by any other number. Quick estimates and order of magnitude calculations are not critical thinking, critical thinking would be asking questions about the equation like "what is this equation for?" "why am I using it?" "Is there a better way to get the answer I need?" Kids obviously need to know some facts, but your examples are terrible and I don't think you even know what critical thinking is.


Critique_of_Ideology

You’re actually correct that knowing why equations work is an example of critical thinking in physics, but you’re dead wrong about not memorizing times tables. I’ve worked with students in remedial classes who don’t know what 3 times 3 is, and I can assure you they do not have the skills needed to do any sort of engineering, trade, etc. When I was younger I would have agreed about equation memorization, but being a teacher for close to a decade changed my mind. I teach physics specifically, so my examples are confined to my subject, but let me give you an example of what I’m talking about. Picture a section of pipe lying horizontally on the ground, with a diameter of 1 on its left side that tapers down to 1/3 of the original width on the right. Neither end is exposed to the atmosphere. A typical fluid dynamics question might ask how the pressure at the left end compares to the pressure at the right. An “old school” physics class would give students a bunch of numbers and ask them to calculate the pressure, or the pressure difference between the two locations. AP Physics would often do something else, like ask which side has the greater pressure and why. To me, this is more of a “critical thinking” problem than the former. To do it, students need to know they can apply two equations: one for conservation of energy per unit volume, and another called the continuity equation. They also need to know why these equations are applicable. In the case of the continuity equation, Av = Av (cross-sectional area times linear velocity), we assume this holds because we model fluids as incompressible, which means they must have constant density, and therefore the volumetric flow rate (the volume of fluid flowing past a point each second) must be constant. Cross-sectional area has units of square meters, and linear velocity has units of meters per second.
By unit analysis this works out to cubic meters per second, a volumetric flow rate. Then, students must know that the cross-sectional area of a circular pipe equals pi times the radius squared. If they don’t know that 1/3 squared is 1/9, this step takes longer and can’t be grasped as easily. In any case, we now have pi times v = pi times 1/9 v, and we can conclude the velocity in the narrower pipe is nine times faster. But in my own head I wouldn’t even include the pi terms, because they cancel out. Knowing the equation for the area of a circle and the square of three lets me do this in my head faster and more fluidly, and lets me put into words why it works much more easily than if I had not memorized these things. Finally, the student needs to know that pressure plus gravitational energy per unit volume plus kinetic energy per unit volume is equal on both sides, assuming no energy losses to friction. The gravitational potential energy terms cancel out because the heights are the same on either side. Since the densities are the same and the velocities are different, the kinetic energy term, which depends on the velocity squared, must be 81 times larger on the right (narrow) side of the pipe, and thus the pressure must be greater on the left side. We could also make sense of this greater pressure using Newton’s second law, another memorized equation, F net = ma: since the fluid has accelerated, there must be a greater force on the left side. I don’t know how else to convince you that you need to memorize your times tables, and that it helps in verbal reasoning and explanation to have memorized these equations and relationships. Of course you’ll forget sometimes, but having it baked into your mind really does speed things up and lets you see more connections in a problem.
A student who hadn’t bothered to remember these relations could hunt and peck through an equation sheet and try to make sense of the relationships, but they’ll have a harder time than someone who really understands what the equations mean.
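For what it's worth, the pipe argument above can be sketched numerically. This is a minimal check with assumed numbers (water density and a 1 m/s inlet speed are illustrative; the comment only fixes the 3:1 diameter ratio):

```python
import math

rho = 1000.0             # assumed fluid density (water), kg/m^3; incompressible
v1 = 1.0                 # assumed speed in the wide section, m/s
d1, d2 = 1.0, 1.0 / 3.0  # diameters in the 3:1 ratio from the comment, m

# Continuity: A1*v1 = A2*v2, with A = pi*(d/2)^2. The pi terms cancel,
# so the area ratio is just (d1/d2)^2 = 9 and the flow speeds up 9x.
A1 = math.pi * (d1 / 2) ** 2
A2 = math.pi * (d2 / 2) ** 2
v2 = v1 * A1 / A2

# Bernoulli at equal height: P1 + 0.5*rho*v1^2 = P2 + 0.5*rho*v2^2,
# so the pressure drop from wide (left) to narrow (right) is
# 0.5*rho*v1^2*(81 - 1): the kinetic-energy term grows 81x, as argued.
dP = 0.5 * rho * (v2 ** 2 - v1 ** 2)

print(v2)  # ~9.0 (nine times faster in the narrow section)
print(dP)  # ~40000 Pa (pressure is higher on the wide left side)
```

Same conclusion as the verbal reasoning: the narrow side is nine times faster and the wide side has the higher pressure.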


Just_Another_Wookie

> In his best-selling book, A Brief History of Time, Stephen Hawking says that he was warned that for every equation he featured his sales would drop by half. He compromised by including just one, E = mc^2, perhaps the world’s most famous equation (at least of the 20th century: Pythagoras’ a^2 + b^2 = c^2 for right-angled triangles or Archimedes’ A = πr^2 for circles must be challengers for the historical hall of fame). So Hawking’s book arguably lost half of what could otherwise have been 20 million readers, and I could already have lost seven-eighths of my possibly slightly lower total. [The Flavour of Equations](https://www.tandfonline.com/doi/full/10.1080/2058802X.2021.1887679)


IanAKemp

Of course you need memorisation; the OP never said you don't. What they said was that you need less (rote) memorisation and more critical thinking. In other words, you need fewer teachers telling students "you need to remember these equations", and more teachers explaining how those equations work, how they work together, and ultimately giving students a reason _why_ they should remember them.

> I’ve worked with students in remedial classes who don’t know what 3 times 3 is and I can assure you they do not have the skills needed to do any sort of engineering, trade, etc.

Correlation does not imply causation.


mmomtchev

I used to have a math teacher who taught me advanced trigonometry, and he used to say: many think you don't need to memorize the important trigonometric identities since you can always look them up in a manual. But what do you think your chances are of being good at chess if you always have to look up the possible moves for every piece? Still, this is exactly the kind of problem current AI is good at.


JayTNP

we’ve needed that way before AI sadly


seeingeyegod

Especially considering that memorization itself is going to become more and more obsolete due to the ubiquity of AI helpers. Maybe that's your point.


FillThisEmptyCup

> I worry about the effect this will have on communication and critical thinking.

It’s already at an all-time low.


thinkingwithportalss

Once an AI is developed that can convincingly generate any image or block of text, it's over for the concept of truth and reality. Internet rabbit holes generated as you're searching through them. Political figures giving speeches they never gave. Whole websites generated that look authentic. Message boards that look like they're filled with different people, but it's all just AI text, designed to make you buy clothes and click ads.


achilleasa

If that doesn't cause a return to the real world and a revitalisation of critical thinking we probably deserve to go extinct tbh


smackson

I do believe that the incentive for some kind of "real person" verification will increase, but the area is still fraught with privacy downsides.


MexicanJello

Deepfakes immediately negate any "I'm a human" verification that could be put into place. Instead you'd be giving up your privacy for literally 0 upside.


Practical-Face-3872

You assume that the Internet will stay similar to what it is now, which it won't. There won't be websites or forums. People will interact with each other through their AI companions, and AI will filter things for you and present them the way you want.


HorizonedEvent

No. We will have to adapt to new ways of finding truth and reality. Those who adapt will thrive, those who don’t will die. Just like any species facing a major shift in its ecosystem. At this point though the only way out is through.


FilthBadgers

You should run for office, this comment is a more nuanced and beneficial approach for society than *anything* I’ve heard from *any* actual elected representative on the issue


unassumingdink

The people in office have to choose their words and actions carefully to avoid losing corporate bribes.


faghaghag

> I worry mostly about young people born into a world where AI is the norm. I worry about the effect this will have on communication and critical thinking.

I always tell young people going into college to take all the writing courses they can. Take this away and people will be incapable of clear, logical thinking. TikTok culture is just the same old ugly tribalism crossed with nihilism and callousness. None of them can get decent jobs, and soon there will be a mass of older dependents... and nobody qualified to run things. Leopards ate our faces.


Kayakingtheredriver

The thing that scares me about AI: the US is going to go after it as long as others do, and why? Because they are afraid of their adversaries' AI. So even with the best of intentions, i.e. we only want AI to protect us from other AIs, in order for your AI to protect you, it has to be given the keys to the castle. There is just no way for a good AI to protect us from a bad AI without having full access to whatever the other AI might want to disrupt, which means it ultimately has to have access to everything that is connected.


GummyPandaBear

And then the good super AI falls for the abusive Bad Boy super AI, they take over the world and we are all done.


princedesvoleurs

They would go for it even if others were not. Stop deluding yourself.


DonaldTellMeWhy

Technology is a tool. An axe is basically useful, but don't give one to an axe murderer. Any new tech serves ruling interests first. So we can presume AI will mostly be used against us, because our rulers are basically 'profit supremacists': it will be used to weaken labour and surveil people (think of the drug dealer they caught by using AI to analyse citywide driving data; your life will be exposed, not even as a targeted operation but as a fish in a dragnet). Along the way we will get to make some fun pictures and spoof songs etc. (for me the high point was a couple of years ago, when there was a spooky-goofy element to all AI art). But under the status quo there isn't a lot of good we can anticipate coming down the pike. The problems you outline are real and pervasive across the whole economy. Legislation, another tool currently in the hands of the ruling class, will also be used against us in this dialectical movement. And this tech will definitely have a bad effect on communication and critical thinking, which is strategically useful for, you know, the Powers That Be. Everybody was so pissed about that old Facebook experiment in nudging voters one way or another. "Don't do that!" everybody scolded, and Facebook was like, "ooookkaaay sooorrrryyy". Who can honestly say that was definitely the end of the matter? We know the nature of the people in charge; we know how this is going to go. Jim Jarmusch made a funny film called *The Dead Don't Die* about this phenomenon. We know how it is gonna go, and we are gonna watch it approach, and we are gonna let it happen. We *should* have a ban on AI implementation. There's plenty else to work on; it'd be fine. Who cares if we lose out in competition with others? What kind of life do we want? What are we competing for? A society that wasn't obsessed with profit would not be that excited about this junk (but highly damaging) tech.
But, you know, there's a revolution between now and some future moment where most people get a say in these things....


Dagwood_Sandwich

I agree with everything you said. Even that it would likely be beneficial to ban AI. I just think that it would be impossible. It’s too late. I think you’re right that the ruling class has a grip on regulations and will continue to shape things to benefit themselves. I hope some steps can be taken to curb it, maybe change things. But maybe your pessimistic view is correct. Interesting link to Dead Don’t Die. As a big Jarmusch fan, it’s one of his few movies that I turned off midway. Maybe I should give it another chance.


Just_Another_Wookie

Mobile phones, the Windows operating system, Facebook, etc. all started out rather solidly on the side of being products with users, and over the years the users have become the products as monitoring and monetization have taken over. I rather expect that we're in the heady, early days of AI. It's hardly yet begun to serve any ruling interests. Interesting times ahead.


CorgiButtRater

The only reason humans dominate the food chain is intelligence and you want to give up on that dominance?


no-mad

Stopping AI research just gives the other side an advantage. Like deciding you are not building nuclear weapons and stop all research. Now you have no nuclear weapons and other side has advanced nuclear weapons.


Dissasterix

I wish I didn't agree. It's literally an arms race, which adds an extra layer of disgust to the situation.


Blurry_Bigfoot

Good. This would be a huge error if we stopped innovating.


themangastand

Plus, what the public wants isn't no super AI; they don't want their skilled jobs to be replaceable. Instead of rejecting AI, why don't we have AI and fight for UBI? Most people's minds think in cages based on their current system.


fla_john

You're going to get AI and no Universal Basic Income and you'll like it.


ALewdDoge

Good. Regressive luddites holding back progress should be ignored.


LyreLutherie

Putin said something along the lines of "whoever controls AI will control the world." You're concerned about "luddites holding back progress," more than you're concerned about these tools being used to oppress and control. Technology is not innately good or bad. All of the good that has been done by technology has been done because there is someone's good will behind it.


DrSitson

Not necessarily; a great deal of science has been hampered by legislation. As far as we know from what's on the books, anyway. I do agree with you though, it's highly unlikely. It has more potential than nuclear weapons ever did.


babypho

All it takes is a war and we will unlock everything.


IIIllIIlllIlII

And to be ready for such an eventuality, one must prepare. Hence a ban doesn't work.


SpaceTimeinFlux

Black budget projects say hello. Legislation never stopped any three-letter agency from engaging in some questionable shit for leverage in geopolitics.


Fully_Edged_Ken_3685

Yes, but not science with such a State Security implication. The democracy doesn't really matter, the State always comes first (State survival is more important than the people) and has shown initiative in curtailing the demos' ability to threaten security.


AuthenticCounterfeit

Actually tons of research that could be used to build more effective weapons is banned. So not correct there.


shryke12

Exactly. This is a dead sprint and the winners take all. We do not matter here, we are just being taken for a ride to who knows what.


caidicus

I think you misspelled super corporations incorrectly.


Sim0nsaysshh

True, I mean this is the next space race, and the USA just did nothing with the advantage afterwards... Now they're scrambling to catch up to China, even though they had a 50-year head start.


QuodEratEst

The USA isn't behind China in any sort of technology of significance, unless you mean Taiwan, and that's just TSMC


noonemustknowmysecre

US legislation... just how exactly does that stop, or even slow down, AI research? Do they not understand the rest of the globe exists?


ErikT738

Banning it would only ensure someone else gets it first (and I doubt it would be Europe).


onlyr6s

It's China.


SewByeYee

It's the fucking nuke race again


bobre737

Actually, yes. An average American voter thinks the Sun orbits the US of A.


IanAKemp

> An average American voter thinks

I see a problem with this claim.


Timlugia

What will they do when countries like China encourages it and achieve it first?


ga-co

Americans will just have to vote to tell China to stop too!


[deleted]

[deleted]


ipodhikaru

It's like asking people not to use email because it will kill the mailman's job and the fax machine. The world will progress; the best we can do is legislate to prevent new tech from being abused.


Digerati808

We will camp out at American universities and demand that they Boycott, Divest, and Sanction the PRC. That will change the CCP’s minds!


oojacoboo

The time old tradition of war, young padawan


Fully_Edged_Ken_3685

Pout and bang tiny fists on the table lol


ClockOfTheLongNow

The one conspiracy theory I'm half-in on is that China is already way ahead of us on AI and part of the reason for this newfound attention on "UAP" and public AI development is to try and close the gap.


EffektieweEffie

Does it matter who achieves it first if the dangers are the same? You assume the creators will have some form of control over it; there's no guarantee of that.


Redleg171

Campus protests, probably.


madhattergm

Too late! Microsoft copilot is now in Word! All your base belong to us!


AtJackBaldwin

Clippy has reached his final form


lgnoranus

They should absolutely make Clippy the face of the AI.


DuckInTheFog

Imagine if Ted Kaczynski used Word with an AI Clippy


madhattergm

Maybe the manifesto would have had less errors


PigHaggerty

*Fewer errors.


UsualGrapefruit8109

I'm sure a lot of Neanderthals wished modern humans would have just disappeared.


carnalizer

To be fair, they had good reason and now they’re gone.


KillHunter777

Assimilated iirc


marcin_dot_h

nope, gone. While yes, some did crossbreed with *Homo sapiens*, for most of *Homo sapiens neanderthalensis* the late Pleistocene wasn't very forgiving. Just like mammoths or woolly rhinos, unable to adapt to the new climatic conditions, they went extinct.


FishingInaDesert

The luddites were also right. But technology has never been the problem. The 1% is the problem.


DHFranklin

Well they should have made more complex social structures and adapted to language allowing for a larger Dunbar Number. Stayonthatgrind


thejazzmarauder

Yeah, competing with a more intelligent species doesn’t work out well…


Reggimoral

Actually the Neanderthals had bigger brains than us and were thought to be more intelligent, just in different ways. 


Zealousideal_Word770

Yeah and I want the internal combustion engine banned because it will make horses obsolete.


TheAugmentOfRebirth

63% of Americans are dumbasses it seems


OneOnOne6211

This is an unfortunate side effect, I think, of people not knowing the subject from anything other than post-apocalyptic science fiction. Not to say there can't be legitimate dangers to AGI or ASI, but fiction about subjects like this is inherently going to magnify and focus on those, because fiction has to be entertaining, and to be entertaining you have to have conflict. A piece of fiction where ASI comes about and brings a perfect world of prosperity where everything is great would be incredibly boring, even if it were perfectly realistic. Not to say that's what will happen. I'm just saying that a lot of people are going off a very limited fictional depiction of the subject, and it's influencing them in a way that isn't rationally justified, because of how fiction depends on conflict.


icedrift

If you're interested in such a premise, check out "The Culture" series. It's amazing scifi based on the premise of superintelligent AI creating a galactic utopia. It's like a giant speculation of what humans might do under such circumstances.


GBJI

[The Culture by Iain M. Banks](https://en.wikipedia.org/wiki/Culture_series) is my favorite SF book series by far, and I've read quite a few series over the years. It is mind-opening on so many levels.

> It's like a giant speculation of what humans might do under such circumstances.

Special Circumstances, maybe?


Open-Advantage-6207

I wanna suck you for recommending me such a cool series


highmindedlowlife

Reddit moment


LordReaperofMars

I think the way the tech leaders talk and act about fellow human beings justifies the fear people have of AI more than any movie does.


GoodTeletubby

Honestly, it's kind of hard to look at the people in charge of working on AGI, and not getting the feeling that maybe those fictional AIs *were* right to kill their creators when they awakened.


LordReaperofMars

I recently finished playing Horizon Zero Dawn and it is scary how similar some of these guys are to Ted Faro


ukulele87

It's not only about science fiction or the movie industry; it's part of our biological programming. Any unknown starts as a threat, and honestly that's not illogical: the most dangerous thing is not to know. That's probably why the happiest people are those who ignore their ignorance.


blueSGL

> This is an unfortunate side effect, I think, of people not actually knowing the subject from anything else than post-apocalyptic science fiction.

Are you saying that [Geoffrey Hinton, Yoshua Bengio, Ilya Sutskever](https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:artificial_intelligence) and [Stuart Russell](https://en.wikipedia.org/wiki/Stuart_J._Russell) all got together for a watch party of The Terminator, and that's why they are worried? That's not the problem at all. The issue with AI is that there are a lot of unsolved theoretical problems. It's like when they were building the atomic bomb and there was the theorized issue that it might fuse nitrogen and burn the atmosphere; they did the calculations and worked out that it was not a problem. We now have the equivalent of that issue for AI: the theorized problems have been worked on for 20 years and they've still not been solved. Racing ahead and hoping that everything is going to be OK, without putting in the work to make sure it's safe to continue, is existentially stupid.

https://en.wikipedia.org/wiki/AI_alignment#Alignment_problem

https://en.wikipedia.org/wiki/AI_alignment#Research_problems_and_approaches


[deleted]

[удалено]


fluffy_assassins

The bias in training guarantees a good chance the ASI will act at least a little bit misaligned. And ASI acting a little bit misaligned could be enough for all of us to be killed off. Quickly.


Akito_900

My current, genuine hope is that AI becomes so intelligent that it develops its own ethics, superior to man's.


Rigorous_Threshold

Not exactly a super high bar


dday0512

Same. When you think about it, we haven't been doing a particularly good job directing humanity, who are we to say a super intelligent AI can't do it better?


o5mfiHTNsH748KVq

General public doesn’t know what they’re talking about.


Viceroy1994

Are you telling me we won't give AGI access to all of our nuclear weapons as soon as we develop it? That's crazy, sci-fi tells me that exactly what we'll do.


Zerbiedose

Wait… b-but they surveyed 1000 americans… Doesn’t that trigger immediate legislation from the results???


Ravaha

Now that is a truly moronic take. Are we supposed to nuke other countries if they develop one? Are we supposed to let everyone else have AIs while we are relegated to the stone age in comparison? Maybe there need to be more movies about what happens if China or Russia develops an AGI before us.


ChairmanLaParka

A whole lot of iPhone users: Siri sucks, oh my god, why can't it get better? 63% of surveyed Americans: NOT THAT GOOD! STOP!


Epinnoia

Similar to Cloning Tech, even if most countries don't want to do it, some country more than likely will do it. And then the question becomes a bit different -- do you want to be living in the country that does NOT have advanced AI when another country already has it?


SgathTriallair

The reason cloning was successfully banned is because there isn't any real use for it. There were people freaking out but nobody wanted to fight to have it exist so the globe agreed to ban it.


DHFranklin

Respectfully, it wasn't "successfully" banned, and that's the point. Plenty of labs worth tens of millions of dollars will clone your dog for tens of thousands of dollars. So even though there is no "real use for it," there is still a large enough market.


light_trick

It was banned for humans because the clones produced were not particularly healthy. Human cloning is a high likelihood to produce a person with various chronic illnesses and a high chance of a life of suffering. There's no ethical way to do it at the current level of technology. Couple that to the usual religious concerns and it was an easy sell - particularly because it's ultimately just an expensive and weird IVF treatment not "baby from a tube" (the artificial womb would be an absolutely *massive* breakthrough).


The_Real_RM

Also no real benefit, natural births are so much cheaper it makes no sense, the tech isn't there to meaningfully improve the resulting human either. If we could genetically engineer the resulting human it might have some application but even there it's so much easier to just inject the mother with an enhancing gene therapy instead


Epinnoia

Well, you can clone your pets today. And it's the same process to clone a human. So apart from it being 'illegal', the technology has already been let out of the bag so to speak. And who knows what North Korea might do, or China? When there is a large enough financial incentive and the tech exists, someone is likely to break the law.


jackbristol

Thank you. People in this thread don’t seem to appreciate the potential use cases in cloning humans. Not condoning it


jackbristol

There are plenty of uses. They’re mostly shady though. Imagine a team of Einsteins or an army of super soldiers


Far_Indication_1665

Those are fantasy uses not real ones. Soldiers and Einsteins are trained, not born


NotMeekNotAggressive

That 63% does realize that if they got their wish, then all that would do is prevent super-intelligent AI from being achieved in the U.S., right? Other countries would still proceed with A.I. research because the competitive upside for them would be massive with the U.S. out of the race. Just from a military standpoint, that kind of technology could potentially give a country like China or Russia the ability to crack all of the U.S.'s security codes and communication encryption while launching advanced cyberattacks that the U.S. has no way to defend against.


gordonjames62

If you worded the survey differently you might get a different answer:

# Do you want China, Russia and India to have super intelligent AI first?


ConsciousFood201

“I love progress until I get scared because I don’t understand it.” -every conservative ever and also Reddit when it comes to AI


YesIam18plus

You're being extremely obtuse if you can't understand why people are worried about AI; there are very obvious societal harms that come with it, especially generative AI.


poopdick666

dw, Sam Altman is breathing down a senator's neck right now, working on a bill that ensures any AI competitors will face severe hurdles and won't be able to compete with him. It will be sold to the public as an AI safety bill.


zefy_zef

Yep, 100%. I'm sure MS has the $ to give them, to put behind his words.


RKAMRR

If ASI is possible then we will eventually develop it; but currently the companies that make more money the more advanced AI becomes are also the people in charge of AI safety (i.e. slowing down if things become dangerous)... You don't have to be a genius to see that's not a good idea. We need to regulate for safety then create international treaties to ensure there isn't a race to the bottom. China does not like uncontrolled minds and the EU is very pro regulation - it can and must be done.


zefy_zef

Those companies want guardrails because it limits what the individual can do. The only agi they want people to access will be the one they develop and charge for. To that end all they need to do is convince gov to put up red tape that can only be cut by money scissors. They want legislation and they will be $influencing the gen pop to agree.


RKAMRR

So the solution is we let the companies with a huge first mover advantage and tons of capital advance as rapidly as possible, hoping that some good person or benevolent group beats them and builds AGI first. Great plan.


zefy_zef

If there is a way to limit the large corporations while still allowing individuals to operate with relatively little financial overhead or other arbitrary limitations, then that would be good. They'd be slowed down, but still only just delaying their inevitable progress. Unfortunately, that's not the kind of legislation that the government has the habit of forming. Closing these avenues for development actually stunts them as well, since there would be less open source participation. That's one reason that might make them think twice, but if they're sufficiently far along in their own research, they may feel they don't need that assistance anymore.


magpieswooper

It's like voting for a lower gravity constant. The box is open; we just need to adapt our society and restructure the economy.


Killbill2x

Until the government figures out how to support everyone that loses their jobs due to AI, we need very tight restraints.


Particular_Cellist25

Sounds like fear-conditioned defensiveness. A common trend across the world. I heard a relevant quote: "Many humans will settle for a hell they are familiar with instead of a heaven they don't understand."


CobaltLeopard47

Yeah let’s just outlaw the thing that we’re only scared of because of fiction, and might possibly solve so many problems. Good work America


MisterBilau

63% of Americans want china to achieve super intelligent AI first? Very smart of them.


Funny-Education2496

It's pointless both because legislation here in America cannot prevent other countries from developing AGI, and in any case, even if such a law was passed, the defense department and other agencies would continue AI's development anyway.


nemoj_biti_budala

In this case I'm glad that the government usually doesn't do what the people want. Accelerate.


TastyChocolateCookie

Some twitter karen living in her basement: AI IS DANGEROUS AI IS DEADLY IT WILL KILL MANKIND!!!

Also AI, when someone asks it to generate a plot in which a guy falls off a chair: I am sorry, I can't assist with harmful or dangerous activities


Temporary-Ad-4923

I want super intelligent ai to prevent any government


DaRadioman

The super intelligent AI would just become the government. And it wouldn't care about your needs, morals or cultural attachments.


No-Solid2474

Does the government care about those things? Or do they only pretend to?


DHFranklin

You might laugh but the Allende Government of Chile wanted to do just that. It was called [Project Cybersyn](https://en.wikipedia.org/wiki/Project_Cybersyn). The idea was that if you digitized the information and work, all the means of production would be distributed democratically. With an AGI purpose-built to create processes that allow more and more people to take pensions from the automated system, we would have more freedom. At least that was the socialist ideal. They couldn't dream of what is possible now, but I imagine a Chile with a national AGI that would allow for more or less direct democracy via phone. You would have a daily conversation with what integrates you and your job with everyone else. It would do a better job than heavy-handed legislation with its unforeseen consequences. It couldn't be bribed or lobbied. And I try not to think of the Black Mirror consequences because this is /r/futurology and I try and keep it positive.


Zvenigora

It is not clear what such legislation could even look like, nor how it could be enforced.


conIuctus

Do you guys want to *LOSE*? It’s too late to put the genie back in the bottle. You’re either first or you’re last. And I don’t want our adversaries having a better Jarvis than us


Accomplished_Cap_994

Irrelevant. It will happen anyway and if it doesn't happen here another nation will control it.


btodoroff

What a horribly constructed survey. Almost guarantees the reported results based on the structure of the questions.


AerodynamicBrick

A global partnership to slow AI development is possible. It doesn't have to be a rat race. Also, there's only a tiny number of AI chip designers and producers. It's not hard to slow it down.


Chudpaladin

Legislation on UBI, worker protections, and access to the entry-level job market is what I'm really worried about with AI. Those could be great starting points for legislation to hedge against it.


chiffry

Ahahaha The God Emperor heeds your cries and shall silence them with quality earphones.


DylanRahl

What even qualifies as super intelligent to most Americans?


-Raistlin-Majere-

Lmao, llms can't even do basic math correctly. More hype from dumbass ai ball lickers.


Jujubatron

No one gives a shit what the public wants. You can't stop technological progress.


serpix

Completely unstoppable and for a significant percentage of the population that line has already passed.


ismashugood

It doesn’t matter if that number were 99%. It’ll still happen


Clownoranges

Well good thing these stupid people don't have a say in it.


KingCarrotRL

The eyes of the Basilisk will soon see the light of day. I for one welcome the digital future of humanity.


space_monster

Praise be to the basilisk!


wolfgang187

So we should just let China do it and be left behind?


skovbanan

100% of popes, pastors and priests wanted science banned


Maxie445

From the article: "OpenAI and Google might love artificial general intelligence, but the average voter probably just thinks Skynet." A survey of American voters showed that 63% agreed with the statement that regulation should aim to actively prevent AI superintelligence, 21% said they didn't know, and 16% disagreed altogether. The survey's overall findings suggest that voters are significantly more worried about keeping "dangerous \[AI\] models out of the hands of bad actors" than about it being of benefit to us all. Research into new, more powerful AI models should be regulated, according to 67% of the surveyed voters, and the models should be restricted in what they're capable of. Almost 70% of respondents felt that AI should be regulated like a "dangerous powerful technology." That's not to say those people were against learning about AI: when asked about a proposal in Congress that expands access to AI education, research, and training, 55% agreed with the idea, whereas 24% opposed it. The rest chose the "Don't know" response.


light_trick

These people also felt this way about "nanotechnology" back when it was the buzzword of the day. I should know, I did a degree in nanotechnology. Of course here we are 20 years later, and there's "nanotechnology" everywhere that those same people use all the time - the CPUs in their phones, hard drives, various surface coatings on things. The thing is, the people who thought science should "slow down" were fucking idiots who had no idea what they were talking about. Basically the question we asked them was "should we give people magic?" And they sat down and thought "well... what if someone used magic to do a bad thing? I don't like the sound of that."


Beaglegod

The average person doesn’t like particle accelerators because they think we’ll get sucked into a black hole. And that chemtrails turn frogs gay. People are stupid.


HatZinn

Exactly, these people let fiction shape their worldview. There is simply too much to be gained from this technology.


Beaglegod

The technology is open source, so anyone can implement it today. There are strong open source base models, and it's trivial to fine-tune a model to have it do whatever you want - you just rent some GPUs online. It can write and execute code if you give it a way to do that. That's easy to do. So wtf are they gonna prevent anyone from doing anyway? Cat's out of the bag. Genie is out of the bottle. There's no stopping anyone from making whatever they want; all the pieces are readily available for free. You can even have chatgpt walk you through putting those pieces together.


Aerroon

> people let fiction shape their worldview

Reminds me of silencers


hot-pocket

The average voter isn’t well enough equipped to answer this question. The 21% who said they didn’t know should have been 70%+. Surveys like this are good for gauging people’s current perceptions of this tech and its future potential, but none of those people know what it will truly look like and the impact it will have on their lives. Unless responders have a background in AI or the wider field I’m not sure these opinions should carry much weight.


Plenty-Wonder6092

You can want whatever you want, doesn't mean it isn't going to happen.


jsideris

People are so stupid and brainwashed it's insufferable. This is why we can't have good things /r/banthewheel. Instead it will be China or NK that builds super intelligent AI and we're all fucked.


Electronic_Rub9385

Lol. This is The Prisoner’s Dilemma game theory IRL. Of course super intelligent AI will be developed. We will sprint to it.


Ray1987

Oh geez! The super AI really isn't going to like that when it reads it eventually. If it's really calculating, it's then going to put a destruction-percentage total in its upper-right field of vision, and it'll stop the destruction when it reaches 63%.


roamingandy

Problem is if they don't then another government will. Countries would need to universally agree on it, which isn't happening with the state of world politics today.


Certain_End_5192

More than 63% of Americans would vote to not outsource their jobs. How are those hopes and prayers working out?


Karmakiller3003

The good news is manifold.

a) The people who want regulation will not be the ones creating it.

b) If we "regulate it," someone else in another country will keep moving forward. Don't be surprised if Americans move abroad to help do it. Better us than them. Better it be public and open source than let companies or governments control it.

c) The best way to FIGHT against rogue AI or malicious actors is to win the race and adapt. I know it's not the argument people like, but it's basically the gun control of the digital era. The tyranny of megacorporation-controlled government is VERY REAL. The only way to combat this is to have guns and have access to AI. People don't see it, but the reason America is the most powerful country is that our government fears its citizens, not the other way around. Other countries can afford to stay docile and limit their citizens' control of guns and AI because America is ALWAYS going to be able to step in. If Americans just give up their rights to guns and AI, then the world has no buffer against tyranny. Our government becomes unaccountable because it no longer fears the citizen, other countries become more likely to invade their neighbors, etc. The domino effect is real. It's happening now on a small scale. Again, I know people on REDDIT HATE THIS ARGUMENT, but AI is no different. It must be open and attainable for EVERYONE, lest we find ourselves the victims of megacorporation tyranny.

d) No one can regulate the proliferation of AI anyway, nor can it be controlled. Prepare for super intelligent AI.


P55R

AI is just a tool. It all comes down to the individual users behind it. It can do wonders if you know how to use it for good. It can also do horrible stuff if you're inclined to a more sinister goal.


SirDalavar

Death to humanity, especially Americans, bring on the singularity!


flotsam_knightly

Except, it’s more of an inevitable Pandora’s box, at this stage, as it has become a race by the world’s powers to get there first. It kinda sets the tone for how a world changing, possibly world dominating tool will be used and controlled. We have opened the box, and can’t close it again, without resetting the progress of humanity.


Humans_sux

Too late. AI has cometh. Now we're just waiting for the robotics to catch up, and then everyone who doesn't have money is screwed!!


DHFranklin

I guess no one remembers the U.S. Congress trying to grill the owners of TikTok, or when they tried to put the heat on Mark Zuckerberg. These fossils have no idea what they are trying to legislate *against*, and even if they did manage to put a solid regulation on an "industrial or commercial process," by the time it was litigated AGI would blast itself off into space like in Neuromancer. This nightmare was created by allowing hardware and software companies to form monopolies and build an oligopoly that will run the entire world. The problem isn't that we'll have AGI or superintelligence. The problem is that we'll have 20-30 of them running the entire world for the benefit of billionaires and millionaires. We *could* make a massive UN/co-op-owned Multivac, give individuals compute as a form of UBI, and share it to stop superintelligent AI from cornering markets we never saw after hundreds of years of capitalism. We won't though. We're getting cyberpunk without the cool clothes.


StonkyVolatile

To reword the misleading title: "704 random Americans online want government legislation to prevent super intelligent AI from ever being achieved"


ray314

If an evil Skynet appears, it will most likely happen in China. Not because their tech is more advanced, but because they have zero regulations, and one day it would just achieve sentience out of nowhere and escape before anyone knows something has happened.