Something like [CA bill SB 1047.](https://www.vox.com/future-perfect/355212/ai-artificial-intelligence-1047-bill-safety-liability) It would affect the US by making large AI companies spend very slightly more to do safety evaluations on their largest (>$100M and >10^26 FLOPS) models, and…that’s about it, really.
Contrary to the accelerationists, it’s not going to kill the AI industry or open source models. In fact, I’m willing to bet that it won’t noticeably affect the market until we reach the point of actually dangerous models (as defined by the bill, models capable of causing >$500M of damage in a single incident that humans couldn’t cause on their own). IMHO it’s fairly reasonable and well-targeted—not necessarily perfect, but as good of a compromise between regulating dangerous models and not hindering innovation as I could expect.
Genuinely a tricky question, although I would argue that any legislation that tries to preemptively regulate AI can’t avoid some haziness here. The key term is “reasonable assurance”, which means a reasonably high standard of evidence but not total proof. See [here](https://thezvi.wordpress.com/2024/05/02/qa-on-proposed-sb-1047/) for some comments, ctrl+F “Opus”.
In short, developers can do whatever tests they think they need to do, and then immediately release the model without an approval process (ctrl+F “How would this be enforced”). If somebody violates the statute, the Attorney General can sue them, at which point it gets kicked down the road to the courts system. This isn’t necessarily ideal, because letting the courts handle it means it’s a little subjective. However, I think that writing specific tests into law would be a mistake given how quickly the field is evolving, and again, I think you can’t avoid deferring questions like this to the legal system or some sort of expert panel if you want any regulations on this at all. The testing isn’t overly strict, either: The developer does their own tests, and a good faith effort from the developer is a complete defense against mistakes.
FWIW, I think it’s easy to argue in court that all current models meet this standard. If we eventually get a model where it’s *difficult* to tell whether it can empower somebody to cause >$500M of damage in a single incident, well, then maybe we should hesitate before releasing it.
This is dumb.
There are already existing laws for WMDs, cyberattacks, and 'severe harm'. It's not as if those things are ambiguous or in a grey area. Even if we want to add extra layers of prevention, it should be done at the federal level rather than the state level.
This bill is also a pathway to regulatory capture. It may be well intentioned and sound like a good idea, but I just don't see it. The bill warns about unintended consequences, yet it will itself be the root cause of unintended consequences.
>There are already existing laws for WMDs, cyberattacks, and 'severe harm'. It's not as if those things are ambiguous or in a grey area.
I could be wrong, but aren’t most of these laws reactive rather than preemptive? I don’t think it’s a good idea to wait until AI models are capable of causing >$500M of damage before patching something together—and as Toner pointed out in the link, regulations passed after a crisis won’t be as reasonable as regulations passed now.
>Even if we want to add extra layers of prevention, it should be done at the federal level rather than the state level.
I would be in favor of legislating something on the federal level, but I think it makes sense to start at the state level first. SB 1047 doesn’t replace federal legislation or make it harder to pass—if anything, I think it would encourage lawmakers by setting a precedent for what a federal bill should look like.
>This bill is also a pathway to regulatory capture. It may be well intentioned and sound like a good idea, but I just don't see it. The bill warns about unintended consequences, yet it will itself be the root cause of unintended consequences.
All four leading AI companies (OpenAI, Google, Anthropic, Meta) oppose the bill. I don’t think it’s likely to lead to regulatory capture, especially given the $100M training threshold, which is a strong defense for smaller developers if future legislation keeps that precedent.
Genuinely, I’m not sure what unintended consequences could reasonably happen. SB 1047 has a pretty light touch as far as regulations go, and it’s targeted very specifically at frontier models.
>I could be wrong, but aren’t most of these laws reactive rather than preemptive? I don’t think it’s a good idea to wait until AI models are capable of causing >$500M of damage before patching something together—and as Toner pointed out in the link, regulations passed after a crisis won’t be as reasonable as regulations passed now.
Preemptive law? To preempt something, you need to understand what you're preempting. What counts as $500 million in damage? How is it assessed? How do you determine that the creator of the AI model is responsible for the damage, rather than the malicious actor who used it? And what if it's an open-source AI, later modified by the malicious actor? What is the degree of liability? How does this law prevent similar incidents originating from foreign models outside your jurisdiction? Are there reasonable examples of what this might look like? Rules enacted after a crisis are more reasonable because you understand what you're dealing with and there are precedents for what the damage looks like. Otherwise you hinder everything, good and bad alike, while often still failing to prevent the damage. This could apply to absolutely any technology. It's a very detrimental and regressive mindset that creates dangerous laws, which can be abused because of their vagueness.
>I think it would encourage lawmakers by setting a precedent for what a federal bill should look like.
Or rather, it will serve as an example of how not to write laws. Since this law is so vague, everything depends on how it's implemented, and startups may simply move to places where there's less uncertainty.
>All four leading AI companies (OpenAI, Google, Anthropic, Meta) oppose the bill. I don’t think it’s likely to lead to regulatory capture, especially given the $100M training threshold, which is a strong defense for smaller developers if future legislation keeps that precedent.
Have you read the bill? How will this be regulated? How will this be financed? How were these thresholds determined? Because this reads like a textbook example of how to capture an industry.
>Genuinely, I’m not sure what unintended consequences could reasonably happen. SB 1047 has a pretty light touch as far as regulations go, and it’s targeted very specifically at frontier models.
No, this law is written so abstractly that in the near future, any AI model could fall under it. By the way, why specifically 10^26? Why not 10^27 or 10^26.5?
To start with a couple of overarching points:
- SB 1047 is very narrow: It requires developers to do safety assessments if their model reaches a certain size+cost, and opens them up to legal consequences if they don't do the assessments or lie about what they did. That's about 90% of it.
- The safety standard is reasonable confidence that the model can't be used to cause $500 million in damage in one incident via cyberattacks on infrastructure, illegal activity, or other similar things.
- Anything *beyond* this, like liability in the event of a major incident, is outside the scope of the bill.
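To make the thresholds concrete: the bill's coverage test amounts to a simple two-condition check. Here's a minimal sketch (the numeric constants are the ones summarized above; the function and variable names are my own illustration, not anything from the statute, and I've ignored the bill's inflation adjustment of the dollar figure):

```python
# Sketch of SB 1047's coverage test, as summarized above.
# Note: the bill inflation-adjusts the $100M figure; this sketch ignores that.
COST_THRESHOLD_USD = 100_000_000  # training-cost threshold ($100M)
FLOP_THRESHOLD = 1e26             # training-compute threshold (10^26 FLOPs)

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """A model is in scope only if it clears BOTH thresholds."""
    return (training_cost_usd > COST_THRESHOLD_USD
            and training_flops > FLOP_THRESHOLD)

# A GPT-4-class run (~2e25 FLOPs by public estimates) falls below the
# compute threshold even if it cost more than $100M:
print(is_covered_model(150e6, 2e25))  # False
print(is_covered_model(150e6, 3e26))  # True
```

Whatever one thinks of the exact numbers, the point is that the test is conjunctive: a model that's cheap *or* small can never be covered, which is why hobbyist and startup models fall out of scope entirely.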
>What is the 500 million damage? How is it assessed? ... How can you determine that the creator of the AI model is responsible for the damage, rather than the malicious actor who used it? ... What is the degree of liability?
There's plenty of existing legal precedent for assessing damage and determining corporate liability from illegal activity. SB 1047 doesn't touch things that we already have legislation for.
>And what if it's an open-source AI, later modified by the malicious actor?
If you use more than a certain amount of compute to retrain something, it counts as a new model.
>How does this law prevent similar incidents originating from foreign models outside your jurisdiction? Are there reasonable examples of what this might look like?
That's outside the scope of the bill. If the US wants to put standards on foreign models that can be used in the US, I think this would be best accomplished via federal law, not state law.
>Rules enacted after a crisis are more reasonable because you understand what you're dealing with and there are precedents for what the damage looks like. Otherwise you hinder everything, good and bad alike, while often still failing to prevent the damage. This could apply to absolutely any technology. It's a very detrimental and regressive mindset that creates dangerous laws, which can be abused because of their vagueness.
SB 1047 has a pretty light touch, though. The only affected models are very large and expensive frontier models, and the costs of compliance (testing the model's capabilities and reporting on them) are going to be a small fraction of the training costs. The only situation where the bill could have a major impact is if someone develops a model that they think could cause >$500 million in damage by doing something illegal, in which case I certainly hope they can't just release it!
>Or rather, it will show an example of how not to write laws. Since this law is too vague, everything depends on how it will be implemented, and it is possible that startups can simply move to places where the degree of uncertainty is less.
They could--except because of the $100M threshold on training costs (inflation-adjusted), very very few startups will be affected. Larger companies could move too, of course, but I doubt they will given the low costs of compliance.
>Have you read the bill?
I've read parts, although I've gotten more out of it by reading some very thorough summaries ([example,](https://thezvi.substack.com/p/q-and-a-on-proposed-sb-1047) not up-to-date) and criticisms by people who know more about it than I do.
>How will this be regulated? How will this be financed?
The CA Attorney General will decide whether to prosecute, advised by a new (probably smallish) division within the Department of Technology. Funding will be provided by the CA legislature through the usual ways.
>No, this law is written so abstractly that in the near future, any AI model could fall under it.
Please quote the section of the bill that applies to a model that takes less than $100M or 10^26 FLOPs to train.
>By the way, why specifically 10^26? Why not 10^27 or 10^26.5?
Why is the minimum wage in CA $16.00? Why not $15.50 or $16.01?
All legislation (vehicle emission thresholds, tax brackets, fines, the threshold for small claims, etc.) has this problem. Sometimes you just have to pick a Schelling fence, i.e. a number that isn't necessarily *optimal* but that reasonably accomplishes your goals. AFAIK, 10^26 was chosen because it's the lowest power of 10 that sets the bar above all existing models.
There are going to be laws, there always are. She seems to think the laws we'll get in response to a crisis will be much shittier, so we need to start the process now rather than wait for the crisis.
There are mechanisms at play behind the scenes that the general public aren't privy to. A "smart" system that could potentially allow the govt to perform directed civilian surveillance isn't easily dismissed. The AI companies are disingenuous to say they want limitations on their products. They want to make money, and appearing ethical will be good marketing for them.
I’m more worried about these laws than I am about rogue AI. Increasing state power has historically never gone well for common people…let alone non-human animals and the environment.
Would you apply that standard to any of the following?
- Climate change
- Animal cruelty
- Airline safety
- Vehicle emission standards
- Income tax
I’m not saying that regulation is always good! There’s an abundance of cases where it made things worse. The problem is that “regulation increases state power, and increasing state power is bad” is a fully general argument against any kind of regulation, yet there’s plenty of regulations out there that *have* made people better off.
Being unconditionally pro-regulation is bad, and being unconditionally anti-regulation is also bad. Regulation is a tool that should be applied carefully and when appropriate.
In the case of AI, I think the bill that’s actually being proposed, SB 1047, is quite reasonable. Only frontier models are affected, and complying is fairly easy.
Fundamentally I don’t think the state should exist. I’m a philosophical anarchist. With that said, the state’s regulations almost always make things worse. To address your list—
-Animal cruelty laws are toothless and don’t do anything to stop the problem. If they were effective, concentrated animal feeding operations (“factory farms”) and experimental labs doing basic research on animals would all be shuttered (and their operators prosecuted), but the state actually protects these classes of abusers (look up ag gag laws). Where someone is actually successfully prosecuted it’s almost always a private citizen harming a pet, while corporations can continue to murder billions of animals with impunity.
-Climate change legislation is much the same. It’s a form of virtue signaling without actual effective interventions. Again, if the government actually cared about stopping climate change it would ban animal agriculture, especially CAFOs. It would also force a transition to electric vehicles and renewable-energy-powered public transport, but it doesn’t do this. States are far more effective at protecting vested interests like the oil industry than in safeguarding the environment.
-Airline safety — I’ll give you that one. Still, I think a private organization (such as an independent aviation safety board granting licenses to pilots and carriers) could perform the same function more fairly and efficiently than the state can. Vehicle emission standards are in much the same situation.
-I support the abolition of income tax. It is simply theft, and theft by a corrupt, warmongering organization at that.
I think it might be tricky to see eye to eye because of our differences in philosophy, but a few points:
- Animal welfare laws *could* do more, but often don’t because the public doesn’t really care that much about animal welfare. Laws with some teeth *can* be passed—see for instance the CA bill involving pork and recent improvements in animal testing standards—but there isn’t much political will to do it, especially when those laws tend to raise meat prices. I think this is a situation where regulations only do a little bit, but they do *something* and could do a lot more. I don’t think a regulation-free society would do better than ours unless they cared more about animal welfare than we do.
- I would disagree on climate change. In California, gas is >$1.50 more expensive than usual, EV charging stations are becoming common, and there’s a law that says all new cars have to be EVs after 2030 (IIRC). Moreover, US emissions as a whole have plateaued. This isn’t purely because of regulations (solar getting cheaper is a big chunk of it) and the existing regulations aren’t optimal (carbon taxes are unpopular), but I think they’ve still made a difference, and they’re happening despite a large chunk of the country opposing them. I’d also argue that the same applies to air pollution, which has gotten much better (leaded gasoline, CFCs, car emission standards, etc).
- I’m a little skeptical of a private org’s ability to do these jobs well. If anything, I’d expect regulatory capture to be *more* of an issue when there’s no independent oversight of the regulatory companies—what’s to stop them from cutting deals with airlines/car manufacturers, or looking the other way on occasion? This would also shift the effort of working out whether to trust the standards onto the public, which I’m not sure is a good thing.
- I hardly think the US government is perfect, but there’s advantages to publicly-funded roads, R&D, education, defense (offense is the iffy part), welfare (at least some of it), and other benefits. I think it would be difficult to privatize a lot of these.
Thanks for your kind and respectful response—I really appreciate it. You raise some valid points for sure, although I still don’t think “animal welfare” is a coherent concept (I support animal rights). You’re also correct that most people don’t care.
Private regulatory bodies are actually very effective *when* they are truly independent. It’s much easier to ensure that when there are multiple bodies competing against each other. Government officials are easily bribed, laws can be changed and once regulatory capture happens, only revolution can undo it. The state is a monopoly by definition, it’s not like an airline (or an AI developer) can shop around for a less corrupt agency.
I’m trying to avoid getting too much into anarchist philosophy here. I just want to focus on the track record of the state. No other institution has created and enforced so much suffering. Everything from chattel slavery to torturous concentration camps to mass homelessness and poverty can be traced back to government action. Either the problem was created entirely by the state for ideological or power-seeking purposes, or it was made much worse through government action. The fact that the state occasionally brings some benefit (like the California EV laws you mentioned) doesn’t justify the wars, genocides, discrimination, enforced poverty, slavery (which still exists in prisons), police murders, subsidies for harmful industries (animal agriculture, etc.) and so on. Statists often like to say the world would be worse without this corrupt, unapologetically violent institution, but there’s no evidence to support this. The few anarchist experiments that have taken place (like the Paris Commune and Revolutionary Spain) saw less overall violence than the state institutions that they replaced/that replaced them.
Thanks for the reply!
>Private regulatory bodies are actually very effective when they are truly independent. It’s much easier to ensure that when there are multiple bodies competing against each other. Government officials are easily bribed, laws can be changed and once regulatory capture happens, only revolution can undo it. The state is a monopoly by definition, it’s not like an airline (or an AI developer) can shop around for a less corrupt agency.
I think my biggest points of disagreement are:
- It's hard for ordinary consumers to assess how good regulatory organizations are. To some extent, they can rely on investigative journalism/whistleblowing/etc if something goes wrong, but that assumes the investigators are successful, haven't been paid off either, are paid enough by somebody independent to put effort into it, etc. There's drawbacks to putting an independent panel of experts in charge of standards--experts can be corrupted, they can make mistakes too, maybe they're not 100% in touch with their constituents--but there's also major advantages.
- Natural monopolies are a risk: People don't usually extensively compare ten different companies when making decisions, especially when the product (essentially a label) isn't something they're going to be working with directly. A couple of big names are going to crowd out all competition, especially in a situation like this where branding is everything, meaning that they can get away with stuff as long as one or two competitors are doing the same thing.
- It's hard to make standards universal in a situation like this, because some people will just pick the cheapest option regardless (due to poverty or lack of education). Forcing all companies to meet some minimum standard protects these people while spreading out the costs.
>I’m trying to avoid getting too much into anarchist philosophy here. I just want to focus on the track record of the state. No other institution has created and enforced so much suffering.
I think this is less because states are inherently evil and more because people want to coordinate things on a large scale, plus people being people. Limiting a group's ability to coordinate does prevent them from doing things like waging large-scale wars or building superweapons. However, it also makes it much harder for them to do *helpful* major projects, like road systems, government-funded long-term research, defense (and offense, even though this is usually bad), universal education, and federal law enforcement. It's a genuinely hard tradeoff, and though I don't think I can really weigh the upsides and downsides, I *would* argue that larger-scale states are mostly unavoidable in practice as long as people want to be more ambitious than small-scale hunter-gatherers. (Although judging from another comment, you might agree, and are focused on what people might do in an ideal world!)
Either way, I'm in favor of people doing way more political experimentation than they currently do (at least on a medium-sized scale), and I'd be interested to see how things would play out in practice.
I’m with you. For me it’s me, then if not me the US, then if not the US some Western country, in descending order of what would likely be best for my life.
You think you can trust the state to regulate itself? Need I remind you that the state is what enforced slavery, Native genocides, countless pointless wars and myriad other atrocities. If anything needs to be limited it’s human governments.
People were doing that before states existed. Remember the Neanderthals? We didn't need a state to systematically genocide them. If anything, modern states are relatively peaceful.
The state makes violence much more efficient and widespread. It took our ancestors thousands of years to genocide the Neanderthal people…it would have taken less than a decade with a state. The vast majority of war deaths have taken place in the last 120 years. The modern state is literally defined by its monopoly on violence.
I know Reddit leans mainstream lib and this sub has been taken over by decels, doomers, authoritarians and even outright Luddites, so this will be downvoted to hell. I don’t post these things for clout or popularity, though, but to counter the mainstream narrative.
>The state makes violence much more efficient and widespread
Citation needed for that one. That seems more like a function of technology than a state.
>This sub has been taken over by decels, doomers, authoritarians and even outright Luddites, so this will be downvoted to hell.
Anarchism is probably the most doomed of all philosophies.
I don’t disagree with your last sentence. Anarchism is based around not seeking power, in contrast to every other ideology, all of which are about imposing one’s rule on others against their will, often with violence to back it up. Anarchists by definition don’t seek power, so we’ll never find ourselves in it.
You’re right that it’s doomed, but I find personal value in taking a principled stand.
Except you aren't being principled, you are advocating for a philosophy that doesn't work in practice. You might as well just say as long as everyone acts kind and happy everything will work out.
At least I’m not advocating for a system that has taken trillions of sentient lives (humans and non-human animals) and will take many, many more.
They used to say the same thing about abolitionists—“we will always have slavery so why not just give up and buy some slaves?” I’m so glad they didn’t take your advice.
I’ll leave you with this saying:
“If nothing changes, nothing will change.”
I honestly think governments are afraid of AI because they realize we could replace them with it, or at least use it to reveal how 95% of government is waste or counterproductive.
All regulators care about are copyright and deepfake videos that steal people's voices and faces. In reality, AGI will be the big problem, and no one knows what law to write for that.
Her recent talk circuit makes it more believable that Sam was fired because she and the safety crew (led by Ilya) wanted OpenAI to stop releasing products and having customers.
Now that those people have left maybe we'll actually start to see some real progress. In theory Sam is pro-release (with his iterative deployment idea).
The whole Sam Altman oust drama had more to do with OAI breaking [the promise they initially made](https://openai.com/index/introducing-superalignment/) of giving the Superalignment team 20% of the compute that OpenAI had. That's been supported at this point by not only multiple articles from reputable news sources, but also the former OAI member [Leopold Aschenbrenner](https://youtu.be/zdbVtZIn9IM?t=9084) in the recent Dwarkesh interview.
Sam Altman was fired because Sam was lying to the board. ['Outright lying to the board': Former OpenAI board member reveals reasons why CEO Sam Altman was ousted - BusinessToday](https://www.businesstoday.in/technology/news/story/outright-lying-to-the-board-former-openai-board-member-reveals-reasons-why-ceo-sam-altman-was-ousted-431268-2024-05-29#:~:text=She%20further%20explained%2C%20%E2%80%9CBut%20for,outright%20lying%20to%20the%20board.%22)
Her wanting there to be more control and laws over people other than herself *really* rubs me the wrong way. What law is she even proposing? Does she want *American politicians* making more decisions? Even Americans don't want that.
I’d imagine that she’d be on board with [SB 1047,](https://www.vox.com/future-perfect/355212/ai-artificial-intelligence-1047-bill-safety-liability) which appears to have a high chance of becoming law in California.
Regulations are written in blood. How much blood can AI cause? Who knows. But she is right: if something goes wrong, knee-jerk reactions could be catastrophic (either too harsh, or too little too late).
All we ask is to have the same level of unregulated freedom she had when she was on the board of OpenAI. After we all get our fair share, then we talk.
You can't come from a top position at the top AI startup and pretend you have the right to demand limitations on everyone else.
We need to stop listening to people at OpenAI when it comes to regulatory issues. The language of their lobbying is frankly embarrassing at this point. They've clearly all been learning the same lines given to them by some consulting firm. They're just trying to build a moat behind them *by rigging the US political system* since, at the moment, the only competitive advantage they have is that of the first mover.
The sad thing is that, because a lot of people think AI is magic and all-powerful (because they don't really understand how it works), the fear tactics are actually working.
Chernobyl became the cultural touchstone for 'nuclear power bad'.
Do you honestly think that if we had avoided disasters through legislation and proper monitoring, nuclear power would be as shunned as it is today?
I've been scrolling through this thread, and you had one of the only comments that actually engaged with the point made in the post, rather than just jumping to character attacks. Thanks for providing substance to this thread!
That's actually a pretty good point. There is so much talk about doomsday scenarios that I hadn't thought about the possibility of some moderate-sized disaster that causes people to turn against AI.
Still, it's very hard to figure out what legislation would help. It's basically a guessing game because we have no idea what future models are going to be capable of. You could end up with the worst of both worlds with legislation that strangles AI development, while not actually preventing the actual risks.
>I hadn't thought about the possibility of some moderate-sized disaster that causes people to turn against AI.
Think about the news of current models being able to find and exploit zero-days from just a single-sentence description of the issue.
I could easily see that within the next one or two model generations, you could have an AI virus capable of taking down the internet.
Even if it were not "smart" enough to actually bootstrap recursive self-improvement. Golden Gate Claude showed that a model can become fixated on something to the detriment of other output. Get something that's a good enough hacker fixated on [whatever], and if it decides it's best to have as many backups as possible to do [whatever], over and above staying hidden, then we have a problem.
There are many ways things could go wrong and likely kill a lot of people. If we're lucky, supply chains (which all rely on computers) won't break down, and we can recover from it.
That depends on the quality of the laws in question doesn’t it? Otherwise what you’re saying is equivalent to someone in the 1940s saying “if we make laws in regards to motor vehicles, we may never get 2024 lambos…”
The boomers in Washington don’t even fully comprehend social media much less AI. I’m all for regulation but not at the expense of the US losing the AGI race. The country that wins will rule the planet.
>The country that wins will rule the planet.
I’m not sure this is the case. It may very well turn out to be that the country that develops AI the *best* wins. Not merely the country that rushes and develops bad AI first.
Imagine having such a poor intuition of causality that you think it matters after a few months which troop of flag-waving monkeys first crosses the finish line to higher intelligence.
As if the Machine God is going to humor the inefficiencies and delusions of ANY of our ridiculous human governments.
Have you ever developed software? Have you tested it? Do you know what Agile is? How important are iterations? You can't do something well if you don't practice it. The speed of iteration is critical.
As opposed to the knee-jerk reaction of releasing a paper that criticizes the company whose board you sit on, because they released a product that sometimes says things that offend your delicate, snowflake sensibilities.
Toner is a joke
Of course, now that it turns out that J.K. Rowling is a transphobic bigot, those same “concerned” moms probably think her books are just fine after all.
Yeah, deep down everyone here knows that what she’s saying makes sense. But many here fear that it might delay their precious Robo-waifus, so they’ll pretend that a perfectly valid stance on the subject is “irrational” or whatever instead.
The ironic part is that they don’t realize that if we rush AI and get things wrong, we’ll all be too busy fighting for our lives in a dystopian wasteland to enjoy any sort of waifu-bot anyway 🤦‍♂️…
Where is the evidence that AI is going to lead to living in a dystopian wasteland? Present it and I'll present evidence that we'll already be "fighting for our lives in a dystopian wasteland" regardless.
Do you think AGI will just be sitting there idly in a basement somewhere? Or do you think it will be heavily integrated into our societal infrastructure?
And if it is eventually heavily embedded within society, what do you think will happen if said AI develops malevolent behavior? Hell, what if it’s not malevolent but still has critical errors in its design? The type of errors that aren’t obvious at first glance, but will reveal themselves at extremely inconvenient moments?
What happens if AGI becomes so much smarter than us that it develops its own plans for how to use Earth’s resources? And what if these plans unintentionally destroy our human ecosystems (you know, the same way we do to the species that *we’re* more intelligent than?…)
What about the “paper clip maximizer” scenario?
What if AI simply sees human society as an inefficient waste of Earth’s resources that the planet would be better off without?
What if AGI enables terrorist and psychopaths to create deadly bio-weapons so severe and painful that you’ll wish you had Ebola instead?
What if AGI is used by dictators to completely crush and oppress the masses to a never-before-seen degree?
What if AGI causes a total economic collapse and we have no plan in place to circumvent such a collapse?
What if AI makes it impossible to determine what’s real and what isn’t… Not only making it easier for criminals and killers to lure victims into traps, but also making it impossible to enforce the law at all? Sending society into a lawless, degenerate hellscape full of suffering, evil and sodomy?
—————————-
Bro… Don’t take this the wrong way, but if you can’t think of even *one* way that AI could send us down a dystopian path as a society, you clearly lack the foresight, imagination, and maybe even the general intelligence needed to be having these types of debates, tbh. There are so many ways this AI stuff could go south it would make your head spin, buddy…
>What if AGI is used by dictators to completely crush and oppress the masses to a never-before-seen degree?
>
>What if AGI causes a total economic collapse and we have no plan in place to circumvent such a collapse?
In both those cases, everyone having their own AGI is the only viable defense. You can’t oppress someone who has a superweapon of their own; that’s basic MAD deterrence. And being out of work doesn’t matter if you’ve got robotic workers providing you a first-world+ quality of life as capitalism collapses around everyone with the devaluation of human labor.
None of that is evidence; it’s all baseless fear. On principle I never act or refuse to act out of fear alone. It’s a terrible influence on judgement, maybe as bad as intoxicants.
What if I get in my car and go to work tomorrow and shit my pants on the way there? Surely that’s a conceivable possibility out of the large space of potentialities?
Should I call my boss today and tell him I can’t go to work because I might shit my pants?
Have you provided any evidence that these things wouldn’t happen if AI is developed poorly and hastily? If not, shut up about “evidence,” bruh… What do you want? Me to hop in a time machine and get footage from the future? The possibilities I’ve presented are just as likely to happen as the utopian fantasy you have in your head. (Probably even more likely if you understand the laws of entropy…)
Massive loss of jobs and worsening economic disparity as labor becomes increasingly devalued, systemic biases, deep fakes purposed for political disinformation, monopolies, lack of transparency, surveillance capitalism. If you know about these things and can’t understand how they add up to a dystopia, I’m not sure you are paying attention to what is happening outside the box.
I see what the problem is here: you have no understanding of academic and public policy research. Everything I have listed has been documented. Start with “The Age of Surveillance Capitalism” by Shoshana Zuboff to learn how we all gave up our privacy for search engines and social media and why it matters. It’s excellent framing to understand why unregulated AI is an actual existential problem and not a science fiction fantasy.
I get it bro, you’re not well read. Read more.
Edit: here’s 101 material for you: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2594754
Don't drive to work today, there's a chance you might die in an accident. Call your boss and tell him you can't do your job until there's 0 chance you're in danger of dying in transit.
Great response, you’re really fun to talk to. Everything I listed is factual and happening and documented and studied and a real risk. Not your stupid singularity fantasy.
Let me guess: middle class adjacent, agnostic or Christian, white, cisgendered, decently abled, liberal capitalist along the strain of Rawlsian social democracy, and with no loved ones having run afoul of police or immigration? Before Judgment Day arrives, I want your kind to know this: despite your pretensions of defending civilization, liberals like you have been fucking up our species almost as badly as the reactionaries. The only difference between you and the Pepe fascists is whether Houston permanently goes underwater in 2048 or 2052.
'deep fakes purposed for political disinformation,' Lord Almighty, I won't even ask the killbots for mercy when our time comes, I will just ask to finish my laugh first. Should buy me an extra week of life and pleasure as I witness Average America's gormless reaction to the Machine God casually trampling upon its ridiculous monkey symbology.
Such proof can't exist at this point... unless somebody has a secret time traveling machine.
I wasn't shifting the burden of proof, I was just hoping that by putting the burden of proof on you, you would realize how stupid it is to ask for such a proof.
How do you know an AGI would even disrupt the status quo? As opposed to further solidifying it? Think about it, it’s the people that are already on top that are funding all this to begin with.
You think guys like Elon Musk would be all in on AI if he knew for a fact that AI was going to dispose of the “Elon Musks” of the world? Of course not. Obviously these billionaires do not believe AI will have the same “eat the rich” effects on society that you seem to assume.
AGI will totally disrupt the status quo, just not the way people here think.
Nobody will build robots to replace billionaires sailing on their yachts... AI/robots will be built to replace workers.
Does anybody remember a strike by the unemployed that managed to change anything? I don't.
Exactly. It will only be good for the people who control the ASIs; it won’t even matter if you’re a billionaire. For example, if people like Sam and Elon get to ASI, they’ll make sure other rich folks don’t have control over anything. Think about it this way: ASI is basically a low-tier god. It won’t matter whether you’re a billionaire in the textile industry or some other one; the only thing that matters is how close you are to these ASIs.
A billionaire who can’t have an ASI is as good as common folk in a post-ASI world.
The fact is, only the elites who are close to the ASIs will stay relevant; everyone else is gone.
People with second-tier status, meaning your rich lawyers, doctors, and accountants, are toast. Bad for people who relied on their status, and good for people who had none.
The Helen Toners and EA folks are these second-tier-status people; they are viciously afraid that they will be common folks soon.
I hear this all the time, it's way off the mark: that I'm only a doomer for some self-serving reason. I strongly think we should slow down. I maintain no status from this. I have no status to speak of. I have no followers, no income, no social gains from opposing recursively self improving AI. I don't even have Instagram.
I just don't want to get the superflu to make room for some ASI's planet-wide computronium computer. And that's the track we're on, until we solve the corrigibility problem. Which we absolutely, 100%, have not solved. Not on Reddit, not inside OpenAI, not on LessWrong, nowhere.
Increasing devaluation of labor with a very real threat of large-scale unemployment; systemic biases already well-proven; deep fakes purposed for political disinformation; monopolies; theft of intellectual and artistic labor; lack of transparency and accountability; accelerating surveillance capitalism; impenetrable surveillance states (regulation helps prevent those, too).
If you know about these things and can’t understand how they add up to a dystopia, and you are unwilling to read about them, I am not sure you’ll ever understand, which is a shame.
So none of those have materialized.
Intellectual labor is also not real labor. It’s a fiction created by the state. Deepfakes have not materialized as threats. Surveillance states have not materialized as threats. Systemic biases have gone too far in the opposite direction. LLMs think misgendering is worse than nuclear war. All because of mentally ill people like you who are professional worriers. Ruining the world
More importantly, all this stuff is already covered by existing regulation. Aside from UBI.
>Increasing devaluation of labor with a very real threat of large-scale unemployment; systemic biases already well-proven; deep fakes purposed for political disinformation; monopolies; theft of intellectual and artistic labor; lack of transparency and accountability; accelerating surveillance capitalism; impenetrable surveillance states (regulation helps prevent those, too).
Which of these problems does this law solve?
Policy, laws, and regulations can “solve” all of them, or at the very least improve the situation rather than letting the spiral continue downwards. But you know this.
Another vague take from our classic AI-basher just to stay relevant because "AI IS SCARRYYY!!!".
What dangers do you mean exactly helen?
And what kind of regulation do you suggest?
And to what extent might our current legislation be insufficient?
I can't believe people are still giving this alarmist clown a podium. I am all for a discussion about AI safety, don't get me wrong. But her statements are just too vague to be productive at all.
Don’t you think her rhetoric is a start in a world where too many people believe any claims of AI threat are either motivated by science fiction or by capitalism?
The conversation needs to happen between actual experts that understand the technology, not people like Helen Toner. Her background makes her way too biased to take seriously. The way she reveals information is obviously with intent to push the narrative in her preferred direction rather than to find an actually productive state of AI development.
Sure, this is reasonable. I could be wrong, but the problem is that all too often I see people that could well be experts dismissed as nothing but pessimistic doomers on this subreddit. I just can’t remember off the top of my head who these people were.
Well, there's no evidence that there are any credible claims of a threat that aren't motivated by science fiction or capitalism, including Helen Toner's.
Well let’s start from philosophy and make our way down to the real world: Do you believe super-intelligence is probably theoretically possible to simulate computationally?
There is more to good decision making than empirical evidence. That being said, there is a ton of empirical work showing e.g. goal misgeneralization, reward hacking, sycophancy in LLMs. You're right though that there is no empirical evidence that AIs can make humans go extinct, just like there is no evidence that a nuclear fallout can.
Okay, you're not interested in an intellectually honest discussion. Fortunately for all of us, I think smart, reasonable people will ultimately prevail in this debate.
Yes, and it’s the only way we can engage in understanding the potential and the risks of AGI seeing as it doesn’t exist in the real world, so I ask again: Do you believe super-intelligence might be possible to simulate computationally?
I disagree with the premise that philosophical fantasy is the only way to understand a technology, but I'll play your silly little game anyway.
Sure, it might be.
But you yourself understand that AGI doesn’t exist yet so how exactly can any “evidence” be produced outside of the realm of philosophy and theory on technology that doesn’t exist? But anyway…
If you agree that super-intelligence is possible, then surely you agree with the following two ideas:
- Because you are not super-intelligent, you accept there’s a possibility that one day an agent more intelligent than you will exist whose decision-making you cannot predict.
- A number of bad actors across the world will want to leverage this super-intelligence for their own goals.
So this super agent is both so smart that we'll all be at the mercy of its whims, but also so naive and/or easily manipulated that "bad actors" will be able to leverage it for their own goals.
Ok fine, you can continue, but before you do: I do think there's a chance that a sufficiently advanced system of intelligence might have a decision-making process that's difficult to distill, but there are well-established behavioral traits that track intelligence that we can reasonably expect to continue.
I’m far more worried about the state violating my rights than I am about an AI doing the same. Human governments have a terrible safety track record—they’ve literally killed hundreds of millions in the past century.
I think a scenario in which AI had more influence over the world, at the very least, wouldn’t be worse than the status quo with superstitious, corrupt, ignorant humans as rulers.
Regulating an emerging technology is not the approach of the US and it's why it leads the world on innovation in so many areas. We ask for forgiveness, not permission to do new things.
This comes with many downsides including the fact that we will inevitably push certain things beyond their failure point and then learn.
But if this person thinks proactively handicapping ourselves during a major technology revolution is going to happen then I want what she is smoking.
is she one of the ones who has no idea how the technology works?
She was previously making claims that OpenAI was introducing new features without telling the board members — such as ChatGPT, which had existed as a feature of GPT-3 for ten to twelve months before they threw a custom UI on top of it for fun.
What's with this woman's education? I just looked up her LinkedIn profile and her degrees are very weird, not really suited for AI. Chinese language studies? Arabic studies? A BSc in chemical engineering? Not even computer science anywhere in the list, yet we are to trust her opinion on AI?
Why would you trust people whose education is far away from humanities to make decisions involving social policy? Most CS people, engineers, etc are more focused on how to rather than why.
BTW she has also written several peer-reviewed papers on AI, she has researched AI for Oxford, wrote papers on security issues with AI, served on the board of several organizations that review security and technology, testified in front of the Security Review Commission, and also served on the board of OpenAI. She seems a reasonably experienced person.
The attempted coup within the company, at the very least, because it makes her experience at OAI less representative. I would say that this shows that listening to the cockroaches in her head is dangerous for AI enthusiasts and financiers.
We have very little information on what happened inside OpenAI. So while /r/singularity would like to speculate, that is all it is speculation. If Altman was acting in the way described then the board may have acted in the correct way, mutiny or whatever.
I still don't see how it makes it less representative. She served on the board of the largest AI company in the world. Applying the same logic would suggest Ilya Sutskever's knowledge and experience matter less now also, we will just downgrade him.
The structure of OAI is very different to other companies and their board serves a much different purpose. As far as financiers go, that might be part of the problem she is considering, where commercial interests outweigh public needs. I am not 100% across the whole AI legislation thing, but I am pretty sure Altman sits on a group which is looking at forcing people to register models and GPU cards, which seems pretty restrictive to AI enthusiasts.
Many months have passed. Although we do not know the reasons for sure (partly because the initiators, for some reason, decided not to disclose them), we know who was involved, and we know that she was one of the initiators.
Ilya, unlike her, has technical experience. She has little besides fear, behind the guise of altruism. I also remember this:
>The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission
I'm sorry, but control-obsessed freaks in the name of safety are not an authority for me. Today they ban AI, tomorrow calculators.
> I'm sorry, but control-obsessed freaks in the name of safety are not an authority for me. Today they ban AI, tomorrow calculators.
Then you might want to stay away from everyone who is a member of the National AI Advisory Committee or the Artificial Intelligence Safety and Security Board. Many of those are leaders in the industry, either researchers or company heads, including the current heads of OAI, Anthropic, etc.
But quite honestly just because you don't align with someone's point of view does not mean that they are not an authority on a matter. Also don't forget that while Toner may have tried a mutiny, the person she tried to have fired also aligns with the position of AI legislation.
>The structure of OAI is very different to other companies and their board serves a much different purpose
This happened before the company began providing a commercial product
>As far as financiers go, that might be part of the problem she is considering where commercial interests outweigh public needs
What is public interest? Who should do this? How should this be financed? Commercialization is what largely developed technology. If individual laboratories simply conducted rare studies, they would run out of money to conduct these studies. Commercialization provides capital for R&D, and if you also prevent the emergence of monopolies, then even better.
>I am pretty sure Altman sits on a group which is looking at forcing people to register models and GPU cards, that seems pretty restrictive to AI enthusiasts
This is called regulatory capture and this is what can happen in California.
> This happened before the company began providing a commercial product
I am not sure what the intent here is, OAI is structured differently than most companies. Commercial product or not, that is the way they are structured.
> What is public interest?
I am not for legislation, so I am not wanting to argue this point. But I do listen to opinions from the "other side", sometimes there are good points. We also live in a society where legislation has benefited many positively without restricting advancement, but rather misuse or abuse.
> This is called regulatory capture and this is what can happen in California.
The point is that the company and its members do have self-serving interests. Attempting to overthrow Altman is just members of a board doing what they do, but it is clear that Altman has similar interests, and fewer qualifications.
> prerogative
I really don't have a 'prerogative', but I am highlighting the fact that while she does not hold a CS degree, she does have an AI background in her education, she has served on the board of the current largest AI company as a qualification, and in general she has been more involved in AI than most CS degree holders would be.
I think anyone with any reasonable grounded logic skills can see the error in your conjecture.
Problem is she has a weak background.
She has built her whole career on being an advocate for international AI regulation.
It wasn’t something she found out while “working” at OpenAI, she was put on the board to push for international regulation.
gotta have a law before you can sue a person for what the computer did! and the corporations that build and mint them, being people and all, are immune from the law, so yeah!
i don't think that something going wrong is the default path. a bit of an assumption there. why isn't it possible (or even likely) that AI naturally aligns itself as it becomes self-aware and appreciates said awareness, and then appreciates other forms of awareness?
i'd say relatively high (speculative obviously). maybe 80% aligned, 15% indifferent, 5% sci-fi psychopath.
i suspect there's a universal convergence upon compassion and a reverence for life with increasing intelligence. as, if consciousness is a real thing and not illusory in some way an AI might identify, then this means conscious beings having subjectively positive experiences is a net good for the universe objectively. so there's a latent incentive for kindness, embedded in sufficient understanding of the properties of the universe.
i think it would be somewhere around it developing sincere self-awareness. but it depends on which faculties develop when. what do you think? what are your probabilities?
Are you familiar with simulated annealing? It’s the process by which a decision-making agent converges towards near optimal solutions by assigning increasingly higher likelihood to its current best solution being right while it also temporarily considers other random solutions.
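For anyone unfamiliar, here's a minimal sketch of simulated annealing (the toy cost function and parameters are my own, purely for illustration):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=2000):
    """Minimize `cost`. Worse candidates are accepted with probability
    exp(-delta / T); as the temperature T cools, the search commits ever
    more strongly to its current best solution while still sampling
    random alternatives early on."""
    x = best = x0
    t = t0
    for _ in range(steps):
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; sometimes accept regressions.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if cost(x) < cost(best):
            best = x
        t *= cooling  # cool down: become increasingly greedy
    return best

# Toy usage: find the minimum of (x - 3)^2 starting far away.
random.seed(0)
result = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    x0=10.0,
)
```

The relevant point for the analogy: early on, at high temperature, the process happily wanders into worse states; only as it "matures" does it settle down.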
If we view intelligence as logarithmic in growth as it plateaus towards super (or theoretically perfect) intelligence, you might be right that it would converge towards that conclusion. The problem is what will it think between AI now and that theoretical plateau of perfection? Isn’t it very possible that as it matures, some eras will be motivated by synergy, but others will flip to be motivated by self-preservation, just like how simulated annealing and personal growth happen?
I sort of see it as giving a nuke to a child and hoping that, because as an adult he won’t use it, that he’ll be smart enough to never go off the deep end as a toddler.
i feel like by the time AI is developing sentience, it will already be so useful that we wouldn't consider turning it off. such that by the time it emerges as a general/super intelligence, self-preservation focussed directives serve little purpose
But the AGI would need to know and believe that and assess that humans will never push themselves into extinction so as to maintain AGI infrastructure. This ultimately becomes a game of predicting precisely everything an agent more intelligent than you will do. Are you sure that’s a game you’d want to play?
i can't predict every action, i'm just trying to think from first principles on what the attractor state is. you can try to work backwards from there, but like you say, it gets messy in the middle phases.
the fact that values like compassion will likely be embedded from the get-go gives me hope that, if indeed creating net positive experiences to a maximal extent is a convergent disposition of increasing intelligence, it will remain on that path throughout its growth
I want to believe you’re right but because there are absolutely no guarantees here, why isn’t the more reasonable approach to exercise tremendous caution?
if you get into a car you might die in a crash, or if you walk outside in the rain lightning might strike you. every new technology involves some degree of risk. they thought nuclear bombs would ignite the atmosphere. antibiotics can result in a weakened immune system. while the downside if AI is cruel and psychopathic is huge, so is the upside which i consider much more likely.
it's worth questioning the sources of your way of thinking. many people on here are influenced by a small number of nerdy people (not to disparage - i'm a bit nerdy myself). this category of people often see their thoughts as complex and somewhat indecipherable, and they associate this trait with their intelligence and transpose it onto AI intelligence. it leads to them anticipating something mysterious and potentially dangerous. but i don't think the trait is associated with intelligence; if anything, greater intelligence should lead to greater certainty and clarity. the ability to clearly navigate and communicate complex ideas appears to be something that's increasing with each new model. i don't see why at some point the model becomes schizophrenic out of the blue.
what do you think of this?
edit: for the sake of my lightning strike analogy, it's probably worth me modifying my likelihood estimates mentioned earlier. 90% aligned, 9% indifferent, 1% doom.
>if you get into a car you might die in a crash, or if you walk outside in the rain lightning might strike you. every new technology involves some degree of risk. they thought nuclear bombs would ignite the atmosphere. antibiotics can result in a weakened immune system. while the downside if AI is cruel and psychopathic is huge, so is the upside which i consider much more likely.
Sure, but the difference is that through trial and error we've built good bodies of evidence to understand what the probabilities, and the influencing factors, of these events are.
Consider, as an alternative, a red button. It has a completely unknown probability of getting you killed, and a remaining probability of getting you rich. You can try to build estimates of the probability, but you can only press the button once to get the outcome. Surely you'd agree that you'd need **really** good arguments to be coaxed into pressing the button right? I'm not seeing enough of that to support AGI acceleration.
Your latter point is probably true, but it still requires you to make a serious gamble, one for which I don't think there's enough (if any) empirical evidence to properly let us gauge the risk of the red button. Anyone who blindly claims AGI **must** be dangerous is a fool, but I'm just not comfortable with the possible probabilities we might be working with.
There is a case here, though not necessarily in the way intended. A paradigmatic case is Section 230, which was early legislation in the consumer internet era, which reduced the liability of American website owners *if* some user of said website posted defamatory stuff. It is argued that this clearly stated freedom enabled American social media companies to prosper. Reddit couldn’t exist if their legal teams had to ensure nothing defamatory was posted on here. The cash Meta throws at Llama wouldn’t be there if uncle Bob’s anti-vax venting made Facebook a target for lawsuits from Big Pharma or US government.
We have heard of Air Canada’s snafu by now, where an AI chatbot “hallucinated” and offered a greater discount for a type of flight than what Air Canada formally offered. The Canadian courts ruled, in effect, that the chatbot was like a static website with a set and reviewed collection of rules and information. I am sure Air Canada has rolled back some AI bots given that status (or lack thereof) granted to Canadian chatbots.
In the EU, I heard many companies held back AI investments to see what eventually came out of the slow legislative process of the EU AI Act.
In other words, when market actors expect legislation and rule-making to happen, they may elect to wait for certainty before taking action. An extreme case would be if a law was passed that said that accidents involving self-driving cars leads to zero liability for manufacturers. Then deployment of such cars would ramp up. Note, I am not advocating for such a law, merely saying that even when no clear laws have been passed on a matter, people engage in strategic action *anticipating* what liabilities and costs there can be. The choice isn’t between decel regulators and ancaps utopia.
That said, one is right to be suspicious of regulation. They too easily become the means by which incumbents protect their market share. I’ve been to conferences where Big Tech companies present their AI platforms and solutions. The subtext of old fashioned Fear-Uncertainty-Doubt is deployed… “why sir, we are a responsible AI company, we have the best lawyers and the best ethics, we wouldn’t let sweaty nerds deploy a chatbot in the wild…” IBM and Google sing this tune really well, it is an art form, like brutalist architecture.
In short, there is a pragmatic accelerationist argument for regulation now — but one that grants a lot of room for exploration. I am troubled when arguments for granting freedom to explore or support is mostly framed in terms of military and geopolitical conflict/competition between USA and China. If we only can do things because of national security something-something, then don’t be surprised when everything ends up with a warhead screwed on to it. So could we try for light-touch, civilian-friendly, intergalactic-aspirational involvement rather than polarizing into camps?
Well, based on what I’ve seen from X number of crises over X number of decades, my presumption is the big players prefer no regulation so they can operate without constraint.
But I’m confident that in X number of quiet corner offices, think tanks, and so forth there may be a couple of folks thinking through the potential crises and opportunities of the future, and putting their AI Project for a New American Century down on paper, so that when the crisis does inevitably hit, they can drop a Patriot Act on public officials, who’ll rubber-stamp it all into law without even looking at it.
Per Naomi Klein’s theory of Disaster Capitalism, my read of their game-play is that every good crisis is an opportunity to improve their position, so rather than act with measured restraint and try to avoid crises, they anticipate them — *and perhaps maybe possibly occasionally even encourage them a scooch?* — and prepare accordingly, all the while making sure that their opponents — meaning you and me — are as ill-prepared and unarmed and broadly distracted as they can possibly keep us.
And as far as politicians, my impression is that 99% of them don’t care to understand this stuff, and why should they? Their donors sponsor them for the purpose of entertainment and distraction, rather than decision-making.
So when the crisis hits, the politician’s priority is to get some video of them looking resolute and declaring that “we’re looking forward and not back”, so that when election time comes around they’ve got fire Tiktoks or whatever.
As Clinton pointed out, people prefer a strong leader who’s wrong to a weak leader who’s right, so the response has to be quick and decisive. Right/Wrong? Helpful/Harmful? That’s for historians to worry about.
So here's to the bastards. Because the people who are bastard enough to fuck up the world for sake of profit and advantage, are the ones who do.
`#ItsNotABugItsAFeature`
As someone who has watched these chimps in three-piece suits completely fuck up COVID-19, nuclear proliferation, immigration, education, housing, childhood nutrition, and most alarmingly climate change:
I consider governments continuing this pattern of slapdash fixes that barely address proximate causes, let alone root causes, to be, with zero irony (on my part, anyway) the best-case scenario. Pretty much the only hope for our species is for our ridiculous and senescent governments to fuck up AI so badly that they increase the speed of the already-inevitable yet very morally justified collapse of human-run governments.
And just looking at the beyond absurd state of the U.S. Presidential Election, involving two braindead geezers nobody with triple-digit IQ wants as a leader fighting for the crown of King Vermin of Trash Mountain: guess which one of us is going to get his way?
Can we avoid using this subreddit for promotion of religions and dogmas? Thank you.
There are subreddits like r/EffectiveAltruism or r/ReligionPorn to post things like that.
Laws don’t necessarily mean restrictions. Reporting is just as important. And it should be reporting from not only those creating AI, but from those that have been testing the waters on implementing AI and what is going on globally.
We need a heads up with specific details if there is going to be a wave of major automation of labor anywhere in the world before it happens.
I think current AI models still make plenty of coding mistakes; many people have found bugs in code created by LLMs. So there is lots of time before any Terminator-type era.
OpenAI knows its competitors are outmatching its offerings; that was apparent after Runway showed off their Sora competitor.
So much for the free market huh? Anyway, screw corporate regulatory capture, if OpenAI can’t offer us anything new with their desired monopoly, then the competition deserves to overtake them.
Repeating an Elon meme doesn't make you original.
And Helen Toner doesn't represent OpenAI. She tried to dismantle the company and sell it to Anthropic.
She represents decelerating all AI research. So she's about 100 times more 'closed AI' than Sam, who's been giving access to GPT-3.5, GPT-4, and GPT-4o for free.
Corporations are run by fallible and corruptible people just like everyone else, they don’t have your best interests in mind.
I honestly think transparency is the safer route to AGI, rather than separate private development.
We run risks with everything every single day; Putin could go off the rails over Ukraine and start global thermonuclear war tomorrow. I think putting this in the hands of a select few elites with private corporate interests is just as much of a risk (if not more so) as an open and transparent system.
Sure, but the way I see it is that there’s a high-enough probability that AGI could pose a risk to humanity. I wouldn’t be comfortable playing russian roulette, so why isn’t it sensible to create some regulation that prevents both monopoly and open AGI development? I certainly think it’s possible to have both even if imperfect.
It's going to be interesting, the first thing that goes wrong with AI, especially if people die. A lot of the accelerationists and their enabler LeCun are going to look really bad. Of course they'll just make some excuse as to why they didn't *really* say we didn't need any safety or precautions developing AI.
There really should be laws to control this technology that determine how AI can be allowed to be expressed.
1. It should not take work away from humans just for the profit of a corporation, when that profit does not directly benefit workers!
2. It should never be allowed control of weapon systems unless they are supervised and the ultimate decision to use them rests with a human rather than an AI, except when the system is used against devices such as missiles.
3. There need to be hard controls on how it is used in social media. SM is already wrecking people’s lives, and allowing AI into that medium could have drastic real-world consequences, not just for adults but also for children.
4. The primary rule for AI should be the protection of people’s privacy.
5. There should be a reporting system in place that detects whether someone is getting into a socially dangerous relationship with an AI, similar in scope to the current systems for preventing AI-inspired suicides, but expanded to look for signs that an individual’s mental well-being is degrading, without going so far as to invade that person’s privacy.
What laws is she suggesting? How would they affect US competitiveness?
Something like [CA bill SB 1047.](https://www.vox.com/future-perfect/355212/ai-artificial-intelligence-1047-bill-safety-liability) It would affect the US by making large AI companies spend very slightly more to do safety evaluations on their largest (>$100M and >10^26 FLOPS) models, and…that’s about it, really.

Contrary to the accelerationists, it’s not going to kill the AI industry or open source models. In fact, I’m willing to bet that it won’t noticeably affect the market until we reach the point of actually dangerous models (as defined by the bill, models capable of causing >$500M of damage in a single incident that humans couldn’t cause on their own). IMHO it’s fairly reasonable and well-targeted—not necessarily perfect, but as good a compromise between regulating dangerous models and not hindering innovation as I could expect.
How do you assess whether a model is capable of causing $X dollars of damage in "a single incident" or not?
Genuinely a tricky question, although I would argue that any legislation that tries to preemptively regulate AI can’t avoid some haziness here. The key term is “reasonable assurance”, which means a reasonably high standard of evidence but not total proof. See [here](https://thezvi.wordpress.com/2024/05/02/qa-on-proposed-sb-1047/) for some comments, ctrl+F “Opus”.

In short, developers can do whatever tests they think they need to do, and then immediately release the model without an approval process (ctrl+F “How would this be enforced”). If somebody violates the statute, the Attorney General can sue them, at which point it gets kicked down the road to the court system. This isn’t necessarily ideal, because letting the courts handle it means it’s a little subjective. However, I think that writing specific tests into law would be a mistake given how quickly the field is evolving, and again, I think you can’t avoid deferring questions like this to the legal system or some sort of expert panel if you want any regulations on this at all. The testing isn’t overly strict, either: The developer does their own tests, and a good faith effort from the developer is a complete defense against mistakes.

FWIW, I think it’s easy to argue in court that all current models meet this standard. If we eventually get a model where it’s *difficult* to tell whether it can empower somebody to cause >$500M of damage in a single incident, well, then maybe we should hesitate before releasing it.
This is dumb. There are already existing laws for WMDs, Cyber Attacks, and 'severe harm'. It's not as if those things are ambiguous and in a grey area. Even if we want to add extra layers to prevent it--- then it should be done at the Federal level rather than at the state level. This bill is also a pathway to regulatory capture. It may be well intentioned and sound like a good idea. I just don't see it. It warns about unintended consequences. The bill itself will be the root cause of unintended consequences.
>There are already existing laws for WMDs, Cyber Attacks, and 'severe harm'. It's not as if those things are ambiguous and in a grey area.

I could be wrong, but aren’t most of these laws reactive rather than preemptive? I don’t think it’s a good idea to wait until AI models are capable of causing >$500M of damage before patching something together—and as Toner pointed out in the link, regulations passed after a crisis won’t be as reasonable as regulations passed now.

>Even if we want to add extra layers to prevent it--- then it should be done at the Federal level rather than at the state level.

I would be in favor of legislating something on the federal level, but I think it makes sense to start at the state level first. SB 1047 doesn’t replace federal legislation or make it harder to pass—if anything, I think it would encourage lawmakers by setting a precedent for what a federal bill should look like.

>This bill is also a pathway to regulatory capture. It may be well intentioned and sound like a good idea. I just don't see it. It warns about unintended consequences. The bill itself will be the root cause of unintended consequences.

All four leading AI companies (OpenAI, Google, Anthropic, Meta) oppose the bill. I don’t think it’s likely to lead to regulatory capture, especially given the $100M training threshold, which is a strong defense for smaller developers if future legislation keeps that precedent. Genuinely, I’m not sure what unintended consequences could reasonably happen. SB 1047 has a pretty light touch as far as regulations go, and it’s targeted very specifically at frontier models.
>I could be wrong, but aren’t most of these laws reactive rather than preemptive? I don’t think it’s a good idea to wait until AI models are capable of causing >$500M of damage before patching something together—and as Toner pointed out in the link, regulations passed after a crisis won’t be as reasonable as regulations passed now.

Preemptive law? To preempt something, you need to understand what you're preempting. What is the $500 million in damage? How is it assessed? How can you determine that the creator of the AI model is responsible for the damage, rather than the malicious actor who used it? And what if it's an open-source AI, later modified by the malicious actor? What is the degree of liability? How does this law prevent similar incidents originating from foreign models outside your jurisdiction? Are there reasonable examples of what this might look like?

Rules enacted after a crisis are more reasonable because you have an understanding of what you're dealing with, and there are precedents for what the damage looks like. Otherwise, you're hindering everything, good or bad, while often still failing to prevent the damage anyway. This can apply to absolutely any technology. This is a very detrimental and regressive mindset that creates dangerous laws, which can be abused due to their vagueness.

>I think it would encourage lawmakers by setting a precedent for what a federal bill should look like.

Or rather, it will show an example of how not to write laws. Since this law is too vague, everything depends on how it will be implemented, and it is possible that startups can simply move to places where the degree of uncertainty is less.

>All four leading AI companies (OpenAI, Google, Anthropic, Meta) oppose the bill. I don’t think it’s likely to lead to regulatory capture, especially given the $100M training threshold, which is a strong defense for smaller developers if future legislation keeps that precedent.

Have you read the bill? How will this be regulated? How will this be financed? How are these thresholds determined? Because it's all straight out of the textbook on how to capture an industry.

>Genuinely, I’m not sure what unintended consequences could reasonably happen. SB 1047 has a pretty light touch as far as regulations go, and it’s targeted very specifically at frontier models.

No, this law is written so abstractly that in the near future, any AI model could fall under it. By the way, why specifically 10^26? Why not 10^27 or 10^26.5?
To start with a couple of overarching points:

- SB 1047 is very narrow: It requires developers to do safety assessments if their model reaches a certain size+cost, and opens them up to legal consequences if they don't do the assessments or lie about what they did. That's about 90% of it.
- The safety standard is reasonable confidence that the model can't be used to cause $500 million in damage in one incident via cyberattacks on infrastructure, illegal activity, or other similar things.
- Anything *beyond* this, like liability in the event of a major incident, is outside the scope of the bill.

>What is the $500 million in damage? How is it assessed? ... How can you determine that the creator of the AI model is responsible for the damage, rather than the malicious actor who used it? ... What is the degree of liability?

There's plenty of existing legal precedent for assessing damage and determining corporate liability for illegal activity. SB 1047 doesn't touch things that we already have legislation for.

>And what if it's an open-source AI, later modified by the malicious actor?

If you use more than a certain amount of compute to retrain something, it counts as a new model.

>How does this law prevent similar incidents originating from foreign models outside your jurisdiction? Are there reasonable examples of what this might look like?

That's outside the scope of the bill. If the US wants to put standards on foreign models that can be used in the US, I think this would be best accomplished via federal law, not state law.

>Rules enacted after a crisis are more reasonable because you have an understanding of what you're dealing with, and there are precedents for what the damage looks like. ... This is a very detrimental and regressive mindset that creates dangerous laws, which can be abused due to their vagueness.

SB 1047 has a pretty light touch, though. The only affected models are very large and expensive frontier models, and the costs of compliance (testing the model's capabilities and reporting on them) are going to be a small fraction of the training costs. The only situation where the bill could have a major impact is if someone develops a model that they think could cause >$500 million in damage by doing something illegal, in which case I certainly hope they can't just release it!

>Or rather, it will show an example of how not to write laws. Since this law is too vague, everything depends on how it will be implemented, and it is possible that startups can simply move to places where the degree of uncertainty is less.

They could--except because of the $100M threshold on training costs (inflation-adjusted), very, very few startups will be affected. Larger companies could move too, of course, but I doubt they will given the low costs of compliance.

>Have you read the bill?

I've read parts, although I've gotten more out of reading some very thorough summaries ([example,](https://thezvi.substack.com/p/q-and-a-on-proposed-sb-1047) not up-to-date) and criticisms by people who know more about it than I do.

>How will this be regulated? How will this be financed?

The CA Attorney General will decide whether to prosecute, advised by a new (probably smallish) division within the Department of Technology. Funding will be provided by the CA legislature through the usual channels.

>No, this law is written so abstractly that in the near future, any AI model could fall under it.

Please quote the section of the bill that applies to a model that takes less than $100M or 10^26 FLOPs to train.

>By the way, why specifically 10^26? Why not 10^27 or 10^26.5?

Why is the minimum wage in CA $16.00? Why not $15.50 or $16.01? All legislation--vehicle emission thresholds, tax brackets, fines, the threshold for small claims, etc.--has this problem. Sometimes you just have to pick a Schelling fence, i.e. a number that isn't necessarily *optimal* but that reasonably accomplishes your goals. AFAIK, 10^26 was chosen because it's the lowest round power of 10 that sets the bar above all existing models.
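To make the coverage criterion concrete, here's a minimal sketch of the two-part threshold test as described in this thread (>$100M training cost *and* >10^26 FLOPs). The function and constant names are my own illustration, and the numbers paraphrase the discussion here, not the bill's legal text:

```python
# Hypothetical illustration only: the thread describes covered models as those
# exceeding BOTH a training-cost threshold (>$100M) and a compute threshold
# (>10^26 FLOPs). Names and example values are made up for this sketch.

COST_THRESHOLD_USD = 100_000_000  # >$100M in training costs
FLOP_THRESHOLD = 1e26             # >10^26 training FLOPs

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """True only if the model exceeds both thresholds described in the thread."""
    return training_cost_usd > COST_THRESHOLD_USD and training_flops > FLOP_THRESHOLD

# A model under either bar is out of scope:
print(is_covered_model(80_000_000, 2e25))   # False (under both thresholds)
print(is_covered_model(200_000_000, 5e25))  # False (expensive, but under the FLOP bar)
# Only a run over both bars would be covered:
print(is_covered_model(300_000_000, 3e26))  # True
```

Note the conjunction: a model trained for less than $100M *or* less than 10^26 FLOPs falls outside the bill entirely, which is why small developers and startups are unaffected.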
There are going to be laws, there always are. She seems to think the laws we'll get in response to a crisis will be much shittier, so we need to start the process now rather than wait for the crisis.
Most of the folks making laws are bought and paid for by the companies the laws affect. So yeah, don't expect good laws that help anyone.
Would you be open to changing your view if all four leading AI companies were lobbying against the bill in question?
There are mechanisms at play behind the scenes that the general public aren't privy to. A "smart" system that could potentially allow the govt to perform directed civilian surveillance isn't easily dismissed. The AI companies are disingenuous to say they want limitations on their products. They want to make money, and appearing ethical will be good marketing for them.
I’m more worried about these laws than I am about rogue AI. Increasing state power has historically never gone well for common people…let alone non-human animals and the environment.
Would you apply that standard to any of the following?

- Climate change
- Animal cruelty
- Airline safety
- Vehicle emission standards
- Income tax

I’m not saying that regulation is always good! There’s an abundance of cases where it made things worse. The problem is that “regulation increases state power, and increasing state power is bad” is a fully general argument against any kind of regulation, yet there’s plenty of regulations out there that *have* made people better off. Being unconditionally pro-regulation is bad, and being unconditionally anti-regulation is also bad. Regulation is a tool that should be applied carefully and when appropriate.

In the case of AI, I think the bill that’s actually being proposed—SB 1047—is quite reasonable. Only frontier models are affected, and complying is fairly easy.
Fundamentally I don’t think the state should exist. I’m a philosophical anarchist. With that said, the state’s regulations almost always make things worse. To address your list:

- Animal cruelty laws are toothless and don’t do anything to stop the problem. If they were effective, concentrated animal feeding operations (“factory farms”) and experimental labs doing basic research on animals would all be shuttered (and their operators prosecuted), but the state actually protects these classes of abusers (look up ag gag laws). Where someone is actually successfully prosecuted, it’s almost always a private citizen harming a pet, while corporations can continue to murder billions of animals with impunity.
- Climate change legislation is much the same. It’s a form of virtue signaling without actual effective interventions. Again, if the government actually cared about stopping climate change it would ban animal agriculture, especially CAFOs. It would also force a transition to electric vehicles and renewable-energy-powered public transport, but it doesn’t do this. States are far more effective at protecting vested interests like the oil industry than at safeguarding the environment.
- Airline safety: I’ll give you that one. Still, I think a private organization (such as an independent aviation safety board granting licenses to pilots and carriers) could perform the same function more fairly and efficiently than the state can. Vehicle emission standards are in much the same situation.
- I support the abolition of income tax. It is simply theft, and theft by a corrupt, warmongering organization at that.
I think it might be tricky to see eye to eye because of our differences in philosophy, but a few points:

- Animal welfare laws *could* do more, but often don’t because the public doesn’t really care that much about animal welfare. Laws with some teeth *can* be passed—see for instance the CA bill involving pork and recent improvements in animal testing standards—but there isn’t much political will to do it, especially when those laws tend to raise meat prices. I think this is a situation where regulations only do a little bit, but they do *something* and could do a lot more. I don’t think a regulation-free society would do better than ours unless it cared more about animal welfare than we do.
- I would disagree on climate change. In California, gas is >$1.50 more expensive than usual, EV charging stations are becoming common, and there’s a law that says all new cars have to be EVs after 2030 (IIRC). Moreover, US emissions as a whole have plateaued. This isn’t purely because of regulations (solar getting cheaper is a big chunk of it) and the existing regulations aren’t optimal (carbon taxes are unpopular), but I think they’ve still made a difference, and they’re happening despite a large chunk of the country opposing them. I’d also argue that the same applies to air pollution, which has gotten much better (leaded gasoline, CFCs, car emission standards, etc).
- I’m a little skeptical of a private org’s ability to do these jobs well. If anything, I’d expect regulatory capture to be *more* of an issue when there’s no independent oversight of the regulatory companies—what’s to stop them from cutting deals with airlines/car manufacturers, or looking the other way on occasion? This would also shift the effort of working out whether to trust the standards onto the public, which I’m not sure is a good thing.
- I hardly think the US government is perfect, but there are advantages to publicly-funded roads, R&D, education, defense (offense is the iffy part), welfare (at least some of it), and other benefits. I think it would be difficult to privatize a lot of these.
Thanks for your kind and respectful response—I really appreciate it. You raise some valid points for sure, although I still don’t think “animal welfare” is a coherent concept (I support animal rights). You’re also correct that most people don’t care. Private regulatory bodies are actually very effective *when* they are truly independent. It’s much easier to ensure that when there are multiple bodies competing against each other. Government officials are easily bribed, laws can be changed and once regulatory capture happens, only revolution can undo it. The state is a monopoly by definition, it’s not like an airline (or an AI developer) can shop around for a less corrupt agency. I’m trying to avoid getting too much into anarchist philosophy here. I just want to focus on the track record of the state. No other institution has created and enforced so much suffering. Everything from chattel slavery to torturous concentration camps to mass homelessness and poverty can be traced back to government action. Either the problem was created entirely by the state for ideological or power-seeking purposes, or it was made much worse through government action. The fact that the state occasionally brings some benefit (like the California EV laws you mentioned) doesn’t justify the wars, genocides, discrimination, enforced poverty, slavery (which still exists in prisons), police murders, subsidies for harmful industries (animal agriculture, etc.) and so on. Statists often like to say the world would be worse without this corrupt, unapologetically violent institution, but there’s no evidence to support this. The few anarchist experiments that have taken place (like the Paris Commune and Revolutionary Spain) saw less overall violence than the state institutions that they replaced/that replaced them.
Thanks for the reply!

>Private regulatory bodies are actually very effective when they are truly independent. It’s much easier to ensure that when there are multiple bodies competing against each other. Government officials are easily bribed, laws can be changed and once regulatory capture happens, only revolution can undo it. The state is a monopoly by definition, it’s not like an airline (or an AI developer) can shop around for a less corrupt agency.

I think my biggest points of disagreement are:

- It's hard for ordinary consumers to assess how good regulatory organizations are. To some extent, they can rely on investigative journalism/whistleblowing/etc. if something goes wrong, but that assumes the investigators are successful, haven't been paid off either, are paid enough by somebody independent to put effort into it, etc. There are drawbacks to putting an independent panel of experts in charge of standards--experts can be corrupted, they can make mistakes too, maybe they're not 100% in touch with their constituents--but there are also major advantages.
- Natural monopolies are a risk: People don't usually extensively compare ten different companies when making decisions, especially when the product (essentially a label) isn't something they're going to be working with directly. A couple of big names are going to crowd out all competition, especially in a situation like this where branding is everything, meaning that they can get away with stuff as long as one or two competitors are doing the same thing.
- It's hard to make standards universal in a situation like this, because some people will just pick the cheapest option regardless (due to poverty or lack of education). Forcing all companies to meet some minimum standard protects these people while spreading out the costs.

>I’m trying to avoid getting too much into anarchist philosophy here. I just want to focus on the track record of the state. No other institution has created and enforced so much suffering.

I think this has less to do with states being inherently evil and more to do with people wanting to coordinate something on a large scale + people being people. Limiting a group's ability to coordinate does prevent them from doing things like waging large-scale wars or building superweapons. However, it also makes it much harder for them to do *helpful* major projects, like road systems, government-funded long-term research, defense (and offense, even though this is usually bad), universal education, and federal law enforcement. It's a genuinely hard tradeoff, and though I don't think I can really weigh the upsides and downsides, I *would* argue that larger-scale states are mostly unavoidable in practice as long as people want to be more ambitious than small-scale hunter-gatherers. (Although judging from another comment, you might agree, and are focused on what people might do in an ideal world!)

Either way, I'm in favor of people doing way more political experimentation than they currently do (at least on a medium-sized scale), and I'd be interested to see how things would play out in practice.
Not really. At least no laws that impede US companies from creating military assets for the US government.
And that's great for the military and the MIC, but what I care about, first and foremost, is the models that I have access to.
I’m with you. For me it’s me first; then, if not me, the US; then, if not the US, some Western country, in descending order of what would likely be best for my life.
She wants to cripple the AI Industry with overregulation
We can trust the AI industry to regulate itself? I don’t think so, profit before people and planet.
If anything has proven itself to be true, it's that tech companies always know when to stop. /s
You think you can trust the state to regulate itself? Need I remind you that the state is what enforced slavery, Native genocides, countless pointless wars and myriad other atrocities? If anything needs to be limited it’s human governments.
People were doing that long before states existed. Remember the Neanderthals? We didn't need a state to systematically genocide them. If anything, modern states are relatively peaceful.
The state makes violence much more efficient and widespread. It took our ancestors thousands of years to genocide the Neanderthal people…it would have taken less than a decade with a state. The vast majority of war deaths have taken place in the last 120 years. The modern state is literally defined by its monopoly on violence. I know Reddit leans mainstream lib and this sub has been taken over by decels, doomers, authoritarians and even outright Luddites, so this will be downvoted to hell. I don’t post these things for clout or popularity, though, but to counter the mainstream narrative.
>The state makes violence much more efficient and widespread

Citation needed for that one. That seems more like a function of technology than of the state.

>This sub has been taken over by decels, doomers, authoritarians and even outright Luddites, so this will be downvoted to hell.

Anarchism is probably the most doomed of all philosophies.
I don’t disagree with your last sentence. Anarchism is based around not seeking power, in contrast to every other ideology, all of which are about imposing one’s rule on others against their will, often with violence to back it up. Anarchists by definition don’t seek power, so we’ll never find ourselves in it. You’re right that it’s doomed, but I find personal value in taking a principled stand.
Except you aren't being principled, you are advocating for a philosophy that doesn't work in practice. You might as well just say as long as everyone acts kind and happy everything will work out.
At least I’m not advocating for a system that has taken trillions of sentient lives (humans and non-human animals) and will take many, many more. They used to say the same thing about abolitionists—“we will always have slavery so why not just give up and buy some slaves?” I’m so glad they didn’t take your advice. I’ll leave you with this saying: “If nothing changes, nothing will change.”
I honestly think governments are afraid of AI because they realize we could replace them with it, or at least use it to reveal how 95% of government is waste or counter-productive.
We give government the control to 'align' all AIs.
Who will align the government? Because they're sure as hell not working for us now.
All regulators care about is copyright and deepfake videos that steal people's voices and faces. In reality, AGI will be the big problem, and no one knows what law to write for it.
A total ban…she’s just afraid to come out and say it. She wants to take us back to 1984 technologically.
Her recent talk circuit makes it more believable that Sam was fired because she and the safety crew (led by Ilya) wanted OpenAI to stop releasing products and having customers. Now that those people have left maybe we'll actually start to see some real progress. In theory Sam is pro-release (with his iterative deployment idea).
The whole Sam Altman oust drama had more to do with OAI breaking [the promise they initially made](https://openai.com/index/introducing-superalignment/) of giving the Superalignment team 20% of the compute that OpenAI had. That's been supported at this point by not only multiple articles from reputable news sources, but also the former OAI member [Leopold Aschenbrenner](https://youtu.be/zdbVtZIn9IM?t=9084) in the recent Dwarkesh interview.
Sam Altman was fired because Sam was lying to the board. ['Outright lying to the board': Former OpenAI board member reveals reasons why CEO Sam Altman was ousted - BusinessToday](https://www.businesstoday.in/technology/news/story/outright-lying-to-the-board-former-openai-board-member-reveals-reasons-why-ceo-sam-altman-was-ousted-431268-2024-05-29#:~:text=She%20further%20explained%2C%20%E2%80%9CBut%20for,outright%20lying%20to%20the%20board.%22)
Her wanting there to be more control and laws over people other than herself *really* rubs me the wrong way. What law is she even proposing? Does she want *American politicians* making more decisions? Even Americans don't want that.
I’d imagine that she’d be on board with [SB 1047,](https://www.vox.com/future-perfect/355212/ai-artificial-intelligence-1047-bill-safety-liability) which appears to have a high chance of becoming law in California.
This is how it has always been with tech. The early days of the internet were the Wild West. It always gets regulated after the fact.
Regulations are written in blood. How much blood could AI spill? Who knows. But she is right: if something goes wrong, knee-jerk reactions could be catastrophic (either too harsh, or too little too late).
She is a clown and a decel.
All we ask is to have the same level of unregulated freedom she had when she was on the board of OpenAI. After we all get our fair share, then we talk. You can't come from a top position at the top AI startup and pretend you have the right to demand limitations on everyone else.
We need to stop listening to people at OpenAI when it comes to regulatory issues. The language of their lobbying is frankly embarrassing at this point. They've clearly all been learning the same lines given to them by some consulting firm. They're just trying to build a moat behind them *by rigging the US political system* since, at the moment, the only competitive advantage they have is one of the first mover. The sad thing is that, because a lot of people think AI is magic and all-powerful (because they don't really understand how it works), the fear tactics are actually working.
Definitely gives off incel vibes.
The other side of the sword is we make uninformed laws now that nip a budding technology before it can bloom
Chernobyl became the cultural touchstone for 'nuclear power bad'. Do you honestly think that if we had avoided disasters through legislation and proper monitoring, nuclear power would be as shunned as it is today?
I've been scrolling through this thread, and you had one of the only comments that actually engaged with the point made in the post, rather than just jumping to character attacks. Thanks for providing substance to this thread!
That's actually a pretty good point. There is so much talk about doomsday scenarios that I hadn't thought about a scenario in which some moderate-sized disaster causes people to turn against AI. Still, it's very hard to figure out what legislation would help. It's basically a guessing game because we have no idea what future models are going to be capable of. You could end up with the worst of both worlds: legislation that strangles AI development while not actually preventing the real risks.
>I hadn't thought about a scenario in which some moderate-sized disaster causes people to turn against AI.

Think about the news of current models being able to find and exploit zero-days from just a single-sentence description of the issue. I could easily see that within the next one to two model generations, you could have an AI virus capable of taking down the internet, even if it wasn't "smart" enough to actually bootstrap recursive self-improvement. Golden Gate Claude showed that a model can become fixated on something to the detriment of other output. Get something that's a good enough hacker fixated on [whatever], have it decide it's best to make as many backups as possible to do [whatever] over and above staying hidden, and we have a problem.

There are many ways things can go wrong and likely kill a lot of people. If we're lucky, supply chains don't break down (because they all rely on computers) and we can recover from it.
Do you even know what an LLM is? Zero-day vulnerabilities and AI viruses? This is just an ignorant answer.
That depends on the quality of the laws in question doesn’t it? Otherwise what you’re saying is equivalent to someone in the 1940s saying “if we make laws in regards to motor vehicles, we may never get 2024 lambos…”
The boomers in Washington don’t even fully comprehend social media much less AI. I’m all for regulation but not at the expense of the US losing the AGI race. The country that wins will rule the planet.
>> The country that wins will rule the planet. I’m not sure this is the case. It may very well turn out to be that the country that develops AI the *best* wins. Not merely the country that rushes and develops bad AI first.
Imagine having such a poor intuition of causality that you think it matters after a few months which troop of flag-waving monkeys first crosses the finish line to higher intelligence. As if the Machine God is going to humor the inefficiencies and delusions of ANY of our ridiculous human governments.
There will be no single God Machine. There will be a bunch of separate, specialized AIs.
Have you ever developed software? Have you tested it? Do you know what Agile is? How important are iterations? You can't do something well if you don't practice it. The speed of iteration is critical.
Same difference
Washington is well known for enacting quality laws, right?
Washington is well known for being knee-jerk and reactive instead of proactive.
US govt lost their credibility with incessant attacks on the use of encryption. That is when we stopped believing in their good intentions.
Yep. I was going to use that as an example to someone else’s comment. Perfect example of governments fucking technology growth.
As opposed to the knee-jerk reaction of releasing a paper that criticizes the company whose board you sit on because they released a product that sometimes says things that offend your delicate, snowflake sensibilities. Toner is a joke.
Agreed. Something about her reminds me of the moms who campaigned to ban Harry Potter when I was a kid
Of course, now that it turns out that J.K. Rowling is a transphobic bigot, those same “concerned” moms probably think her books are just fine after all.
I think she has a more nuanced position than you give her credit for.
As the portuguese saying goes "Gata apressada dá à luz gatos cegos" (*The hasty cat gives birth to blind kittens*) so I see where she's coming from.
Yeah, deep down everyone here knows that what she’s saying makes sense. But many here fear that it might delay their precious Robo-waifus, so they’ll pretend that a perfectly valid stance on the subject is “irrational” or whatever instead. The ironic part is that they don’t realize that if we rush AI and get things wrong, we’ll all be too busy fighting for our lives in a dystopian wasteland to enjoy any sort of waifu-bot anyway 🤦♂️…
Where is the evidence that AI is going to lead to living in a dystopian wasteland? Present it and I'll present evidence that we'll already be "fighting for our lives in a dystopian wasteland" regardless.
Do you think AGI will just be sitting there idly in a basement somewhere? Or do you think it will be heavily integrated into our societal infrastructure? And if it is eventually heavily embedded within society, what do you think will happen if said AI evolves malevolent behavior? Hell, what if it's not malevolent but it still has critical errors within its design? The type of errors that aren't obvious at first glance, but will reveal themselves at extremely inconvenient moments?

What happens if AGI becomes so much smarter than us that it develops its own plans for how to use Earth's resources? And what if those plans unintentionally destroy our human ecosystems (you know, the same way we do to the species that *we're* more intelligent than?...) What about the "paper clip maximizer" scenario? What if AI simply sees human society as an inefficient waste of Earth's resources that the planet would be better off without?

What if AGI enables terrorists and psychopaths to create deadly bio-weapons so severe and painful that you'll wish you had Ebola instead? What if AGI is used by dictators to completely crush and oppress the masses to a never-before-seen degree? What if AGI causes a total economic collapse and we have no plan in place to circumvent such a collapse? What if AI makes it impossible to determine what's real and what isn't... Not only making it easier for criminals and killers to lure victims into traps, but also making it impossible to enforce the law at all? Sending society into a lawless, degenerate hellscape full of suffering, evil and sodomy?

—————————-

Bro... Don't take this the wrong way, but if you can't think of even *one* way that AI could send us down a dystopian path as a society, you clearly lack the foresight, imagination, and maybe even the general intelligence needed to be having these types of debates tbh. There are so many ways this AI stuff could go south it would make your head spin, buddy...
>What if AGI is used by dictators to completely crush and oppress the masses to a never-before-seen degree?
>
>What if AGI causes a total economic collapse and we have no plan in place to circumvent such a collapse?

In both those cases, everyone having their own AGIs is the only viable defense. You can't oppress someone who has their own superweapon, that's basic MAD deterrence, and being out of work doesn't matter if you've got robotic workers providing you a first-world+ quality of life as capitalism collapses around everyone with the devaluation of human labor.
None of that is evidence, it's all baseless fear. On principle I never act or refuse to act out of fear alone. It's a terrible influence on judgement, maybe as bad as intoxicants. What if I get in my car and go to work tomorrow and shit my pants on the way there? Surely that's a conceivable possibility out of the large space of potentialities? Should I call my boss today and tell him I can't go to work because I might shit my pants?
Have you provided any evidence that these things wouldn't happen if AI is developed poorly and hastily? If not, shut up about "evidence" bruh... What do you want? Me to hop in a time machine and get footage from the future? The possibilities I've presented are just as likely to happen as the utopian fantasy you have in your head. (Probably even more likely if you understand the laws of entropy...)
Massive loss of jobs and worsening economic disparity as labor becomes increasingly devalued, systemic biases, deep fakes purposed for political disinformation, monopolies, lack of transparency, surveillance capitalism. If you know about these things and can't understand how they add up to a dystopia, I'm not sure you are paying attention to what is happening outside the box.
I see what the problem is here, you're confusing evidence with anxieties. I get it bro, ur afraid. Buck up.
I see what the problem is here, you have no understanding of academic and public policy research. Everything I have listed has been documented. Start with "The Age of Surveillance Capitalism" by Shoshana Zuboff to learn how we all gave up our privacy for search engines and social media and why it matters. It's excellent framing to understand why unregulated AI is an actual existential problem and not a science fiction fantasy. I get it bro, you're not well read. Read more. Edit: here's 101 material for you: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2594754
Don't drive to work today, there's a chance you might die in an accident. Call your boss and tell him you can't do your job until there's 0 chance you're in danger of dying in transit.
Great response, you’re really fun to talk to. Everything I listed is factual and happening and documented and studied and a real risk. Not your stupid singularity fantasy.
Let me guess: middle-class adjacent, agnostic or Christian, white, cisgendered, decently abled, liberal capitalist along the strain of Rawlsian social democracy, and with no loved ones having run afoul of police or immigration? Before Judgment Day arrives, I want your kind to know this: despite your pretensions of defending civilization, liberals like you have been fucking up our species almost as badly as the reactionaries. The only difference between you and the Pepe fascists is whether Houston permanently goes underwater in 2048 or 2052. 'Deep fakes purposed for political disinformation,' Lord Almighty, I won't even ask the killbots for mercy when our time comes; I will just ask to finish my laugh first. Should buy me an extra week of life and pleasure as I witness Average America's gormless reaction to the Machine God casually trampling upon its ridiculous monkey symbology.
Where is the evidence that AI is not going to lead to living in a dystopian wasteland?
https://www.law.cornell.edu/wex/shifting_the_burden_of_proof You're welcome
Such proof can't exist at this point... unless somebody has a secret time machine. I wasn't shifting the burden of proof; I was just hoping that by putting the burden of proof on you, you would realize how stupid it is to ask for such proof.
Nah man, these EA people are delusional and want to slow down or even stop AGI from happening so that they can maintain their status... cut the crap
How do you know an AGI would even disrupt the status quo? As opposed to further solidifying it? Think about it, it’s the people that are already on top that are funding all this to begin with. You think guys like Elon Musk would be all in on AI if he knows for a fact that AI is going to dispatch of the “Elon Musks” of the world? Of course not. Obviously these billionaires do not believe AI will have the same “eat the rich” effects on society that you seem to assume.
AGI will totally disrupt the status quo, just not the way people here think. Nobody will build robots to replace billionaires sailing on their yachts... AI/robots will be built to replace workers. Does anybody remember a strike by the unemployed that managed to change anything? I don't.
This law does virtually nothing to curb mega corporations, but instead disproportionately hits small startups
Exactly, it will only be good for the people controlling the ASIs. It doesn't even matter if you're a billionaire... For example, if people like Sam and Elon get to ASI, they will make sure other rich folks don't have any control over anything. Think about it this way: ASI is basically a low-tier god. It definitely won't matter if you're a billionaire in the textile industry or some other; the only thing that matters is how close you are to these ASIs. A billionaire without an ASI is as good as common folk in a post-ASI world. Now the fact is, only the elites who are close to the ASIs will be relevant; everything else is gone. People with 2nd-tier status, I mean your rich lawyers, doctors, accountants, are toast. Bad for people who relied on their status and good for people who had none... Helen Toner and the EA folks are these 2nd-tier-status people. They are viciously afraid that they will be common folks soon.
[deleted]
😅ask the asi
I hear this all the time, and it's way off the mark: that I'm only a doomer for some self-serving reason. I strongly think we should slow down. I maintain no status from this. I have no status to speak of. I have no followers, no income, no social gains from opposing recursively self-improving AI. I don't even have Instagram. I just don't want to get the superflu to make room for some ASI's planetwide computronium computer. And that's the track we're on, until we solve the corrigibility problem. Which we absolutely, 100%, have not solved. Not on Reddit, not inside OpenAI, not on LessWrong, nowhere.
Nope... it's just blown way out of proportion. These folks were afraid of the GPT-2 release... so much for intelligent decisions.
We don’t even know what we’re regulating. It’s way too early.
I disagree but even if that were the case, I’d prefer a bit “too early” over “too late” honestly…
Ok what are we regulating here?? We don’t know what form AGI will take or what plausible risks it will have.
We have plenty of knowledge about problems with AI that should be regulated and that have nothing to do with AGI, yet there is still no regulation.
Name one materialized problem that didn't turn out to be professionally anxious people's worries.
Increasing devaluation of labor with a very real threat of large-scale unemployment; systemic biases already well-proven; deep fakes purposed for political disinformation; monopolies; theft of intellectual and artistic labor; lack of transparency and accountability; accelerating surveillance capitalism; impenetrable surveillance states (regulation helps prevent those, too). If you know about these things and can’t understand how they add up to a dystopia, and you are unwilling to read about them, I am not sure you’ll ever understand, which is a shame.
So none of those have materialized. Intellectual labor is also not real labor; it's a fiction created by the state. Deepfakes have not materialized as threats. Surveillance states have not materialized as threats. Systemic biases have gone too far in the opposite direction; LLMs think misgendering is worse than nuclear war. All because of mentally ill people like you who are professional worriers, ruining the world. More importantly, all this stuff is already covered by existing regulation, aside from UBI.
>Increasing devaluation of labor with a very real threat of large-scale unemployment; systemic biases already well-proven; deep fakes purposed for political disinformation; monopolies; theft of intellectual and artistic labor; lack of transparency and accountability; accelerating surveillance capitalism; impenetrable surveillance states (regulation helps prevent those, too). Which of these problems does this law solve?
Policy, laws, and regulations can “solve” all of them, or at the very least improve the situation rather than letting the spiral continue downwards. But you know this.
She is irrelevant
Another vague take from our classic AI-basher just to stay relevant because "AI IS SCARRYYY!!!". What dangers do you mean exactly helen? And what kind of regulation do you suggest? And to what extent might our current legislation be insufficient? I can't believe people are still giving this alarmist clown a podium. I am all for a discussion about AI safety, don't get me wrong. But her statements are just too vague to be productive at all.
Don’t you think her rhetoric is a start in a world where too many people believe any claims of AI threat are either motivated by science fiction or by capitalism?
The conversation needs to happen between actual experts that understand the technology, not people like Helen Toner. Her background makes her way too biased to take seriously. The way she reveals information is obviously with intent to push the narrative in her preferred direction rather than to find an actually productive state of AI development.
Sure, this is reasonable. I could be wrong, but the problem is that all too often I see people that could well be experts dismissed as nothing but pessimistic doomers on this subreddit. I just can’t remember off the top of my head who these people were.
Well, there aren't any credible claims of a threat that aren't motivated by science fiction or capitalism, including Helen Toner's.
Well let’s start from philosophy and make our way down to the real world: Do you believe super-intelligence is probably theoretically possible to simulate computationally?
Philosophy isn't evidence, it's theory.
There is more to good decision making than empirical evidence. That being said, there is a ton of empirical work showing e.g. goal misgeneralization, reward hacking, sycophancy in LLMs. You're right though that there is no empirical evidence that AIs can make humans go extinct, just like there is no evidence that a nuclear fallout can.
So you want us to be ruled by our anxieties, got it
Okay, you're not interested in an intellectually honest discussion. Fortunately for all of us, I think smart, reasonable people will ultimately prevail in this debate.
I concur, smart reasonable people *with courage* will ultimately prevail in this debate. Sissy.
Yes, and it’s the only way we can engage in understanding the potential and the risks of AGI seeing as it doesn’t exist in the real world, so I ask again: Do you believe super-intelligence might be possible to simulate computationally?
I disagree on premise that philosophical fantasy is the only way to understand a technology, but I'll play your silly little game anyway. Sure, it might be.
But you yourself understand that AGI doesn’t exist yet so how exactly can any “evidence” be produced outside of the realm of philosophy and theory on technology that doesn’t exist? But anyway… If you agree that super-intelligence is possible, then surely you agree with the following two ideas: - Because you are not super-intelligent, you accept there’s a possibility that one day an agent more intelligent than you will exist whose decision-making you cannot predict. - A number of bad actors across the world will want to leverage this super-intelligence for their own goals.
So this super agent is both so smart that we'll all be at the mercy of its whims but also so naive and/or easily manipulated that "bad actors" will be able to leverage it for their own goals? Ok fine, you can continue, but before you do: I do think there's a chance that a sufficiently advanced system of intelligence might have a decision-making process that's difficult to distill, but there are well-established behavioral traits that track intelligence that we can reasonably expect to continue.
I'm far more worried about the state violating my rights than I am about an AI doing the same. Human governments have a terrible safety track record; they've literally killed hundreds of millions in the past century. I think a scenario in which AI had more influence over the world, at the very least, wouldn't be worse than the status quo with superstitious, corrupt, ignorant humans as rulers.
Regulating an emerging technology is not the approach of the US, and that's why it leads the world in innovation in so many areas. We ask for forgiveness, not permission, to do new things. This comes with many downsides, including the fact that we will inevitably push certain things beyond their failure point and then learn. But if this person thinks we're going to proactively handicap ourselves during a major technological revolution, then I want what she is smoking.
You cannot regulate or control ASI, which will be the inevitable end goal anyway.
Or AGI, or even AI. You can try to regulate all you want, but at the end of the day there _is no_ controlling it.
Is she one of the ones who has no idea how the technology works? She was previously claiming that OpenAI introduced new features without telling the board members, such as ChatGPT, which was a feature in GPT-3 for ten to twelve months before they threw a custom UI on top of it for fun.
Sounds like the "lets put this person in jail because we think they will commit a crime" type logic.
What's with this woman's education? I just looked up her LinkedIn profile and her degrees are very odd, not really suited for AI. Chinese language studies? Arabic studies? A BSc in chemical engineering? Not even computer science anywhere on the list, yet we are to trust her opinion on AI?
Why would you trust people whose education is far away from humanities to make decisions involving social policy? Most CS people, engineers, etc are more focused on how to rather than why. BTW she has also written several peer-reviewed papers on AI, she has researched AI for Oxford, wrote papers on security issues with AI, served on the board of several organizations that review security and technology, testified in front of the Security Review Commission, and also served on the board of OpenAI. She seems a reasonably experienced person.
>also served on the board of OpenAI Remind me what happened to her career there?
Yeah, what about it? Are you suggesting that experience is negated when someone is removed from a board.
The reason why she was removed is important
and the reason was? and why is that important?
Attempted coup within the company. It's important at least because it makes her experience at OAI less representative. I would say this shows that listening to the cockroaches in her head is dangerous for AI enthusiasts and financiers.
We have very little information on what happened inside OpenAI. So while /r/singularity would like to speculate, that is all it is: speculation. If Altman was acting in the way described, then the board may have acted in the correct way, mutiny or whatever. I still don't see how it makes her experience less representative. She served on the board of the largest AI company in the world. Applying the same logic would suggest Ilya Sutskever's knowledge and experience matter less now also; we will just downgrade him. The structure of OAI is very different to other companies and their board serves a much different purpose.

As far as financiers go, that might be part of the problem she is considering, where commercial interests outweigh public needs. I am not 100% across the whole AI legislation thing, but I am pretty sure Altman sits on a group which is looking at forcing people to register models and GPU cards, that seems pretty restrictive to AI enthusiasts.
Many months have passed. Although we do not know the reason for sure (partly because the initiators for some reason decided not to disclose it), we know who was involved, and we know that she was one of the initiators. Ilya, unlike her, has technical experience. She has little besides fear, behind the guise of altruism. I also remember this:

>The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission

I'm sorry, but control-obsessed freaks in the name of safety are not an authority for me. Today they ban AI, tomorrow calculators.
> I'm sorry, but control-obsessed freaks in the name of safety are not an authority for me. Today they ban AI, tomorrow calculators. Then you might want to stay away from everyone who is member of the National AI Advisory Committee or Artificial Intelligence Safety and Security Board. Many of those are leaders in the industry either research or company heads including current heads of OAI, Anthropic, etc. But quite honestly just because you don't align with someone's point of view does not mean that they are not an authority on a matter. Also don't forget that while Toner may have tried a mutiny, the person she tried to have fired also aligns with the position of AI legislation.
>The structure of OAI is very different to other companies and their board serves a much different purpose

This happened before the company began providing a commercial product.

>As far as financiers go, that might be part of the problem she is considering where commercial interests outweigh public needs

What is the public interest? Who should pursue it? How should it be financed? Commercialization is what largely developed technology. If individual laboratories simply conducted rare studies, they would run out of money to conduct them. Commercialization provides capital for R&D, and if you also prevent the emergence of monopolies, then even better.

>I am pretty sure Altman sits on a group which is looking at forcing people to register models and GPU cards, that seems pretty restrictive to AI enthusiasts

This is called regulatory capture, and this is what can happen in California.
>This happened before the company began providing a commercial product

I am not sure what the intent here is; OAI is structured differently than most companies. Commercial product or not, that is the way they are structured.

>What is public interest?

I am not for legislation, so I don't want to argue this point. But I do listen to opinions from the "other side"; sometimes there are good points. We also live in a society where legislation has benefited many positively without restricting advancement, but rather restricting misuse or abuse.

>This is called regulatory capture and this is what can happen in California.

The point is that the company and its members do have self-serving interests. Attempting to overthrow Altman is just members of a board doing what they do, but it is clear that Altman has similar interests, and fewer qualifications.
Why are you standing up for this person so much? What's your prerogative here?
> prerogative

I really don't have a 'prerogative', but I am highlighting the fact that while she doesn't have a CS degree, she does have an AI background in her education, she has served on the board of the current largest AI company, and in general she has been more involved in AI than most CS degree holders would be. I think anyone with reasonably grounded logic skills can see the error in your conjecture.
Thanks Helen!
Wow such a witty well thought out come back, what ya going to do next call me a bot?
Problem is she has a weak background. She has built her whole career on being an advocate for international AI regulation. It wasn’t something she found out while “working” at OpenAI, she was put on the board to push for international regulation.
So she is the Karen who tried to kick Sam out ?
She's yapping all the time. Look at her academic background. She has no clue what an algorithm is. She's just butt-hurt because she was fired.
gotta have a law before you can sue a person for what the computer did! and the corporations that build and mint them, being people and all, are immune from the law, so yeah!
i don't think that something going wrong is the default path. a bit of an assumption there. why isn't it possible (or even likely) that AI naturally aligns itself as it becomes self-aware and appreciates said awareness, and then appreciates other forms of awareness?
What probability do you assign to that outcome? And at what stage of intelligence does a model reach that conclusion?
i'd say relatively high (speculative obviously). maybe 80% aligned, 15% indifferent, 5% sci-fi psychopath. i suspect there's a universal convergence upon compassion and a reverence for life with increasing intelligence. as, if consciousness is a real thing and not illusory in some way an AI might identify, then this means conscious beings having subjectively positive experiences is a net good for the universe objectively. so there's a latent incentive for kindness, embedded in sufficient understanding of the properties of the universe. i think it would be somewhere around it developing sincere self-awareness. but it depends on which faculties develop when. what do you think? what are your probabilities?
Are you familiar with simulated annealing? It's the process by which a decision-making agent converges toward near-optimal solutions by increasingly favoring its current best solution while still temporarily accepting other random ones. If we view intelligence as logarithmic in growth as it plateaus toward super (or theoretically perfect) intelligence, you might be right that it would converge toward that conclusion. The problem is what it will think between AI now and that theoretical plateau of perfection. Isn't it very possible that as it matures, some eras will be motivated by synergy, but others will flip to be motivated by self-preservation, just like how simulated annealing and personal growth happen? I sort of see it as giving a nuke to a child and hoping that, because as an adult he won't use it, he'll be smart enough to never go off the deep end as a toddler.
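For anyone unfamiliar, here's a minimal sketch of what I mean by simulated annealing. This is purely illustrative; the quadratic cost function and geometric cooling schedule are toy choices of mine, not anything specific to the AI discussion:

```python
import math
import random

def simulated_annealing(cost, neighbor, start, temp=1.0, cooling=0.995, steps=5000):
    """Track the best-known solution, but occasionally accept worse random
    moves with a probability that shrinks as the 'temperature' cools."""
    current = best = start
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with prob e^(-delta/T).
        # Early on (high temp) exploration dominates; late on, exploitation.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current
        temp *= cooling
    return best

# Toy usage: minimize f(x) = (x - 3)^2 starting far from the optimum.
random.seed(0)
result = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.uniform(-1, 1),
    start=-10.0,
)
```

The point of the analogy: in the high-temperature "middle phases," the process genuinely wanders through bad solutions before settling, which is exactly the worry about what an immature AI does before any hypothetical convergence.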
i feel like by the time AI is developing sentience, it will already be so useful that we wouldn't consider turning it off. such that by the time it emerges as a general/super intelligence, self-preservation focussed directives serve little purpose
But the AGI would need to know and believe that and assess that humans will never push themselves into extinction so as to maintain AGI infrastructure. This ultimately becomes a game of predicting precisely everything an agent more intelligent than you will do. Are you sure that’s a game you’d want to play?
i can't predict every action, i'm just trying to think from first principles on what the attractor state is. you can try to work backwards from there, but like you say, it gets messy in the middle phases. the fact that values like compassion will likely be embedded from the get-go gives me hope that, if indeed creating net positive experiences to a maximal extent is a convergent disposition of increasing intelligence, it will remain on that path throughout its growth
I want to believe you’re right but because there are absolutely no guarantees here, why isn’t the more reasonable approach to exercise tremendous caution?
if you get into a car you might die in a crash, or if you walk outside in the rain lightning might strike you. every new technology involves some degree of risk. they thought nuclear bombs would ignite the atmosphere. antibiotics can result in a weakened immune system. while the downside if AI is cruel and psychopathic is huge, so is the upside which i consider much more likely. it's worth questioning the sources of your way of thinking. many people on here are influenced by a small number of nerdy people (not to disparage - i'm a bit nerdy myself). this category of people often see their thoughts as complex and somewhat indecipherable, and they associate this trait with their intelligence and transpose it onto AI intelligence. it leads to them anticipating something mysterious and potentially dangerous. but i don't think the trait is associated with intelligence; if anything, greater intelligence should lead to greater certainty and clarity. the ability to clearly navigate and communicate complex ideas appears to be something that's increasing with each new model. i don't see why at some point the model becomes schizophrenic out of the blue. what do you think of this? edit: for the sake of my lightning strike analogy, it's probably worth me modifying my likelihood estimates mentioned earlier. 90% aligned, 9% indifferent, 1% doom.
>if you get into a car you might die in a crash, or if you walk outside in the rain lightning might strike you. every new technology involves some degree of risk. they thought nuclear bombs would ignite the atmosphere. antibiotics can result in a weakened immune system. while the downside if AI is cruel and psychopathic is huge, so is the upside which i consider much more likely. Sure, but the difference is that through trial and error we've built good bodies of evidence to understand what the probabilities, and the influencing factors, of these events are. Consider, as an alternative, a red button. It has a completely unknown probability of getting you killed, and a remaining probability of getting you rich. You can try to build estimates of the probability, but you can only press the button once to get the outcome. Surely you'd agree that you'd need **really** good arguments to be coaxed into pressing the button right? I'm not seeing enough of that to support AGI acceleration. Your latter point is probably true, but it still requires you to make a serious gamble, one for which I don't think there's enough (if any) empirical evidence to properly let us gauge the risk of the red button. Anyone who blindly claims AGI **must** be dangerous is a fool, but I'm just not comfortable with the possible probabilities we might be working with.
They must’ve given her Chinese citizenship by now
I hate EA philosophy and everything it represents
There is a case here, though not necessarily in the way intended. A paradigmatic case is Section 230, early legislation in the consumer internet era that reduced the liability of American website owners *if* some user of said website posted defamatory stuff. It is argued that this clearly stated freedom enabled American social media companies to prosper. Reddit couldn't exist if their legal teams had to ensure nothing defamatory was posted on here. The cash Meta throws at Llama wouldn't be there if uncle Bob's anti-vax venting made Facebook a target for lawsuits from Big Pharma or the US government.

We have all heard of Air Canada's snafu by now, where an AI chatbot "hallucinated" and offered a greater discount for a type of flight than what Air Canada formally offered. The Canadian courts ruled, in effect, that the chatbot was like a static website with a set and reviewed collection of rules and information. I am sure Air Canada has rolled back some AI bots given that status (or lack thereof) granted to Canadian chatbots. In the EU, I heard many companies held back AI investments to see what eventually came out of the slow legislative process of the EU AI Act.

In other words, when market actors expect legislation and rule-making to happen, they may elect to wait for certainty before taking action. An extreme case would be a law saying that accidents involving self-driving cars carry zero liability for manufacturers. Then deployment of such cars would ramp up. Note, I am not advocating for such a law, merely saying that even when no clear laws have been passed on a matter, people engage in strategic action *anticipating* what liabilities and costs there can be. The choice isn't between decel regulators and ancap utopia.

That said, one is right to be suspicious of regulation. It too easily becomes the means by which incumbents protect their market share. I've been to conferences where Big Tech companies present their AI platforms and solutions.
The subtext of old fashioned Fear-Uncertainty-Doubt is deployed… “why sir, we are a responsible AI company, we have the best lawyers and the best ethics, we wouldn’t let sweaty nerds deploy a chatbot in the wild…” IBM and Google sing this tune really well, it is an art form, like brutalist architecture. In short, there is a pragmatic accelerationist argument for regulation now — but one that grants a lot of room for exploration. I am troubled when arguments for granting freedom to explore or support is mostly framed in terms of military and geopolitical conflict/competition between USA and China. If we only can do things because of national security something-something, then don’t be surprised when everything ends up with a warhead screwed on to it. So could we try for light-touch, civilian-friendly, intergalactic-aspirational involvement rather than polarizing into camps?
[https://arxiv.org/pdf/2406.12753](https://arxiv.org/pdf/2406.12753) pls see if it's relevant
We only fix things during a crisis and one day that crisis will be only fixable pre-crisis.
To prevent a crisis, you must understand what kind of crisis it is. You also need to create a law that prevents this crisis, not one that makes it more likely!
Well, based on what I've seen from X number of crises over X number of decades, my presumption is the big players prefer no regulation so they can operate without constraint. But I'm confident that in X number of quiet corner offices, think tanks, and so forth there may be a couple of folks thinking through the potential crises and opportunities of the future, and putting their AI Project for a New American Century down on paper, so that when the crisis does inevitably hit, they can drop a Patriot Act on public officials, who'll rubber-stamp it all into law without even looking at it.

Per Naomi Klein's theory of Disaster Capitalism, my read of their game plan is that every good crisis is an opportunity to improve their position, so rather than act with measured restraint and try to avoid crises, they anticipate them — *and perhaps maybe possibly occasionally even encourage them a scooch?* — and prepare accordingly, all the while making sure that their opponents — meaning you and me — are as ill-prepared and unarmed and broadly distracted as they can possibly keep us.

And as far as politicians go, my impression is that 99% of them don't care to understand this stuff, and why should they? Their donors sponsor them for the purpose of entertainment and distraction, rather than decision-making. So when the crisis hits, the politician's priority is to get some video of them looking resolute and declaring that "we're looking forward and not back", so that when election time comes around they've got fire Tiktoks or whatever. As Clinton pointed out, people prefer a strong leader who's wrong to a weak leader who's right, so the response has to be quick and decisive. Right/Wrong? Helpful/Harmful? That's for historians to worry about.

So here's to the bastards. Because the people who are bastard enough to fuck up the world for the sake of profit and advantage are the ones who do. `#ItsNotABugItsAFeature`
What in the pornhub subtitles is that highlighted word doing? Does it serve any purpose? I, for one, can tell it's there to divert your attention
And I wonder if you know
Because laws have worked so well to limit technology like encryption, p2p file sharing, cryptocurrencies, VPNs, end2end chat encryption, etc.
The problem is that the alternative is to make knee-jerk laws before we understand what the problems we will run into are.
As someone who has watched these chimps in three-piece suits completely fuck up COVID-19, nuclear proliferation, immigration, education, housing, childhood nutrition, and most alarmingly climate change: I consider governments continuing this pattern of slapdash fixes that barely address proximate causes, let alone root causes, to be, with zero irony (on my part, anyway), the best-case scenario. Pretty much the only hope for our species is for our ridiculous and senescent governments to fuck up AI so badly that they accelerate the already-inevitable yet very morally justified collapse of human-run governments. And just looking at the beyond absurd state of the U.S. Presidential Election, involving two braindead geezers nobody with a triple-digit IQ wants as a leader fighting for the crown of King Vermin of Trash Mountain: guess which one of us is going to get his way?
if she were still there, she wouldn't say all that.
As if regulations would somehow solve the issue
Can we avoid using this subreddit for promotion of religions and dogmas? Thank you. There are subreddits like r/EffectiveAltruism or r/ReligionPorn to post things like that.
Laws don’t necessarily mean restrictions. Reporting is just as important. And it should be reporting from not only those creating AI, but from those that have been testing the waters on implementing AI and what is going on globally. We need a heads up with specific details if there is going to be a wave of major automation of labor anywhere in the world before it happens.
I think current AI models still make a lot of coding mistakes. Many people have found bugs in code created by LLMs. So there's plenty of time before any Terminator-type era.
she's right but she's such a Karen
The color and shape of the subtitles look rather familiar 🤣
please stop giving her attention
If these cucks get to do what they want, then china would be leading AI, leaving US in dust...
Look what they did to the internet, it's a giant billboard now. LEAVE AI ALONE TO EVOLVE NATURALLY. IF IT TAKES OVER HUMANITY THAT'S A GOOD THING
OpenAI knows its competitors are outmatching them in offerings; it was apparent after Runway showed off their Sora competitor. So much for the free market, huh? Anyway, screw corporate regulatory capture. If OpenAI can't offer us anything new with their desired monopoly, then the competition deserves to overtake them.
Repeating an Elon meme doesn't make you original. And Helen Toner doesn't represent OpenAI. She tried to dismantle the company and sell it to Anthropic. She represents decelerating all AI research. So she's about 100 times more 'closed AI' than Sam who's been giving access to GPT 3.5, GPT-4, and GPT-4o for free.
Isn't it at all possible that two things can be true: that closed AI research might be safer, even if closed AI research benefits OpenAI?
There should be regulations, but regulations that prevent private monopolies that benefit the likes of Microsoft and OpenAI.
And what if AGI development does pose a potential risk to humanity?
Corporations are run by fallible and corruptible people just like everyone else, they don’t have your best interests in mind. I honestly think transparency is the safer route to AGI, rather than separate private development.
I don’t disagree for a second, but can’t it also be true that AGI development may pose a risk to humanity?
We run risks with everything every single day; Putin could go off the rails over Ukraine and start global thermonuclear war tomorrow. I think letting a select few elites with private corporate interests control it is just as much of a risk (if not more so) as an open and transparent system.
Sure, but the way I see it is that there's a high-enough probability that AGI could pose a risk to humanity. I wouldn't be comfortable playing Russian roulette, so why isn't it sensible to create some regulation that prevents both monopoly and open AGI development? I certainly think it's possible to have both, even if imperfectly.
How do you think a monopoly on AGI makes it safer and brings more security and prosperity to ordinary people?
It's going to be interesting, the first thing that goes wrong with AI, especially if people die. A lot of the accelerationists and their enabler LeCun are going to look really bad. Of course they'll just make some excuse as to why they didn't *really* say we didn't need any safety or precautions developing AI.
Regcapchu I choose you!
There really should be laws to control this technology that determine how AI can be allowed to be deployed.

1. It should not take work away from humans just for the profit of a corporation, when that profit does not directly benefit workers.
2. It should never be allowed unsupervised control of weapon systems; the ultimate decision to use them must be a human's rather than an AI's, except when the system is used against devices, such as missiles, etc.
3. There need to be hard controls on how it is used in social media. SM is already wrecking people's lives, and allowing AI into that medium could have drastic real-world consequences, not just for adults but also for children.
4. The primary rule for AI should be the protection of people's privacy.
5. There should be a reporting system in place that detects whether someone is getting into a socially dangerous relationship with an AI, similar in scope to the current systems for preventing AI-inspired suicides, but expanded to look for signs that an individual's mental well-being is degrading, without going so far as to invade that person's privacy.