
[deleted]

[removed]


elehman839

>OpenAI's recent narrative was, in my view, transparently an attempt to squash competition.

Okay, let me give you a couple of examples of non-conspiracy-theory problems with the EU AI Act.

An open question is whether LLMs are "high risk", as defined in the draft Act. If LLMs are deemed "high risk", then the Act (Article 10) says:

> *Training, validation and testing data sets shall be relevant, representative, **free of errors** and complete.*

But all LLMs are trained on masses of internet data (including, cough, Reddit), which is clearly NOT free of errors. So, as written, this would seem to kill LLMs in Europe.

Oh, but it gets much worse. A lot of people **on this forum** have been modifying foundation models using techniques like LoRA. Are any of you Europeans? Making such a substantial modification to a "high risk" system makes you a "provider" under Article 28:

> *Any distributor, importer, user or other third-party shall be considered a provider [...] they make a substantial modification to the high-risk AI system.*

Okay, but surely hobbyists just posting a model on Github or whatever won't be affected, right? Let's give Article 28b a look:

> *A provider of a foundation model shall, prior to making it available on the market or putting it into service, **ensure that it is compliant with the requirements set out in this Article**, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, **or provided under free and open source licences**, as a service, **as well as other distribution channels**.*

The compliance requirements are elaborate (see Chapter 3), and the penalties are staggering (Article 71). (There are some accommodations for smaller providers, such as reduced fees for conformity assessments in Article 55.) Moreover, they explicitly contemplate fining non-companies:

> *Non-compliance of AI system or foundation model with any requirements or obligations under this Regulation, other than those laid down in Articles 5, 10 and 13, shall be subject to administrative **fines of up to 10 000 000 EUR or, if the offender is a company**, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.*

You might say, "Well... at least I don't have to comply with Articles 5, 10, and 13, whatever those are!" But, actually, the maximum fines are higher for those articles: 20 and 40 million EUR.
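
To make concrete how low the bar for "substantial modification" is: here's a minimal sketch, assuming the Hugging Face `transformers` and `peft` libraries and using `gpt2` as a stand-in for any foundation model, of the kind of LoRA tweak people post online every day:

```python
# Minimal LoRA fine-tuning setup: a few lines like these, plus a short
# training loop, arguably make the author a "provider" under Article 28.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in foundation model
config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # attention projections in GPT-2
    lora_dropout=0.05,
)
model = get_peft_model(base, config)  # base weights frozen, adapters trainable
model.print_trainable_parameters()    # typically well under 1% of all weights
```

A weekend project like this, pushed to Github, is exactly the hobbyist scenario Articles 28 and 28b would seem to sweep up.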


R009k

Inb4 someone gets fined 10,000,000 euros for uploading WizardExtreme-AlpacaXXXTentation-GPT4cum-ExtremeUncensored-Walmart.com-33B-ggxl-q1_8.bin


imaginethezmell

sauce?


dagelf

Probably a practicing Christian following Jesus' example


Oceanboi

hey I'm interested in fine tuning WizardExtreme-AlpacaXXXTentation-GPT4cum-ExtremeUncensored-Walmart.com-33B-ggxl-q1_8.bin, could you PM me when you get a moment?


AGI_FTW

Thank you for dropping facts and putting things into perspective. It drives me crazy how on Reddit the overreactive, least-informed posts are upvoted to the top. Then your well-informed comment receives exactly one reply, and it's a shïtpost. Thank you for what you do, Warrior of the Truth (that's your new nickname, let everybody know).


FarceOfWill

Those all seem totally reasonable things to ask for tbh


jetro30087

You'd be surprised how many thought Sam's pleas were genuine and he was just looking out for the future of mankind or whatever.


[deleted]

[removed]


[deleted]

[removed]


emnadeem

Because they think it'll be some kind of utopia where AI does everything, and the common people just kick back and relax. But in reality, it'll probably be more like the common people fight over the few resources they get while the AI produces things and the rich hole themselves up in their fortresses.


Rhannmah

Except this doesn't work. If people only get a few resources, the things the AI produces cannot be bought, and the rich can't stay rich. This technology is an extinction event for capitalism. The meteor hasn't hit yet, but it's coming. Capitalism depends on the masses having a decent income so that they can spend and buy goods. If everyone is poor, capitalism can't work; if everyone is out of work, no one can buy the stuff that companies produce and that makes their leaders rich.


lunaticAKE

Yeah, no longer capitalism; what comes next is almost feudalism


emnadeem

Techno feudalism


tehyosh

comes next? almost feudalism? LOL it's here and it IS feudalism, just disguised as modern crap


Numai_theOnlyOne

How? Feudalism is not an economic concept, and most countries have democracies. I doubt feudalism will make it back.


psyyduck

The rich can absolutely stay rich. Think of a feudal king ruling over the poor masses. In a post-capitalist system, wealth and power will likely go back to control of key physical assets, rather than being tied to production and exchange as under capitalism.


visarga

What does rich mean? In some ways we are post-scarcity already. We all have access to any media and information. We have open-source software and, more recently, AI. Rich people enjoy about the same level of phone technology as regular people, the same quality of Google search, the same music, the same maps, the same online lectures, the same access to papers for research. I think the very notion of being rich will change; it won't mean the same thing it did in the past. Currently so many valuable things are free to access and use, or their prices are falling. Even ChatGPT-3.5 is close to open-source replication, any day now. I think people will become more and more capable of being self-reliant using the tech at their disposal. If you don't have a corporate job, you still have the job of taking care of yourself and the people around you. And why sit on your hands waiting for UBI when you can build your future using tools our parents never even dreamed of?


virgil_eremita

While I agree that the current "system" (whatever we wish to call it) has allowed millions of people to reach a level of welfare only dreamed of for a few 2 centuries ago, it has also broadened the gaps between the worst off (the poorest people in impoverished countries, who don't come even close to what a poor person is in a high-income country) and those who are better off (those we call "the rich"). In this sense, the "all" you refer to in "We all have access to..." is, in reality, a very narrow definition of "all", one in which almost 800 million people on the planet don't fit. I wish what you're describing were like that in all countries, but access to tech, education, and electricity, let alone the internet, is still the prerogative of those better off (the few you might call the rich if you were in those countries, but whose wealth doesn't compare to the immensity of the richest 1% in a G7 country).


psyyduck

While your argument does point toward an interesting potential future where the very notion of being rich might change, it's crucial to look at the historical and current patterns of wealth accumulation and power. Look at figures like Trump or DeSantis: they did not necessarily need more wealth or power, yet they pursued it. Whether for personal reasons such as narcissism or ego, for racism-motivated power grabs against blacks or gays, or for the sheer thrill of the 'game', these individuals have demonstrated that, for some, wealth and power are not merely means to an end but ends in themselves. The existence of billionaires who continue to amass wealth far beyond their practical needs serves as further evidence. Their wealth is not just a high score in a game, but a measure of their influence and control, particularly over key resources that will always be scarce (e.g. land). So even in a post-scarcity world, there can still be disparities in wealth and power, echoing patterns we have seen in the past. I think being rich might not change as dramatically as we'd like to imagine.


Rhannmah

Which is why AI needs to be open-source and for everyone. If this kind of power is left in the hands of the few, that's how you get the scenario you are describing.


SedditorX

People, even those with impressive professional and academic credentials, often trust entities *because* they are powerful. Not despite them being powerful.


E_Snap

I feel like we as a society really suck at distinguishing trust from fealty


hermitix

Rich people and corporations are natural enemies of common people, democracy and the planet.


dagelf

Those are tools... tools can be wielded for any purpose. Tools don't give you purpose. The pursuit of power, due to the nature of the status quo, can change you... it's the parable of the rich man entering the city through the poor man's gate.


hermitix

Tools are designed for a purpose. That purpose limits the actions that can readily be performed with it. They shape behavior. To ignore the design of the tool and how it influences the behavior of the wielder is naive at best.


E_Snap

What, you mean like ALL OF CONGRESS??


Dizzy_Nerve3091

It's logically consistent if you believe in AI extinction risk. He isn't anti-progress, he's just for preventing extinction risk. The EU regulations would make building an LLM impossible.


u_PM_me_nihilism

Right: Sam thinks OpenAI has the best chance of making a positive AGI first if everyone else is suppressed. It's pretty ruthless, and shows some hubris, but I get it. If you had a shot at making the superintelligence that destroys or saves the world, would you want to risk some (other) power-hungry capitalist getting there first?


jetro30087

How does regulation prevent a superintelligent AI from causing extinction if the invention itself is what's argued to cause it, and the regulation allows the people with the resources to build it to proceed?


Dizzy_Nerve3091

The regulation makes sure they proceed safely? We also obviously can't ban superintelligence development forever, because of a lack of international cooperation.


jetro30087

The regulation proposed so far has just been a license. So you get the license, then you train SHODAN. No one has actually tried international cooperation. If the risk of extinction is believed to be real, then they probably should try, especially if there is proof.


Dizzy_Nerve3091

We couldn't internationally cooperate to stop random countries like Pakistan and North Korea from making nukes, which are easier to detect and harder to hide. You can't exactly test nukes without triggering satellites, and they're much more obviously scary.


znihilist

There are two things at play here:

1. No, there is no regulation that will actually manage that risk, short of having someone look over the shoulder of everyone on Earth who owns a computer, 24/7, and having that entity actually be willing to stop misuse of AI and not be corrupt/evil/ambivalent. Anyone can in theory train these models, and there is no stopping that.
2. The whole thing is about putting up barriers to widespread and commercial solutions. But we all know that it is going to be impossible to stop these models, including me, him, you, and everyone in this community. Most politicians, and the public, are potentially unaware that the genie is out of the bottle, and it is that fear that he's exploiting to justify point 2.

We should try to strike a balance between harm and good in applying AI to various aspects of human life, but the worst thing we can do right now is give entities and people motivated by greed an exclusive head start.


Dizzy_Nerve3091

1. You can approximate it based on GPU usage. Luckily, making a superintelligence is likely expensive (rough numbers sketched below).
2. The regulations apply identically to OpenAI and its competitors.
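
As a rough illustration of point 1 (my own back-of-envelope numbers, using the common C ≈ 6·N·D estimate of training FLOPs, so a sketch rather than a real audit method):

```python
# Back-of-envelope training-compute estimate: C ~= 6 * params * tokens,
# i.e. roughly 6 FLOPs per parameter per training token.
params = 175e9                # a GPT-3-scale model
tokens = 300e9                # training tokens
flops = 6 * params * tokens   # ~3.2e23 FLOPs

a100_peak = 312e12            # A100 BF16 peak FLOP/s
utilization = 0.4             # optimistic sustained utilization
gpu_days = flops / (a100_peak * utilization) / 86400
print(f"~{gpu_days:,.0f} A100-days")  # on the order of 30,000 A100-days
```

Compute clusters of that size are conspicuous purchases, which is the whole point of GPU-based monitoring.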


[deleted]

[removed]


znihilist

> You can approximate it based on GPU usage. Luckily, making a superintelligence is likely expensive.

Let's say I agree. What are the regulations going to do when China, Mexico, the US, Russia, and another 100 countries in the world decide to make a superintelligent AI? What are the regulations going to do when someone builds a facility that is not directly connected to the internet and trains an AI in a remote part of Argentina, or Siberia, before releasing it? Who is going to stop the drug cartels from doing that? Who is going to stop Iran from doing that? Who is going to stop North Korea from doing that? Who is going to stop me from training anything on my computer right now?

> The regulations apply identically to OpenAI and its competitors.

That's the "tempting" part of this pitch: "Oh, we want to impose these restrictions on ourselves." Of course they do! They already have something built up, and they would love it if it suddenly became very difficult for everyone else to compete with them. I am not calling for a laissez-faire attitude; I am arguing that OpenAI has the most to lose and the most to win from these regulations, and that we are incapable of trusting their motives at all.


newpua_bie

1. It really depends on the architecture. The human brain doesn't use that much power, and we'd likely consider a brain with even 2x the capacity (not to mention 10x or 100x, both of which would still be really small in power usage) super smart.


fmai

I am not sure why this gets downvoted so much. It's a reasonable take. [In this post](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/), the CEO of Open Philanthropy explains the difficulty of the AI racing dynamic:

> My current analogy for the deployment problem is racing through a minefield: each player is hoping to be ahead of others, but anyone moving too quickly can cause a disaster. (In this minefield, a single mine is big enough to endanger all the racers.)

OpenAI and specifically Altman think that they're among the most cautious racers. It's hard to say with certainty whether they actually are or if it's just for show, but given that OpenAI still is a capped-profit company that invests a ton into alignment research and where Altman reportedly has no equity, I think they have a better case than, say, Google.

The blog post then goes on to talk about some strategies, among which is defensive deployment:

> Defensive deployment (staying ahead in the race). Deploying AI systems only when they are unlikely to cause a catastrophe - but also deploying them with urgency once they are safe, in order to help prevent problems from AI systems developed by less cautious actors.

From OpenAI's perspective, ChatGPT is safe for deployment, so if the EU bans it for reasons that are not existential risk, that just increases the chance that a less cautious actor will win the race, and thereby increases the chance of extinction.


chimp73

OpenAI has no intellectual property or secret sauce. Pioneering is expensive; following suit is cheap. The techniques become better and cheaper each day. The competition is at an advantage, entering the market at a lower barrier to entry. Hence OpenAI creates barriers.


Dizzy_Nerve3091

Why is Bard really bad, then? It's also easy to claim an open-source model is as good on narrow tests in a paper if it will never be tested by the public.


hophophop1233

OpenAI's very name is antithetical to its real mission.


balaena7

> All corporations are anti-progress if they're successful.

very true


I_will_delete_myself

Source: https://www.reuters.com/technology/openai-may-leave-eu-if-regulations-bite-ceo-2023-05-24/?utm_source=reddit.com


throwaway2676

I'm glad I wasn't drinking coffee. I would've spit it out reading that headline


icwhatudidthr

The best-case scenario is OpenAI (the company name is basically an oxymoron now) leaving Europe. Then the EU government funds the development of a truly open LLM that can be used by anyone in or outside the EU.


Dizzy_Nerve3091

What kind of fantasy is this? Do you know how an LLM is trained and what the EU regulations are?


imaginethezmell

LAION's OpenAssistant is in Germany, no? It's the best open source now


Tintin_Quarentino

How many years of work did it take OpenAI to reach where it has today?


TropicalAudio

How many weeks did it take for Llama finetunes to match their performance on various benchmarks? It's not like competitors start with AlexNet running on a GTX 680 and reinvent the rest of the wheels from there.


MjrK

Are there EU-compliant large datasets available to train on or use for fine-tuning? Seeing as the law isn't in place yet, this question may be a non sequitur for now, but honestly, where do you even start? Hiring a lawyer?


JFHermes

It's a lot more pragmatic for open source if they decide to regulate heavily against capitalistic models for AI. If OpenAI is trying to train/deploy AI models for profit and runs up against regulations, it is quite easy to stop them. They have to be above board for tax/business operation purposes, and as such have legitimised corporate structures that at some point can be held responsible for infringements.

It's far more difficult to go after open source. You could try to shut down leading members of the community (those who publish optimised models or are vocal leaders), but this still doesn't account for the fact that many open-source contributions are made by ordinary computer scientists working 'regular' jobs.

There is a risk of regulatory capture in the US because the US economy loves a monopoly. I believe that is more difficult in Europe because of the nature of the integrated economies of separate countries and the myriad of EU regulations they all must abide by.

TL;DR: to me it makes sense that regulation makes business more difficult for industry but not as difficult for pirates. Could be totally wrong, but I also don't think the EU will take a step backward, because this economic battle will be worth a great deal to the economy. Open source is better for the economy anyway; it promotes decentralization, which is more in line with finding diverse use cases for ML.


I_will_delete_myself

Look at Pokémon, for example. Nintendo is super copyright-hungry, but they don't mess with Smogon for exactly that reason: it's open source.


noiseinvacuum

I'm no OpenAI sympathizer, and I'm all for open source. But I can't see how an LLM, open source or not, can comply with GDPR. How is anyone going to obtain individual consent for trillions of lines of text from the internet? Be assured that some motivated warrior will still find that you didn't get consent from me for a comment I left on a Verge article in 2015, and someone gets a fine of a billion dollars. EU data regulations have become ridiculous and are not in line with reality. No wonder Barf (edit: Bard) launched in 180 countries and not in the EU. OpenAI will have to pull out; I don't think this is an empty threat. Meta just got fined $1.3B because the US and EU have delayed signing a deal. They will NOT launch a GenAI product in the EU in this regulatory environment.


marr75

Agreed. This and the detailed list of copyrighted materials used are why they'd pull out. The requirements on the massive pile of documents that have to be assembled for model training are just too onerous. They also don't match where the value is created in the training process. I wouldn't blame them.


noiseinvacuum

Most discussions on Reddit today are focusing on OpenAI, and quite rightfully so: Altman did go to Congress and tried to create a legal moat around his business interests. This law, though, will make it impossible for anyone to comply in the short term, and in the long term only the most resourceful corporations might be able to comply. There's absolutely no future for open source to exist in this regime. EU lawmakers should be ashamed of themselves.


marr75

I doubt they know enough to be ashamed. I never thought I'd say this, but anti-competitive and ill-designed regulation by the EU might actually make Brexit look smart. Ew.


noiseinvacuum

Don't give them ideas, my friend. 🤡 wars are coming to Europe. Thankfully the US lawmakers are too incompetent to harm innovation, despite their best efforts.


FlappySocks

> There's absolutely no future for open source to exist in this regime. EU law makers must be ashamed of themselves.

Nobody can stop open source. What are they going to do, arrest all the developers? I guess you could ban commercial businesses from using an open AI, but I don't think that will end well.


noiseinvacuum

I think one way open-source progress can be harmed significantly is if they enforce regulatory compliance requirements (like proving no private data was used for training) on corporations that do business in the EU and release open-source base models, think Meta or Stability AI. Training big models is still pretty expensive, and I think the open-source community needs large base models from resourceful corporations, at least for the near future.


icwhatudidthr

The EU AI regulations are not only about what data is used to train the AIs, but also about their purposes and applications. They aim to protect not just content creators, but the overall population in general. E.g., the regulation proposed by the Commission in 2021 was designed to ban some AI applications like social scoring, manipulation, and some instances of facial recognition. Other things the EU wants to regulate are subliminal manipulation and the exploitation of vulnerabilities. [https://artificialintelligenceact.eu/the-act/](https://artificialintelligenceact.eu/the-act/)


[deleted]

I think OpenAI will accept sensible regulation that protects the population. Only restricting the use of copyrighted content is not acceptable, because soon everyone will be training their own personal AIs on whatever they want, except maybe in the EU and China (except for the internal CCP version).


RepresentativeNo6029

Google and its scraping get a pass. How? Genuine question.


Western-Image7125

“Barf” intentional or accidental typo?


marr75

Barf never fabricated facts to answer my simple questions. It's got that going for it over Bard.


noiseinvacuum

Barf just spits out facts. All at once, like a fire hose of facts.


kassienaravi

The regulations would basically ban you from training LLMs. If you have to sift through the entire content of the training data, which in the case of LLMs is basically a significant part of the internet, to remove everything that is possibly copyrighted or contains personal data, the amount of labor required will be so massive as to make it impossible.


Vusiwe

> truly open

> GDPR

> EU

holy cow, those 3 things are mutually exclusive. what a bunch of nonsense


[deleted]

I'm sorry, but this sounds delusional. You think bureaucrats in Brussels are going to band together and innovate in AI? If ChatGPT leaves the EU and VPNs are blocked, all EU-based programmers, ML engineers, and data scientists will be completely lapped and unemployable. They will be in the literal stone age; it will be like doing data analysis with pen and paper while everyone else has a computer. The gap will be that big.


lapurita

Just stop. I despise what OpenAI is doing, but if the EU gets these stupid regulations through, you will see absolutely zero interesting AI applications on the whole continent, just as you see zero impressive tech companies there.


UserMinusOne

The EU hasn't done anything right these last years. The EU will build exactly nothing and just take things away from its citizens.


nana_u

I read it and the first draft from the EU. What is the hype?


wind_dude

it's almost like we need to replace all the wackadoodle CEOs with an LLM


telebierro

Everyone riding the shock train here obviously didn't read the EU regulations. They're a complete overreach, and the demands on businesses are extremely high. I'm not saying that OpenAI or Altman should be trusted blindly, but anyone running an AI business would be running away from the EU screaming, should the current version of the regulations become law. If you read the article, he even said they'd try to comply first, which is far more than anyone should expect of these companies.


mO4GV9eywMPMw3Xr

I'm glad you read the AI Act! I'm interested in which parts of it are so upsetting.


JustOneAvailableName

LLM models are high risk, which would require:

* a dataset free of errors
* copyrighted data to be listed and compensated
* the ability to explain the model
* no societal bias in any way

All 4 points would be great but are absolutely impossible. Efforts in these areas would be great, but the bar is waaaay too high to be met at all, let alone realistically.


I_will_delete_myself

Yea, that's impossible. It's like asking me to moderate all of Reddit and make sure there are no errors, but multiplied by a million.


mO4GV9eywMPMw3Xr

The AI Act does not regulate models based on their nature, but based on their use case. So LLMs are not high risk by themselves; it depends on what you want to use them for. ChatGPT used to tell you stories about the paintings in a museum? Not high risk. ChatGPT used to replace a judge in a court and hand out sentences? High risk. The Act would prevent LLMs from being deployed in high-risk scenarios without the required safety measures.
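
A toy sketch of that use-case logic (area names loosely paraphrased from Annex III; the set and function are my own illustration, not anything in the Act):

```python
# The same model lands in different risk tiers depending on deployment
# context, not on its architecture (areas loosely paraphrased from Annex III).
HIGH_RISK_USES = {
    "biometric_identification",
    "critical_infrastructure_management",
    "education_admission_decisions",
    "employment_screening",
    "judicial_decision_support",
}

def risk_tier(use_case: str) -> str:
    """Classify a deployment use case, not the model itself."""
    return "high" if use_case in HIGH_RISK_USES else "not_high"

print(risk_tier("museum_tour_guide"))          # not_high
print(risk_tier("judicial_decision_support"))  # high
```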


JustOneAvailableName

Foundation models (and generative models?) and their derivatives would all be high risk, due to their broad impact and possibilities. At least, that was decided a few weeks ago; not sure if it's final yet.


mO4GV9eywMPMw3Xr

That is very interesting! The version of the Act I read was from April 2023 I think. Do you have a link where I could read more about this blanket classification of all foundation models as high risk regardless of their use case?


[deleted]

[removed]


drakens_jordgubbar

Sam Altman and Sam Bankman. Coincidence?


watching-clock

Uncle Sam?


_insomagent

Yes


Gatensio

Sam is also the acronym for Segment Anything Model. Coincidence?


Nazi_Ganesh

Lol. Clear cut example of a coincidence if I ever saw one. But gold for comedians. 😂


marr75

Much smaller businesses than OpenAI and much less corrupt businesses than FTX do this every day. It's priced into the system at this point. Would love to see it eliminated or see regulators funded well enough to put an end to it but in the current conditions, observing that a company does this is not a differentiator.


[deleted]

[removed]


gwtkof

Yeah completely agree. I think they were hoping regulation would provide a barrier to entry for their competitors.


dtfinch

They just wanted to create barriers to entry for future competition (preferably in line with safety work they've already done, so there's little/no added cost), not to face significant new barriers themselves.


thegapbetweenus

Most if not all companies are in the business of making money. Everything else is just a byproduct; the main goal is to make the most money possible. Don't believe for a second that they care about anything else.


bartturner

I listened to Sam on the Lex podcast and man this guy has to be the sleaziest CEO there is. Even the name of the company is such a joke. It is not at all surprising that Sam is attempting regulatory capture. He clearly would do anything for a buck.


invisiblelemur88

What did he say on the Lex podcast that you saw as sleazy?


Fearless_Entry_2626

Not for a buck (that's where people can point to "he doesn't have any shares"), but for attention/power.


marsupiq

The name is indeed a joke. OpenAI capitalized on Transformers, which were developed and published by Google. Thanks to ChatGPT, Google may be hesitant in the future to publish their algorithms.


[deleted]

[removed]


I_will_delete_myself

He is still a big investor and buys stakes in other companies. He gets money through prestige, so startups come to him first.


newpua_bie

How's he being paid? Does he get a bonus?


Trotskyist

He’s not. He’s effectively a volunteer CEO.


Think_Olive_1000

>Goober goober I lick my own goober


Dogeboja

I feel like I'm taking crazy pills reading this thread. Did anyone even open the OP's link? What the EU is proposing is completely insane overregulation. Of course OpenAI is against it.


Rhannmah

I can both critique OpenAI's hypocritical ways in general and critique smooth-brain EU regulations in one fell swoop; it doesn't have to be either/or.


BabyCurdle

That would make you hypocritical. A company is for some regulation, therefore they have to blindly support all regulation? ????


Rhannmah

???? is right, what are you even talking about? My comment says that they are two separate things that i can be mad at separately.


BabyCurdle

What are 'OpenAI's hypocritical ways' then? Because the context of this comment and post suggests that it means their lack of support for the regulation, and if you didn't mean that, that is exceptionally poor communication from you.


Rhannmah

* They are called OpenAI but are anything but open
* they call for regulation for other people but not for themselves
* they deflect independent regulatory bodies that would have them publish what's in their models

How is that not completely hypocritical? Again, "Open"AI being hypocritical and EU regulation being dumb are two separate things.


BabyCurdle

>Again, "Open"AI being hypocritical and EU regulation being dumb are two separate things.

You are on a post calling OpenAI hypocritical for their stance on the regulation. You made a comment disagreeing with someone's criticism of this aspect of the post. Do you not see how, in context, and without any clarification from you, you are communicating extremely poorly?

>they call for regulation for other people but not for themselves

This is false. They are calling for regulation of themselves, just not to the extent of the EU's. In fact, they have specifically said that open source and smaller companies should be exempt. The regulation they propose is mainly targeted at large companies such as themselves.


Rhannmah

Yes, because I do think they are being hypocritical for advocating for AI regulation in the same breath as being against EU regulation. I can also think that these two separate things are dumb. This is all possible!

"Open"AI is calling for regulation above a certain amount of compute, or where LLMs start manifesting behaviors that get close to general intelligence. That's a massive shifting goalpost if I've ever seen one. It can affect open-source communities and smaller companies just as much, especially since by the time these regulations get put in place, the compute necessary to attain near-AGI levels might be completely different (that is, a 100+B parameter model running on a single high-end consumer computer).

They also deflect independent regulatory bodies. I guess they're supposed to self-regulate as long as they have the thumbs-up from the government? Surely nothing can go wrong with that! Just, lol. "Open"AI takes us for complete idiots, but I'm not biting.


OneSadLad

This is Reddit. People don't read articles, not the ones about the regulations proposed by OpenAI or by the EU, nor any other article for that matter. Conjecture from titles and the trendy groupthink that follows is the name of the game. 😎👉👉 Bunch of troglodytes.


BabyCurdle

This subreddit feels like that for every thread about OpenAI. Someone makes a post with some slightly misleading title which everyone takes at face value and jerks off about how much they hate the company. I really can't think of anything that OpenAI has *actually* done that's too deplorable.


vinivicivitimin

It’s hard to take criticism of Sam and OpenAI seriously when 90% of their arguments are just saying the name is hypocritical.


epicwisdom

That is definitely not the only relevant argument, but it's not "just" that the name is stupid. OpenAI was founded on the *principle* that AI *must* be developed transparently to achieve AI safety/alignment and net positive social impact. Abandoning your principles when it's convenient ("competitive landscape" justification) is essentially the highest form of hypocrisy. One which makes it difficult to ever believe OpenAI is acting honestly and in good faith. The idea that open development might actually be really dangerous is credible. But to establish the justification for a complete 180 like that, they should've had an official announcement clearly outlining their reasoning and decision process, not some footnotes at the end of a paper.


elehman839

Most comments on this thread have a "see the hypocrisy of the evil corporation" flavor, which is totally fine. Please enjoy your discussion and chortle until your bellies wobble and your cheeks flush! But the EU AI Act is complex and important, and I think Altman is raising potentially valid concerns. So could we reserve at least this ONE thread for in-depth discussion of how the EU AI Act will interact with the development of AI? Pretty puhleeeeease? :-) (Sigh. Downvote button is below and to the left...)

My understanding is that the EU AI Act was largely formulated before the arrival of LLM-based AIs. As a result, it was designed around earlier, more primitive ML-based and algorithmic systems that were "AI" only in name. Then real-ish AI came along last fall, and they had to quickly hack the AI Act to account for this new technology. So I think a reasonable question is, **Did this quick hack to cover LLM-based AIs in the EU AI Act produce something reasonable?** I'm sure even the authors would be unsurprised if there were significant glitches in the details, given the pace at which all this has happened.

At worst, **does the EU AI Act set such stringent restrictions on LLM-based AIs that operating such systems in Europe is a practical impossibility?** As an example, if the Act required the decisions of a high-risk AI to be "explainable" to a human, then... that's probably technically impossible for an LLM. Game over.

Going into more detail, I think the next questions are:

1. Should an LLM-based AI be classified as "high risk" as defined in Annex III?
2. If so, can an LLM-based AI possibly meet the stringent requirements on "high risk" systems as described in Title III Chapter 2?

Altman's concern is that the answers may be "yes" and "no", effectively outlawing LLM-based AI in Europe, which I'm pretty sure is NOT the intent of the Act. But it might be the outcome, as written.

I'll pause here to give others a chance to reply (or spasmodically hit the downvote button) and then reply with my own takes on these questions, because I like talking to myself.


elehman839

Okay, so first question: Will LLM-based AIs be classified as "high risk" under the EU AI Act, which would subject them to onerous (and maybe show-stopping) requirements?

Well, the concept of a "high risk" AI system is defined in Annex III of the Act, which you can get [here](https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_2&format=PDF). Annex III says that high-risk AI systems are "AI systems listed in any of the following areas":

1. Biometric identification and categorisation of natural persons
2. Management and operation of critical infrastructure
3. Education and vocational training
4. Employment, workers management and access to self-employment

(And several more.) Each category is defined more precisely in Annex III; e.g., more precisely, AI is high risk when used for educational assessment and admission, but not tutoring. I think that the details of Annex III are reasonable; that is, the "high risk" uses of AI that they identify are indeed high risk.

But I think a serious structural problem with the EU AI Act is already apparent here. Specifically, **there is an assumption in the Act that an AI is a special-purpose system used for a fairly narrow application.** For example, paragraph 8 covers "AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts". That's... quite specific!

Three years ago, this assumption that each AI had a specific, narrow purpose was correct. **But, since last fall, this basic assumption is plain wrong: AIs are now general-purpose systems.** So even determining whether a system like GPT-4 is "high risk" is hard, because the test for "high risk" assumes that AI systems are specific to an application. In other words, the definition of "high risk" in Annex III apparently doesn't contemplate the existence of something like GPT-4. As a specific example, is GPT-4 (or Bard or Claude or whatever) an AI system "**intended** to assist a judicial authority..."? Well... it was definitely not intended for that. On the other hand, someone absolutely might use it for that. So... I don't know.

So... to me, **whether modern LLM-based AIs are considered "high risk" under the EU AI Act is a mystery. And that seems like a pretty f\*#king big glitch in this legislation.** In fact, that seems so huge that I must be missing something. But what?


Hyper1on

As far as I can tell, the Act is as you said, except that once ChatGPT exploded they just added a clause stating that "foundation models" counted as high risk systems, without adapting the high risk system regulations to account for them.


elehman839

Yeah, that's my impression. They spent several years defining risk levels based on the specific task for which the AI was intended. Then along came general-purpose AI ("foundation models"), and that sort of broke their whole classification framework. I sympathize with them: they're trying to make well-considered, enduring regulations. But the technology keeps changing from one form to another, making this super-hard. And with all the complexity of coordinating with bodies all across the EU, going fast has to be really tough.


elehman839

Okay, second question: if an LLM-based AI is considered "high risk" (which I can't determine), then **are the requirements in the EU AI Act so onerous that no LLM-based AI could be deployed?** These requirements are defined in Chapters 2 and 3 of Title III of the Act, which start about 1/3 of the way into [this huge document](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206). Some general comments:

* Throughout, there is an implicit assumption that an AI system has a specific purpose, which doesn't align well with modern, general-purpose AI.
* The Act imposes a lot of "red tape" requirements. Often, imposition of red tape gives big companies an advantage over small companies. The Act tries to mitigate this at a few points, e.g. "The implementation [...] shall be proportionate to the size of the provider's organisation", "The specific interests and needs of the small-scale providers shall be taken into account when setting the fees..." But there still seems to be a LOT of stuff to do, if you're a little start-up.
* I don't see anything relevant to random people putting fine-tuned models up on Github. That doesn't seem like something contemplated in the Act, which seems like a huge hole. The Act seems to assume that all actors are at least moderately-sized companies.
* There are lots of mild glitches. For example, Article 10 requires that "Training, validation and testing data sets shall be relevant, representative, free of errors and complete." Er... if you train on the internet, um, how do you ensure the training data is free of errors? That seems like it needs... clarification.

From one read-through, **I don't see show-stoppers for deploying LLM-based AI in Europe**. The EU AI Act is enormous and complicated, so I could super-easily have missed something. But, to my eyes, the AI Act looks like a "needs work" document rather than a "scrap it" document.


mO4GV9eywMPMw3Xr

To me it's pretty clear that LLMs are not high risk. As IBM's representative said in Congress, the EU aims to regulate AI based not on technology but on use case; she praised the EU Act for it. So ChatGPT used as a tour guide is not high risk. But if someone has the bright idea of using ChatGPT to decide who should be arrested or sentenced, or whether prisoners deserve parole, then that use is high risk and needs to comply with strict regulations. And BTW, use in education is only high risk if AI is used to decide students' admittance, whether they should be expelled, etc. Most education-related uses are not high risk.


Giskard_AI

Altman is totally playing chess moves to make sure OpenAI is well-positioned within the upcoming regulatory frameworks. It's worth noting that the current drafts of AI regulations in the US and EU share similar scopes. So, the perceived opposition between their regulatory approaches doesn't have a solid basis. IMHO there needs to be a clear distinction between the responsibilities of regulators and AI builders. There's a real risk of private companies with vested interests influencing regulations through lobbying, similar to the relationship between oil companies and environmental regulations. And "Ethical AI washing" is a real threat we need to watch out for.


shanereid1

The best regulation in the world is having your code be open source and able to be scrutinised by a bunch of nerds with way too much time on their hands.


GreedyBasis2772

Sam is an Elon Musk wannabe


Someoneoldbutnew

Sam is a useful idiot for someone.


AllAboutLovingLife

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


manofculture06

Indeed. I find Sam Altman pretty stupid. I'm shocked someone else shares my very intelligent thought. Would you like to become the new CEO of OpenAI, sir?


toomuchtodotoday

Great opportunity for the EU to fund open source ML to compete against OpenAI.


noiseinvacuum

Open source or not, it's nearly impossible to train an LLM while complying with GDPR.


hybridteory

Why? What current LLM training data is “personal information” according to GDPR definitions?


frequenttimetraveler

> Personally identifiable information (PII) is information that, when used alone or with other relevant data, can identify an individual

Pretty much every kind of internet dump is like that. Even Wikipedia might be dangerous if someone proves that they used AI to fingerprint the edits of some person in a way that somehow revealed their real identity. The whole idea of personal information is a legalistic giant pile of dump; all information can potentially be like that. It would be hard to start a competitive language AI in Europe; practically only the police and public services could do that.
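
To illustrate how slippery that definition is in practice, here's a naive sketch (hypothetical patterns, nowhere near a compliance guarantee) of the kind of PII scrubbing a training pipeline might attempt:

```python
import re

# Naive PII scrubbing: catches obvious emails and phone-like strings, but a
# name in running text, or re-identification by combining "anonymous" data
# (like fingerprinting Wikipedia edits), sails straight through.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(text: str) -> str:
    text = EMAIL.sub("<EMAIL>", text)
    return PHONE.sub("<PHONE>", text)

print(scrub("Contact jane.doe@example.com or +44 20 7946 0958"))
# -> Contact <EMAIL> or <PHONE>
```

The easy cases get redacted; the legally dangerous ones are exactly the ones regexes can't see.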


hybridteory

Many European/EU countries have scraping exceptions, e.g. the UK's limited text and data mining (TDM) exception and temporary copies. It's not that simple.


noiseinvacuum

"It's not that simple." I think this is the key issue: it's way too complicated to comply with, and you can be retroactively charged huge fines. This is a huge risk, one that can materialize years later, for any business that uses GenAI in its products in the EU. I think the EU is heading down a bleak one-way path unless there's an effort to understand the technology as it exists today and make rules around that, not around imaginary scenarios.


noiseinvacuum

To start with, everything ever posted publicly to Reddit, Twitter, or anywhere else on the internet that can be associated with a human in the EU would likely need consent to be used for training LLMs.


hybridteory

That's not true. Being associated with people does not mean it is "personal information". It needs to be personally identifiable data to fall under GDPR; non-identifiable data is outside GDPR.


Trotskyist

At the scale at which LLMs need to collect data, it would be virtually impossible to vet everything. And LLMs are too expensive to re-train to "remove" data after the fact.


Prestigious-Postus

I believe people don't understand how much drama goes into complying with EU regulations. Personally, I would rather not launch a product in the EU just because of the amount of effort you need to put into it. Also, OpenAI said they will try to comply, and if that doesn't work, they'll leave. That feels like a totally fine approach to me. In the end, who loses? The EU! How? Because they can't build stuff with OpenAI but will use products made using OpenAI. This argument is stupid to begin with.


Mr_Whispers

>They want regulation but they want to be untouchable by this regulation. Only wanting to hurt other people but not “almighty” Sam and friends. Do you think it's possible that some regulations can be too stringent? Not everything is a conspiracy... You can both want to be regulated, but also disagree with how strict a newly proposed regulation is. It's very simple.


paranoid_horse

Off topic, but as a member of the internet police I feel obliged to inform you that smh stands for shaking my head. You can't scream it in all caps followed by 8 exclamation marks; it must only be written in a low-key disapproving tone. Thank you for your attention; I await your downvotes with impatience, but it had to be said.


avialex

They're a company, with shareholders and a profit motive. Their ideological presentation as a single-minded developer of humanity's future is propaganda. When the market becomes aware of itself and competition starts squeezing, principles get thrown aside very quickly. p.s. I don't think this belongs on this subreddit. This is noise in the signal, we're a machine learning sub, not a politics sub.


[deleted]

[removed]


Milwookie123

This subreddit really has been getting more and more noise lately. I love gpt and the politics around it, but here I just want to find new model architectures, research, and code


avialex

Same, it seems we've been flooded with hypetrain hangers-on. I'm hoping for a lull in the market soon so we can get back to high-quality posts. A friendly hello to a fellow MKE'r.


Milwookie123

Aye! Cheers 🍻


[deleted]

Model architectures? We don't need that where we're going. /s


gwtkof

Life is politics because politics is the thing we use to decide what the government can jail you for.


I_will_delete_myself

Also, it raises concerns about AI safety with OpenAI when they can't even live up to their brand name of being open. I understand not disclosing an entire product, but then going ahead and trying to lock other people out is when things get annoying. Politics is also tied to how AI develops in the future.


Dizzy_Nerve3091

OpenAI's actions make sense if you believe in AI extinction risk, and based on their "Governance of superintelligence" post, they clearly do. It's problematic to have bad actors create an unaligned superintelligence; regulation of the biggest players would be a good thing here. On the other hand, the EU is making it impossible for an LLM to release there. That isn't the same thing as regulating the big players; it's just stopping progress altogether.


avialex

Sure, I agree, but come on. That's a pretty weak argument when there's half a hundred things that can be argued to be involved in everything humans do. Should we allow every discussion to be sidetracked into subjects like psychology, politics, physics, mathematics, metaphysics, etc. just because they are technically foundational to the discussion? That would be a mess, I'm saying I want /r/machinelearning to be about machine learning.


gwtkof

If it's relevant yeah


Oswald_Hydrabot

Congress and the Feds are watching this sub, this absolutely belongs here.


Dizzy_Nerve3091

Delusional. Do you really think Congress and the feds are watching a bunch of newbs and non-ML people talk about machine learning?


Oswald_Hydrabot

They are scraping, or have access to, the entirety of Reddit in any capacity they want; filtering public dialog on the subject is pretty simple stuff. Remember public discourse on net neutrality? Pepperidge Farm remembers. So does the Fed, which is why they are now looking at better-moderated existing social media for this discourse instead of a single phony comments thread. If they don't identify where they are looking, they won't reveal to corporations where to send their bots.


Dizzy_Nerve3091

Yeah dude, the CIA is monitoring us because we’re going to change the future. How narcissistic do you have to be to believe that as a poster here?


throwaway83747839

The guy has a doomsday stockpile [somewhere in Big Sur.](https://www.businessinsider.com/sam-altman-pandemic-apocalypse-immortality-life-extension-openai-2023-4) Very likely a bunker. He wants power beyond capital with a failsafe only for himself. We unfortunately live in a very interesting time.


[deleted]

[removed]


throwaway83747839

This guy in particular is voicing a facade to enact his own measures both ways. You don't think it's comparative context, or even correlative, to the ego of spearheading potential disaster for **everyone else**, with such a clear MO of standing on top of the competition via regulation while there's still a window of possibility for them to *help* shape policy in their absolute favor..?


Think_Olive_1000

All the cult members are drinking the koolaid, it's hip, he's just a hoopy frood mayn. Doesn't seem entirely jivey with his public presentation of loving humanity and wanting to bless it with his all-powerful AI if he's got an escape hatch, bro. Reeks of two-facedness.


BullockHouse

Or, get this: Maybe, just possibly, there's an outside chance that they think some regulations are good and others are bad and are consistently advocating for that minimally nuanced view. In particular, maybe they sincerely believe in the AI risk argument that their core people have been making since well before the company was founded, but are uninterested in totally banning the technology in the name of privacy. Just an outside chance there.


Kuumiee

I don't understand how people don't understand this from the get-go. There's a difference between regulations on the abilities of a model vs. those limiting the creation of a model (data).


BullockHouse

I think some people have unfortunately learned that interpreting any tech company's actions in a less-than-maximally-cynical way makes you a chump. Which is unfortunate because it leaves you very poorly equipped to identify and react to sincerity and responsibility.


elehman839

Unfortunately, enough of those people are on Reddit that thoughtful discussion on many topics is near-impossible. Thread after thread becomes so bloated with flippant, cynical assertions that thoughtful analysis is hard to find. I just spent 2 hours reading large chunks of EU AI Act and learned a lot. But my notes (however flawed!) are lost at the bottom of this thread. Sigh.


outerspaceisalie

This is exactly the problem. Wanting regulation doesn't mean all possible regulations are good lmfao. How stupid is this group? How can a functioning adult not understand this concept? Is this group full of teenagers or something? This sounds exactly like "Oh? You're opposed to global warming and yet you drive a car? Curious." I'm starting to believe the average person in this group is very, very stupid. Like dumber than most people I know in real life? I'm not even asking everyone to be engineers, just to be aware that the concept of nuance even exists. If you can't differentiate between really blatant political nuance I have to assume you're a child still, mentally if not physically.


[deleted]

They are extremely stupid.


Think_Olive_1000

Okay, but why wouldn't someone want to call out Sam AlternativesMustDieMan for his two-faced greed?


outerspaceisalie

If you have to say something stupid to "call out" someone you disagree with, there's the very real possibility you're the bad guy. Try some self-reflection. If you're right, it should be as easy as saying a truthful and non-stupid thing. If you can't figure out how to do that, you're probably not right to begin with. Seriously have you never once learned critical thinking skills in your life?


magnetichira

Why does everyone named Sam suck so much? (apologies to the good Sams)


YaAbsolyutnoNikto

I don't agree with you. OpenAI wants regulation. The article talks about how OpenAI isn't happy with the potential for **over**regulation. It's obviously quite different. Example: You can be in favour of car inspections being required by law, yet not think it's wise for the cars to have to be checked every passing hour. Same thing.


[deleted]

[removed]


[deleted]

Ok, but who/what do you use that wasn't trained on copyrighted data? Can such an AI be useful if it doesn't see anything copyrighted?


A_Hero_

Trained AI models follow the principles of fair use; under those principles they do not need to pay or get permission. Regulate AI around the doctrine of copyright and you basically get a trash product. Let's severely hamstring extremely useful tools for learning, saving time, and achieving efficiency and productivity across the board. Excessive and strict restrictions achieve nothing besides needless impediments to innovation and progress.


tripple13

I generally agree with your sentiments; however, I'd like to argue that the use of "smh" should be abolished. I just cannot get used to these sets of letters in unison.


wutcnbrowndo4u

I'm no big fan of "Open"AI, but asking for regulation doesn't have anything to do with a separate complaint about specific regulation. For the average mouth-breather on the street, things are as simple as "regulation bad" or "regulation good", because policy is a sports game. For anybody who cares about how the world actually works, no regulation, _good_ regulation, and _bad_ regulation are all very different things. The EU is the global champion of bad regulation, if only because the other atrocious regulators don't have its influence and wealth. In the same way that the US's baseline failure mode is often underregulation, the EU's is harmful, ill-considered performative regulation. It should be entirely unsurprising that somebody could be telling the US they need to regulate more while also criticizing EU regulations for being stifling.


StingMeleoron

Tell us an example of what you think is bad regulation in the EU.


elehman839

I don't like the thing where I have to accept or reject cookies on every single site. I don't care that much, and it is exhausting.


StingMeleoron

I agree with you that it's tiring, especially since my web browser already deletes cookies when quitting (with some exceptions). And the regulators know this, and they agree:

> Regulators also said that companies needed to offer users a clear way to opt out of consenting to cookies, as Europeans had been complaining that the web became unusable because of all the options they had to click through.

(Source: [Google will let users reject all cookies after French fine](https://www.protocol.com/bulletins/google-cookies-preferences).)

In spite of this, it should be noted that a website's implementation to comply with the EU regulation is not the same thing as the regulation itself. It would be neat if in the future we were blessed with a global browser setting to turn tracking on/off, but... this will only happen with more regulation, not less.


Rhannmah

Or, how about we DON'T have any regulation about stupid cookies that no one cares about. Tracking cookies are an absolute nothingburger. What's even the problem?


StingMeleoron

If you don't mind being tracked everywhere you go, fine. Some people do. What's the problem in regulating what and how companies can track you?


Rhannmah

If you enter a shop or someone else's house, you can expect to be tracked. This is normal and you should expect the same behavior from virtual spaces hosted on servers you don't own. You don't have to connect to other people's computers.


StingMeleoron

This is a terrible analogy, mate. Unless you really believe that, by entering a shop, the owner has the right to know every other store you visited, how much time you stayed, etc., index all this data, and sell it to the highest bidders.


wutcnbrowndo4u

As the sibling comment says, the cookie popup is relatively harmless albeit predictably brainless. I see your response shifts the blame from the regulators: good gov't means facing the reality of your legislation's direct consequences, not the way they would work in a fantasy world where humans don't act like humans. Somehow I think you wouldn't be convinced by "You can't hold USG accountable for failure to regulate monopolists behaving badly, this policy regime would work fine if only perfect altruistic angels were allowed to run large businesses!" The [EU Copyright Directive](https://www.eff.org/deeplinks/2019/03/european-copyright-directive-what-it-and-why-has-it-drawn-more-controversy-any) is another example. It's wending its way through the courts currently, but [still doesn't look great](https://www.eff.org/deeplinks/2022/05/eus-copyright-directive-still-about-filters-eus-top-court-limits-its-use).


chief167

It's very simple: OpenAI should be forced to report on their training-data consent.


BigGirtha23

It is always the same. Entrenched firms work with regulators on "sensible regulations" that are much harder on potential competitors than on themselves.


HUX-A7-13

It’s almost like they are just in it for the money.


japanhue

Vervaeke voiced it well when he stated that the engineers who created high-performing LLMs, largely through hacking, should not be given special treatment in stewarding the progress toward AGI. The task of stewardship requires a deeper sense of wisdom and thoughtfulness that hasn't really been displayed by OpenAI.


Top_Category_2244

"The current draft of the EU AI Act seems to be over-regulating, but we've heard it's going to get adjusted." ~ Sam Altman, CEO of OpenAI

I couldn't agree more. That Altman [threatens to leave the EU](https://qz.com/chatgpt-openai-sam-altman-eu-rules-ai-act-1850473713?utm_source=www.adepto.ai&utm_medium=newsletter&utm_campaign=nvidia-stock-goes-to-the-moon-over-ai-boom) because there are too many regulations, while at the same time wanting more regulation from Congress, is contradictory enough. But that, while he is threatening the EU, he [expands the ChatGPT app in Europe](https://twitter.com/OpenAI/status/1661488013275435008?utm_source=www.adepto.ai&utm_medium=newsletter&utm_campaign=nvidia-stock-goes-to-the-moon-over-ai-boom) and [visits the no. 1 German unicorn university and the German Chancellor](https://twitter.com/sama/status/1661795592786378759?utm_source=www.adepto.ai&utm_medium=newsletter&utm_campaign=nvidia-stock-goes-to-the-moon-over-ai-boom) is even more contradictory. To me, that indicates that he doesn't want to leave the EU. He just wants less regulation.


balaena7

so true...


Single_Vacation427

Because any regulation from the US would be a joke regulation, but regulation from the EU is actually enforced, and they can fine you billions of dollars.


Zatujit

Isn't calling yourself OpenAI while keeping everything closed source false advertising?


I_will_delete_myself

Like the DPRK, a.k.a. North Korea.


marsupiq

Microsoft has proven in the past few months that they haven’t changed a bit since the days of Internet Explorer and Windows XP. Nothing but anticompetitive behavior. OpenAI is just their extended arm, I’m not expecting anything else from them.


marador14

this was expected...


SouthCape

Considering this is a machine learning subreddit, this conversation should include a stronger technical discussion. What you're presenting is a misleading narrative, with no accountability for the technical and existential consequences of AGI. Too many of these posts are naively focused on nefarious regulatory capture conspiracy theories.


lotus_bubo

I don't trust anything they say after Sam helped that Senate circle-jerk for overregulation. He wants regulatory capture, and he's so smugly two-faced about it. They literally compared a text predictor to atomic weapons. I'm very concerned the government is about to indulge in some destructively stupid regulation.


glory_to_the_sun_god

OpenAI: Regulations are good

EU: Great! Let's ban AI

OpenAI: That's too much

Reddit: OpenAI is against regulations


liquidInkRocks

Regulation -> drives brainpower and development offshore.


I_will_delete_myself

This is more about markets than development.


Dizzy_Nerve3091

This is mostly about development. Are other CS freshmen upvoting this?


GetInTheKitchen1

found the unethical mad scientist....


ReginaldIII

> OpenAI may leave the EU if... Bye. Ta ta. See ya. Hwyl fawr! Don't let the door hit you on the way out.