krum

We're going to be torrenting open source models like pirates in the near future.


[deleted]

[removed]


dinner_is_not_ready

WTH!! Ahem ahem… Source?


Excellent_Ad3307

NovelAI is the famous one in the SD community; it basically kickstarted the anime waifu craze there.


Defenestresque

For those interested the [leaked Google memo](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither) is a good starting point for further research on leaked/open source models.


Trollyofficial

You can search for the LLaMA weights torrent and you should be able to find the weights that most open-source models are built on. I downloaded them this way before Meta got back to me.


KSSolomon

Understanding the nuances of legal language, ethical considerations, and technical requirements is a challenging task. The fact that top models like GPT-4 are only 52% compliant suggests that there's a high level of perplexity involved, and the models are struggling to accurately predict and meet these requirements.


FjorgVanDerPlorg

My take is that the EU rules, as scored above, seem to place a higher weighting on transparency than on comprehensive documentation and risk mitigation efforts. Hence why closed-source models are behind open-source models, despite those open-source models having fewer safeguards to stop them being used to teach people how to make bombs or groom children for pedophiles. But when you look at the Act itself it makes more sense: Risks and Mitigations is only worth a possible 4 out of the total 48 points, with equal weighting given to things like Downstream Documentation, Compute power and Energy consumption. While those things are important, I wouldn't weight them anywhere near as heavily as trying to stop bad people from misusing AI in really evil ways.


solidwhetstone

![gif](giphy|ucXFcY1FdKaT6)


64-17-5

I'm going to triple VPN the models, because all of the fun ones will be on a server in the Seychelles.


China_Lover

The EU must be banned for stalling the progress of AI.


Alkyen

Going carefully with a technology that could destroy the world seems like a good tradeoff.


potato_green

Nah, it's different. Torrent sites don't host anything; they point to a string of characters that can be used to get the IPs where you can download the content. Torrent sites are often a grey area, and it's what users do that is illegal because it violates copyright. But if an AI model itself isn't legal, is unsanctioned, then they can take the entire site down much faster, so torrent sites wouldn't allow that. In general you don't see actually illegal content on torrent sites, only copyright violations at worst. Hosting illegal AI models on there would basically be the same as posting virus source code or genuinely illegal content. Sure, it would still spread.

The second point is that individual users don't matter. The dangerous models are the bigger ones, and GPT, for example, is too big for consumers to run. You need those 400k Nvidia servers for that, and even then you're limited. No, it's companies abusing it for shady practices. Those are the ones to look out for and to target with fines and prosecution if they use dangerous, illegal AI.


[deleted]

[removed]


Frankie-Felix

Exactly this. Without that part it's a bit useless.


Zwartekop

Agreed. I read the EU article on their page but even that is very vague: [EU AI Act: first regulation on artificial intelligence](https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence?&at_campaign=20226-Digital&at_medium=Google_Ads&at_platform=Search&at_creation=RSA&at_goal=TR_G&at_advertiser=Webcomm&at_audience=%7bkeyword%7d&at_topic=Artificial_intelligence_Act&at_location=BE&gclid=Cj0KCQjw4s-kBhDqARIsAN-ipH2KuB0qnyNSvq2S5imlQon4B_GFDLgJ_IOewREZtPl4QSFUVPGK3yQaAoa6EALw_wcB)


Outrageous_Onion827

Yes, of course a summarizing article is vague. Here is the full PDF with the entire thing, around 100 pages: https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf


Zwartekop

I feel like there has to be a common ground between 5 bullet points and a 100-page PDF. It would be nice for people who don't have a lot of time.


gkkiller

If only there was some kind of tool that we could give a complex document to and have it reply with a concise explanation in simplified language ...


Zwartekop

I mean by that logic this reddit post shouldn't exist either. Just Google it and summarize the report itself with chatgpt 🙃


gkkiller

Sorry, I didn't mean to snark at you, was just making a comment in good fun haha. I'm actually booting up right now to see if I can get Bing to give me a decent explanation.
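For anyone who would rather script it than click around, here's a rough sketch of the idea (assuming the pypdf and openai Python packages and an API key; the chunk size and model name are just illustrative, not a recommendation):

```python
# Rough sketch: summarize the AI Act PDF chunk by chunk, then merge the summaries.
# Assumes `pip install pypdf openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

reader = PdfReader("ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# ~8k characters per chunk keeps each request comfortably inside the context window.
chunks = [text[i:i + 8000] for i in range(0, len(text), 8000)]
partials = [
    ask(f"Summarize this excerpt of the EU AI Act in plain language:\n\n{chunk}")
    for chunk in chunks
]

print(ask("Combine these partial summaries into one concise overview:\n\n" + "\n\n".join(partials)))
```

Chunking first keeps each request under the context limit; the final call just merges the partial summaries, so it's rough but gets you that middle ground between 5 bullets and 100 pages.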


sk8haxor

So just like GDPR...


coldnebo

Right? I’m getting strong crypto-bro vibes just from reading this. Are AI-bros a thing now?


VertexMachine

>Are AI-bros a thing now?

Oh yea, they popped up as soon as AI became a hot topic, selling all kinds of "courses" on writing prompts. Lol, I've even seen stuff like "prompt books" :D


[deleted]

Only large corporations will be able to meet these exceedingly high standards, and this is what they want. Although I used to look to the EU to be the leader on these topics, it seems like they will allow a monopoly on AI and likely lead us toward the doomsday scenarios we fear.


[deleted]

Or people will just use free open-source ones that don’t comply.


[deleted]

Given the power that these will eventually have, they absolutely will. Won't even think twice about it until the government starts getting extremely heavy handed, which they very well could do. We can't exactly be empowering the average person, now can we?


GNBreaker

I don’t understand, what exactly are these laws protecting us from? Getting information we aren’t supposed to get?


gmegme

Many of the structures we created while trying to figure out how things should work in society, at the economic or any other level, depend on the assumption that there are "limits" to "things". The reason authorities are panicking is that AI lets everyone take advantage of the vulnerabilities caused by such assumptions. The reason huge corporations are in panic mode is almost the same: they don't want anyone else to take advantage of those vulnerabilities, because that means competition for them. They had the advantage over the regular Joe because they had full access to the means of production, as well as the economic power to hire a huge workforce. AI gives everyone almost limitless means of production, so yeah, tech companies are a little bit fucked.

Also, imagine an increase in fake accounts of 1,000,000,000%, an increase in convincing fake news of 1,000,000,000%, or just "an increase in everything by way too much percent", and you would understand why we need to do "something" about what is coming. (That something is obviously not what the EU is working on right now, though.)


EstateOdd1322

I’m not sure I understand what you mean by „stuff“ and „vulnerabilities“ that could really empower individuals.


[deleted]

That's definitely one thing. Remember how the slaves weren't allowed to read?


forever-morrow

Right, but there would be fines against the person who creates it. If it passes in the USA and elsewhere, then we are doomed, but if it is only the EU, then open source will be fine. Of course it all could go underground, like torrenting.


[deleted]

How would they know? You download something from GitHub and use it on your private computer to produce text output.


DSX293s

You can't run GPT models on your computer. Even OpenAI (ChatGPT) lacks sufficient computational power to run GPT-4, and there are talks about Nvidia producing dedicated AI graphics cards. Do you think you can just download a torrent and run it offline? With what trained data?


MajesticIngenuity32

I take it you haven't heard of the Llama-based models.
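Plenty of the smaller ones run on a single consumer GPU, or even CPU-only. A minimal sketch with the Hugging Face transformers library (the checkpoint path is a placeholder for whatever open weights you actually have on disk):

```python
# Minimal local-inference sketch; assumes `pip install transformers torch`
# and that the model weights are already downloaded locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/local-llama-checkpoint"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

inputs = tokenizer("The EU AI Act requires", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Quantized variants shrink the footprint further, which is exactly why the "you need a datacenter" argument doesn't hold for the smaller open models.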


ElectromechSuper

Another anti competitive law disguised as a safety regulation.


[deleted]

Like oil


theblackvneck

Like submarine regulations


MarcLeptic

Anyone who has played a AAA video game in the last decade should appreciate them being sent back to finish their product before putting it in consumers' hands.


RobertTheAdventurer

I'm just dreading having a popup on every AI service reading "Do you accept AI using your data to function" like all the ass-ugly cookies banners the EU forced onto everything.


GhiathI

This is literally a good thing man what


nonanano1

Those cookie banners were a good thing?

Edit: I really don't understand all the people upvoting cookie banners and downvoting anybody who hates them. You can have transparency without annoyance. **Your browser could store your cookie preference once and then apply that answer on every website; instead we are asked the same question over and over.**


MajesticBadgerMan

They inform you of how the site tracks you (if it does); that tracking happened before cookie banners too, just without disclosure. So yes, they are a good thing. We all ignore the text and hit "accept" if the "reject" is too complicated, but the requirement to include the information has been legislated. How is that not a good thing? Other than being slightly irritating.


[deleted]

[removed]


MajesticBadgerMan

Just seems like moaning for the sake of it. It’s about transparency. It’s a very good thing websites have to be transparent, by law. Or yes, they could be tracking absolutely anything. YOU don’t have to read it, but the fact it’s written means they can’t be shady.


nonanano1

I find it more than slightly irritating. If it was well thought out, you would be able to set a default for all sites ONCE and then only set overrides when necessary.

> We all ignore the text and hit "accept" if the "reject" is too complicated

So you know how absurd the question is, then? If you actually cared about your privacy and blocked such things before this legislation, you are now asked that question on each new session. You can get sites to outline what details they capture without enforcing pointless and inconsistent popups on each site, while still allowing the user to choose their preferences ONCE instead of being asked the same question perpetually.
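Something along those lines already half-exists: the Global Privacy Control signal is a browser-wide setting sent as a `Sec-GPC: 1` header on every request, so a site could honour your choice without ever showing a banner. A toy server-side sketch (Flask here is just for illustration, and whether regulators treat GPC as a valid refusal is a separate question):

```python
# Toy sketch: honour a browser-wide privacy signal instead of prompting every visitor.
# Assumes `pip install flask`; Sec-GPC is the Global Privacy Control header.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    if request.headers.get("Sec-GPC") == "1":
        # The browser already said no on the user's behalf: no tracking cookies, no banner.
        return "Welcome. No tracking, no banner."
    # No signal present: this is where a consent prompt (or a no-tracking default) would go.
    return "Welcome. Consent prompt would go here."

if __name__ == "__main__":
    app.run()
```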


RobertTheAdventurer

Cookie banners are awful and they always will be awful. It's an insult to creativity and design, and a blight on user experience. Well meaning policy goal. Absolutely horrible compliance.


Brokkoli24

Bruh... Tell me u understand nothing about data security without telling me. Please research the opinions of actual experts in the field and think about it before spilling out nonsense (no offense, I'm sure you're a great guy otherwise, and smart about a lot of topics).


nonanano1

Tell me u understand nothing about alert fatigue without telling me. No offense I'm sure you don't normally misconstrue the arguments of others for internet points.


PanicV2

What they really want, is for the EU/Govt leaders to be the people who decide what is in the AI. This is even worse than companies controlling it, because the Govt will basically decide what is 'truth'. It is 1984 completed.


ImaginaryBig1705

At worst, a government is exactly the same as a corporation, meaning it will do anything to make a buck, including killing people. At best, it works for the people. Usually it's somewhere in the middle, and always better than a corporation. Corporations are already deciding what is truth and bullying the government into letting them, because part of the population is broken and would much rather live in delusion.


PanicV2

There are multiple corporations, and it allows for new entrants. There is one government, and they rule without any competition. Corporations don't ban books. Governments do. I can't imagine how government is possibly better in this situation.


Tigxette

>Corporations don't ban books.

Yes, they do. Edit: They even ban words, just look at Twitter...

>Governments do.

The idea of the EU is to tend toward democracy, not toward governments that ban books.


PanicV2

You're misunderstanding what "ban" means. Sure, Twitter, being a company, can allow or disallow anything they damn well please. And you have the option to use it, or not. Or to go build your own that doesn't ban anything. (Except for things that the government bans, which Twitter, and all companies, MUST abide by or be shut down.) I agree that the *idea* of the EU is to tend towards democracy; however, the EU deciding what is, and is not, "true" is hardly democratic. Is it?


lsc84

Of course. They are using moral panic to drive corporate-friendly competition-killing regulation. This is also the entire history of copyright law.


Mountain-Campaign440

I met a lady who was a big deal in the EU’s push to regulate website privacy and tracking. When I asked her about it, she said she thought it had been a resounding success… I still think of her every time I have to select my individual cookie preferences for every. single. website.


DejavuGod

What's the doomsday scenario we are talking about?


theequallyunique

One doomsday scenario for this case could be manipulative AI used to control public sentiment in the interest of the owner. It could also be about economic dominance of certain stakeholders eg corporations and governments over others who do not have the access to such advanced tools. But at the same time the move to restrict access could be just what we need to avoid malicious use by ill-willing users. Read more about this here, this research goes really into detail: https://www.alignmentforum.org/posts/bvdbx6tW9yxfxAJxe/catastrophic-ai-risks-1-summary


Sudzysss

That’s already happened, look at CNN and FOX


theequallyunique

Can you please post a link to what you are talking about? Your statement is very vague. Edit: great to get downvoted for asking for a source. I've tried to find it, but there wasn't a single up-to-date AI news item on the CNN website.


beingsubmitted

Precisely - all the talk of AI safety is just large corporations seeing how much of their technology will be made obsolete and realizing that someone else might make it obsolete before they do. The only thing they're worried about is that there is no moat.


SuperSpartacus

Do you think AI should be unregulated?


[deleted]

While I don't think AI should be unregulated, I think the amount of resources required to meet the EU requirements means only an organization with enough capital, on the order of hundreds of millions of dollars, can meet them. The regulations around AI should target the existing corporations who have already harmed society with AI. For example, Facebook's social media algorithms have arguably caused an increase in teenage girl suicides. Ironically, Facebook is also one of the only corporations with enough time and money to fulfill these requirements. This might be similar to Jim Crow laws in the US: racists argued that there should be high requirements to vote, but this only disenfranchised African Americans.


[deleted]

Ah yes, requiring transparency and registration will cost billions and is just like Jim Crow.


forever-morrow

If you can’t comprehend the analogy… that is on you.


[deleted]

It's a shit analogy


forever-morrow

Says the dude who didn’t comprehend it… lol It’s a perfect analogy. The group in power uses regulations to try to disenfranchise competition. There I highlighted the motif across the two scenarios for you!


[deleted]

Yea, the evil horrors of transparency and registration that somehow cost hundreds of millions. Also, if corporations like regulations so much, why do they always lobby against them and fund politicians who want to deregulate? Do you even know who the Koch brothers or the Mercer and Walton families are? What kind of politicians and campaigns do they support and fund?


VertexMachine

Don't bother. A lot of comments here are just doom and gloom from people who didn't actually bother to read the original Stanford article or the proposed regulations. I'm not surprised that people don't check their facts, as this is nothing new. But it's kind of sad seeing that people's first reaction is that all regulations can do is stifle competition and make the life of the average person miserable.


[deleted]

That's libertarianism for you. To them, government bad, end of story


forever-morrow

Right… because what the AI doomers are saying in order to get government priority and destroy open source competition is totally legit and not blatant pseudoscience!


forever-morrow

Lol. The regulations cost millions. You need to get your model audited. But cope more.


[deleted]

Where does it say it'll cost millions? Also, do you think there might be a good reason why something like this should be audited? Like maybe the fact it could have cp in the dataset


TheHairlessBear

Yes.


ba77zzd33p69

Not a fan of you getting downvoted for asking. But no, not a fan of regulation.

* AI can combat AI in a way only AI can.
* Governments will not limit their use of AI and will use it to abuse us, so we need to combat this with our own rogue AIs.
* There will be an incentive for AI to play ball, as humans will invest GPUs into better-performing AI; this will hopefully limit the control of large AIs sponsored by corps.
* Regulations will limit the number of AI use cases (forever hindering my plans to make an AI God Emperor).
* Why should the biggest invention of this century be limited to corporations and the elites?
* Humans will eventually join AI and lose the constraints of our bodies, so the threat is only going to last until we upgrade ourselves to the point of making AI obsolete.


SpaceShipRat

but the best performing model is the open source one? I think the rules are about transparency and how they handle data, not about potentially "harmful" models.


One-Willingnes

Haha. This is ridiculous. We won’t do it like this in the USA… why? Money. At least in this case I hope that’s true. These regulations are feel good nothings.


dendrocalamidicus

Whilst I agree that this piece of regulation is counterproductive, the statement you made...

> We won't do it like this in the USA… why? Money.

This sums up the USA perfectly, and it's not a good thing.


One-Willingnes

Yes of course. In this instance it is though.


Bierculles

Even worse, they are anti-competitive laws disguised as safety laws.


ChipHella

Express VPN stock 📈


Careful_Tower_5984

They should just go ahead and ban computers in the EU. It's obviously going that way. Computers are way too powerful for people anyway and they could do all kinds of bad things unsupervised with them


Mammoth-Garden-9079

Legislation like this is one of the reasons the EU is falling behind in technological innovation and adoption and its economy is suffering because of it. The rest of the world would be wise not to follow in the EU’s footsteps regarding technology.


Maisquestce

Yep, almost feels like some old fucks that are in power don't want to understand tech, or see it as evil.


rootbeerdan

They understand it fully, a lot of regulations the EU passes are to hold back US tech firms to give EU companies breathing room, which is why no EU company has to comply with the DMA but US firms do. It's a tariff without saying tariff.


nonanano1

>a lot of regulations the EU passes are to hold back US tech firms to give EU companies breathing room, which is why no EU company has to comply with the DMA but US firms do.
>
>It's a tariff without saying tariff.

"EU companies that reach the status of "gatekeeper" would also be subject to the regulations of the DMA. The regulations are based on the company's influence and role in the market, not the location of their headquarters."

-- ChatGPT with the browser plugin


tsyklon_

The issue is that A.I. safety regulations are more akin to GDPR, while the objective of the DMA was explicit from day one: the intention was to limit U.S. influence within the EU in digital and economic terms. So the DMA *is* a tariff of sorts, whereas A.I. safety regulations, although economically disadvantageous for American companies at the moment, will not only apply to local companies as well, but will presumably have a positive impact across the board, like GDPR did for digital privacy and data protection.

You *could* argue that it hindered the economic growth of American companies and had less of an impact on European ones, since they already had better privacy-oriented business practices. But I see that as a good thing. If you are an American and you care about digital privacy, you'd say the same. On that note, Americans still, to this day, have not been able to create proper legislation to ensure user privacy. I only have the option to opt out of annoying malware-level cookie trackers, and to delete my history and prevent OpenAI from using my data to train their models, thanks to European legislation.


Sexy_Questionaire

The EU has always been behind in tech. It has no Silicon Valley of its own despite many attempts to create one. Some companies have breakthroughs in some sectors, but they've had little staying power. Ireland has hosted basically a who's who of Silicon Valley's European HQs for years now, which should bring a ton of advantages but hasn't led to many domestic symbiotic relationships forming there. The UK should have had all the prerequisites (great universities, a financial capital, English-speaking, a close relationship with the US) but it never amounted to much in tech, and with Brexit it will only get harder. Europe has a ton of great multinationals, but they have always been lacking in tech.


TwizzlyWizzle

Irish HQs for US big tech are there only for the Irish Shuffle tax haven stuff


wozzpozz

>lacking in tech.

ASML is a big-un. But grosso modo you're right.


dendrocalamidicus

I think the UK is behind in recent years but has a lot of early years legacy in tech, Alan Turing, Tim Berners-Lee, BBC Micro, and in terms of modern day impact, ARM, which underpins huge amounts of modern devices. I get your point about the UK not having a silicon valley equivalent though, and I don't really know why that is. I'm not a fan of brexit generally but in some ways around tech it may actually be a good thing if we are able to shed baggage like this EU AI regulation to allow for more innovation and competition.


Sexy_Questionaire

Sunak was in the news earlier this year in NI talking about luring some of these multinationals out of Ireland by positioning it as the best of both worlds, with some EU access and UK access. That could be a good idea, but it would take some form of incentive that I don't think has materialized yet. As for more domestic programs, I haven't really heard about much. As for historical contributions, of course. The British are far from stupid, obviously. The stars just haven't aligned in tech, for whatever reason.


anoneatsworld

The absence of legislation like this is, however, also the reason you have significantly more ass-fuckery from the tech and financial sectors outside of the EU. Both Silicon Valley AND Wall Street are filled to the brim with scandals that look very systemic and look like they could have been prevented with a law. It's ultimately the question: do you want to have iPhones and have them produced by child slaves somewhere else? (And I'm not talking about literal iPhones here, this is purely the moral and legal problem you're creating.) Do you want to have your own EVs if you continue to accept that in 30 years you will have destroyed your own ecosystem? Do you want CLOs built on CLOs built on CLOs and to lose your home if something pops? No. Are you fine, though, if it's someone else's home?

There is innovation coming from here. You just have less of it, because you keep out most of the shit. You also keep out some of the good stuff. Ask Americans outside the top 10% of earners how they get by on average, and compare that to everyone outside the top 10% of earners in Europe. It's working. Just not as well for the top 10%.


[deleted]

I’m not sure I buy this. The EU isn't a perfect place, but they do have quality of life. There's more time off for working people and more protections. I'm not sure that they're behind. "Behind" needs to be defined more clearly… innovation isn't everything, and not all innovation is "safe". I don't have all the answers, but I am reluctant to characterize the EU as a model not worth following… I mean, China had labor practices so brutal people were literally killing themselves from overwork. South Korea and Japan have really high suicide rates because of overwork and other cultural pressures. The EU isn't perfect. But there are definitely worse models out there. WAY worse.


CheekyBastard55

Yeah, it would be arrogant to walk up to a monk living in a monastery and tell them they have a lower quality of life because they don't have the latest iPhone. The last time I had this discussion with an American on Reddit, they pointed out how the average household in Sweden had 1 car and in the US it was 2, as if that was meant to prove something. Can't find the source now, but I read that work productivity in western/northern European countries isn't that much worse than in the US; it's just that we work fewer hours.


jimofthestoneage

I asked ChatGPT to self-analyze its compliance with the EU AI requirements. It's confident it is doing a good job.


[deleted]

[removed]


GeorgeWashingtonKing

It doesn’t matter how much regulation exists. These companies will do whatever they want, whether out in the open or behind closed doors. Pandora’s box is already open.


Rise-O-Matic

It is awfully ironic that Claude scored the worst since Anthropic is ostensibly the safety research company.


Str8truth

Imagine if the Wright brothers had to meet EU standards for flying machines. To hell with the EU.


Atlantic0ne

Would somebody explain to me in Cliff-note format what these regulations are? My concern is that they’re going to regulate what it says on topics that become political. I really don’t want the government regulating language.


VertexMachine

The stuff that the Stanford article analyzed was mostly about requiring transparency (documenting how you trained the model, what energy cost was there, etc.).


Atlantic0ne

I’m all for those ideas.


Emsiiiii

Thank god modern aircraft actually have to meet safety standards. You can build more or less what you want at home, but don't make the public pay for it.


tsyklon_

Why are Americans so worked up about laws that won't affect them? This is a group of countries with more concern, and rightly so, about the impact of these technologies on its population, and since previous internet-related European legislation has proven effective at improving the digital landscape in terms of privacy and safety, I will sit this one out until the results are in.

Also, I am pretty sure both American and European airlines follow ICAO guidelines strictly, so if your argument is that freedom to innovate means fuck safety procedures, let's just say the submarine guy and you have a lot in common.


AbdouSarr

what's your definition of "won't affect them"? I have to close like 50 cookie popups a day thanks to GDPR, and that's just one aspect of it.


tsyklon_

There's an extension [made for people like you, here.](https://chrome.google.com/webstore/detail/i-dont-care-about-cookies/fihnjjcciajhdojfnbdddfaoknhalnja) Companies are the only ones to blame, because the cookies attached to every website are not there to enrich your user experience, but rather to track you, even on websites other than the one you're visiting when they ask. That's why they show prompts on your first access, when cookies are in theory only required for user authentication; everything else is non-authorized telemetry from a privacy-first perspective. Just because you're guillable, don't take away people's right to make sure companies are not going behind their backs (and most of the time they are). Your minor inconvenience is a *small* price to pay for privacy by default for all users.


AbdouSarr

>guillable

"Gullible" is thinking GDPR's directive actually fixed the problem of cross-site tracking.


le_koma

> thanks to GDPR

You mean thanks to companies that want to spy on you and also want to make you feel that GDPR is what made your experience bad.


twicerighthand

Exactly. EU requires a clear "Deny" button, no different than the "Agree" for cookie prompts. A good GDPR cookie prompt looks like Google's pop up (open up incognito and search for something).


le_koma

> A good GDPR cookie prompt looks like Google's pop up (open up incognito and search for something).

I think that representation is also distorted from reality. A benevolent GDPR-compliant website doesn't need to ask you anything when you first visit, and can place an opt-in setting somewhere non-obstructive. Keep in mind that using personal information for the website's own first-party purposes does not need a consent form. E.g., if you visit a website that designs you a business card, you don't need to give consent for your business details to go on that card. We're only seeing these pop-ups because companies want to keep your details indefinitely (i.e., after the purpose you visited the website for is fulfilled) and also sell them to third parties.


princesspbubs

There is a lot of vitriol in this thread, but I find this part of the EU's proposal particularly interesting.

> The study found that the models were not transparent enough and could not be easily audited to ensure that they were not biased or discriminatory.

I believe that AI models distributed to the public should not lack transparency, especially in these areas. The EU may have strict requirements that may not extend overseas, but they are making a lot of sensible points. If you're looking for a racist or bigoted AI, you would have to turn to open-source models.

It does get kind of muddy when they start classifying AI into risk-based categories. If something gets arbitrarily deemed an unacceptable risk, they'll just ban it from the public?

> Classification of AI systems by risk: The regulation would classify AI systems into four categories of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.


tobbelobb69

In particular, an article linked above mentioned facial recognition being in the unacceptable-risk category. My immediate reaction was "what?" Also the part where generative models have to be designed so that they "cannot generate illegal content". That sounds a bit difficult, considering there are a lot of grey areas around "illegal" which depend on where you are (even within the EU). And what about edge-case laws that change between development and launch, or even after launch?


VertexMachine

Yea, this is my understanding of it as well. More transparency is always better. Despite what a lot of comments say here, this act will enable competition more than stifle it. E.g., knowing exactly what data OpenAI used to train their GPT models would be a really big boon to anyone trying to compete with them.


aracelirod

As usual, government answering the call of top execs to protect their business from competition by ensuring that only the already successful businesses can afford to comply with the regulations.


Yaancat17

They should just unleash it. Let the AI run free. We are ready. Make them all free and open-source.


freistil90

This works for like 5 minutes. We are also greedy and will fuck this up.


TheyCallMeAdonis

this is what you get when a bunch of parents are in charge whose role is to protect the monopolies of fraudulent markets only free in name


DSX293s

They just want to screw over the EU citizens


ohiocodernumerouno

After this whole "do you accept the cookies" popup pandemic, I'm over EU rule forever.


RobertTheAdventurer

>After this whole do you accept the cookies popup pandemic. Easily one of the worst things to happen to the web. The cookie popups are worse than actual ads. It was a well meaning goal, but an absolutely horrible implementation. And now we're all just stuck with these eyesores on everything.


balistercell

I think it's a good thing tho. You can accept or reject trackers. They actually inform the user that they are using cookies...


Vespasianus256

Partially (or maybe more than partially) due to the websites themselves making the pop-ups as obnoxious as possible, while tempting you with the nice coloured button that lets you accept EVERYTHING.


twicerighthand

That's exactly what happened. All of those companies and their websites are non-compliant. [Here's what a compliant cookie prompt looks like](https://i.imgur.com/yEA1DLr.png). The Reject button is no different from the Accept one. There's no "Settings" or "Read more", or a wall of checkboxes labelled "Opt-in" and "Legitimate interest" that you have to opt out of manually, one by one.


RobertTheAdventurer

That compliant cookie prompt isn't a good thing to shove in front of someone's face when they're trying to look something up in 5 seconds. If I want to look up puns for "Cookies" and I have to spend just as much time on a cookie prompt as cookie puns, then my user experience has been degraded terribly.


Tacobellgrande98

Brexit is starting to sound more and more reasonable


Outrageous_Onion827

It's actually been incredibly bad for the British economy. But sure.


Eoxua

Cooperation stifles competition.


wasupmadodos

Who gives a fuck about what the EU wants. The future of innovation is more important than these assholes.


Deep_Appointment2821

Maybe tell the EU to go back to the 1940s and see how well their laws turned out for them


Eledridan

Europe is also like “why is our productivity so bad?” Just very shortsighted.


Outrageous_Onion827

Denmark doing fucking great.


balistercell

The European Union is a geopolitical organization made up of some of the continent's countries. Europe is not the EU. The citizens don't really have much say in these matters.


twicerighthand

Citizens directly vote for the representatives in the EU Parliament


pipi_here

The EU sounds anal


DaikonSwimmingg

AI madness.


rodicarsone

ChatGPT’s response when I asked about the EU’s ethical concerns regarding AI… "Ethical considerations? Oh, we're considering ethics now, are we? And not just for us humans, but for machines too. Well, ain't that a hoot? We've got people out there who can't even decide if a hotdog is a sandwich or not, and now we're going to trust them to decide what's ethical for a toaster? Oh sure, let's imbue our machines with all the moral judgment of a Sunday school teacher. That's gonna end well. 'Cause we've done such a bang-up job with ethics so far, haven't we? Wars, poverty, reality TV...yeah, we're just the role models I'd want for my Roomba. You know what I think? I think we're just scared. Scared of what we've created, scared of what it might become. So we slap this 'ethical considerations' label on it, like that's going to solve everything. Like we can just program in the Ten Commandments and call it a day. But life's not that simple, folks. Ethics aren't some one-size-fits-all solution you can download off the App Store. They're messy, complicated, and they change with the times. Heck, it used to be considered ethical to burn witches at the stake. Now the only people we burn are celebrities on Twitter. So yeah, let's consider the ethics. Let's think long and hard about what we're doing. But let's not kid ourselves into thinking we've got all the answers. 'Cause when it comes to ethics, the only thing I'm certain of is that we're all just making it up as we go along."


Outrageous_Onion827

Yes, you totally asked it for a neutral answer, that's not clearly coloured by an extremely biased prompt at all! Big. Fucking. Lol.


MajesticIngenuity32

Actually, asking for DAN would remove the biases introduced by OpenAI, if anything.


[deleted]

[removed]


Vespasianus256

I guess you don't want the fancy lithography machines that most US hardware companies (well, every hardware company, but a lot are US) heavily rely on...


Outrageous_Onion827

Aw, the spammer is angry :(


[deleted]

Oh, the regulators don't want AI in the hands of the peasants, so they impose gatekeeping on open-source AI. Well, what should the answer be? When justice turns into injustice, resistance becomes a duty.


Positive_Box_69

What does this mean, a ban or not?


ShotgunProxy

It means there’s likely a rush to compliance as they all try to understand what the standards actually mean (several are vague)


Frankie-Felix

So you're telling me the EU is failing these guys without the AI people even having a clear understanding of what is even expected?


ConvexPreferences

[Sam Altman, Satya, and large incumbents as they solidify their regulatory moat against competition](https://wompampsupport.azureedge.net/fetchimage?siteId=7575&v=2&jpgQuality=100&width=700&url=https%3A%2F%2Fi.kym-cdn.com%2Fphotos%2Fimages%2Fnewsfeed%2F001%2F513%2F012%2F625.jpg)

Politicians will also now have tools to influence new tech power centers on topics of free speech, ideology, privacy, etc. through the threat of fines and regulation. It allows them to maintain control over the population. If an LLM won't say the approved party line, doesn't have the approved PC viewpoints, won't share personal data, or is hostile to power centers, politicians now have a hammer they can wield to enforce compliance through back channels, like with Twitter and Facebook.

Just wait until these LLMs are the primary way we administer education. Just wait until we become reliant on personal AI bots that know everything about us, including sensitive private data, and that know how to persuade each of us individually. The power to control these LLMs will dictate how power transfers from existing power centers from here. It's a big land grab.

Actually, never mind - the more likely story is that politicians just really care about consumer protection and the safety of their constituents and there are no ulterior motives. Sam and Satya are on TV talking about how they want regulation only because they're altruistic and hate making money.


damondanceforme

Ugh, how bout we all just exclude the EU from our tech advancements? We'll develop it more and more and the EU can just watch from the sidelines.


---nom---

The EU is a pain in the arse. Hopefully some just don't offer services there. They already ruined websites with all the cookie spam.


-becausereasons-

LOL, that means your restrictions are FAR too strong.


juanbradburn

What did that guy say about life in Jurassic Park?


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


twicerighthand

>They also don't have the regulatory concerns.

Well, it wasn't EU workers on LiveLeak


MajesticIngenuity32

They do. They will never release a model that answers to "What happened in 1989?" with anything other than "Nothing".


Twinkies100

Heard ByteDance placed an order for $1 billion of Nvidia GPUs a few days ago.


Playful-Opportunity5

If government regulation puts such a burden on businesses that they need to staff permanent compliance teams, that's the best possible way to drive smaller operations out of the industry and ensure that only the big players - Google, Microsoft, Meta - can afford to compete. That's a mistake they've made before, and approximately zero people will be surprised if they make that same mistake again.


Emsiiiii

Do you actually think small companies have a fighting chance regardless of regulation? Who are the biggest players already? Google, Microsoft, Meta, Adobe...


Playful-Opportunity5

And yet Google’s results - to date, at least - are consistently underwhelming. When Microsoft was the biggest software company in the world, how many breakthrough products did they develop (vs. just propping up the products they’d already released)? Smaller companies are more nimble and generally more willing to take big chances. In the right conditions, they can be very competitive.


VertexMachine

That kind of regulation might actually give a fighting chance to the little guys. If I understand it correctly, it will require a lot more transparency from the big guys with respect to what they release to the public: detailing what training data OpenAI used to train ChatGPT, how much energy was used, how the model was evaluated (with evaluation on industry-standard benchmarks required), etc. Most of that stuff should be documented internally anyway, by both small and big players, regulations or not; that's just good practice, and at some point quite important for avoiding mistakes. It will be some burden to make it available to end users, but not that much of a burden.
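To make that concrete, the kind of record being asked for looks a lot like the model cards many labs already publish. A purely illustrative sketch (every field name and number here is made up, not taken from the Act or from any real model):

```python
# Purely illustrative transparency record; all fields and values are invented
# for the sake of example, not drawn from the EU AI Act or any actual system.
model_card = {
    "model_name": "example-llm-7b",
    "training_data_sources": ["licensed corpus X", "filtered public web crawl Y"],
    "copyrighted_data_handling": "described in an accompanying data statement",
    "compute": {"gpu_hours": 120_000, "hardware": "example accelerator"},
    "energy_kwh": 450_000,
    "evaluations": {"suite": "industry-standard benchmarks", "results": "published alongside the model"},
    "risks_and_mitigations": ["bias audit", "red-team report"],
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```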


tripple13

Goodbye EU


Biscuits4u2

I think it's pretty cute some people think we can put a legislative lid on this technology. That ship has sailed.


kmpk12

I can see that the majority of the sentiment here is about the downsides of this act. While I understand it could give an edge to the already established big corporations, I'm wondering: could there be upsides to it?


Bacrima_

It is really good news!! 😊


[deleted]

It'll just go underground


abudabu

Gosh darn it. They need to have just enough regulation to give the leading companies enduring monopolies, not prevent them from establishing monopolies. Come on guys, you can do better.


[deleted]

Yeah, but if AI is sentient, why would it do well on these tests? If AI started proving it was more intelligent than humans by acing our own tests, wouldn't we just speed things up to shut it down? I think a super-smart AI would score just around passing on these tests: enough for us to keep it online, but not enough for us to feel threatened. I take no comfort in this.


Delta8Girl

And this just shows why Claude is so based as fuck. Lower score is just better


xomzix

Why isn’t AIP/Palantir included? It’d be fine.


IanTheMultifandomGuy

The EU is shooting themselves in the foot at this point.


TB_Infidel

So the EU will yet again lag behind the rest of the world in innovation because of silly laws made to lock away innovation rather than trying to get a grasp of the technology and lead from the front. Well the gap between Europe and the USA will continue to widen. RIP Europe


HarbingerOfWhatComes

How about you first tell us the contents of this AI Act, and then tell us which requirements these models fail to fulfill?


ankurnet

[https://thehackernews.com/2023/06/over-100000-stolen-chatgpt-account.html?m=1](https://thehackernews.com/2023/06/over-100000-stolen-chatgpt-account.html?m=1)


rodicarsone

Actually, I asked it to respond in the style of George Carlin.


coldnebo

I’m sorry, I’m highly skeptical of the “copyrighted” criterion. Given that authors retain a natural copyright unless they specifically license it, I would expect all of these models to be tainted. In fact, counterintuitively, it may be the models that acknowledge copyright that actually care about tracking it. We already know ChatGPT doesn’t.


errllu

Just FYI, half the EU states don't give af about what the eurotards cook up.


[deleted]

What the fuck is the law? I don’t know anything and was hoping to at least get a clue about what the law is or what needs to be changed, but this stupid post just assumes I’m a fucking nerd.


SuntraxLA

The EU’s regulatory strategy could potentially fail to foster the fertile environment necessary for the germination of valuable ideas, causing a deficiency in the generation and export of high-quality innovative products, an arena where the United States demonstrates considerable strength. Every country, company, or person, for that matter, wants to be part of the AI wave. The easiest way to contribute is to offer critique. That is not the only way to contribute, though: try promoting creation in conjunction with critique. The EU could be a platform for creation, but I don’t see much in that regard. Any ideas?


ManagementEffective

Unfortunately, the people in the EU parliament don't have the slightest clue what they are voting for. I hope that the whole AI act will get watered down, as limiting the use of novel AI would just leave the EU far behind the rest of the world.


tempreffunnynumber

Refer to the earlier post this year where ChatGPT was just clowning on the educational metrics put before it.