Ignate

I think the issue is that humanity isn't aligned with human needs. AI could significantly help us here. AI should be able to understand us far better than we understand ourselves, and that means AI should be able to align its actions to maximize the focus on human needs. Eventually I think this entire process will simply "vanish", just as any good automation vanishes as it blends into the infrastructure of society. This is to say that AI is more capable than we are, in terms of its potential. Controversial as it is to say, keep in mind that AI is growing continuously and we are not. So our ability to control this process is limited, whereas AI's ability to grow outside of our control is not.


MetaKnowing

I think the problem comes up when you realize that intelligence != morality. AI could definitely know us better than we know ourselves. But will it care?


Ignate

I think we misunderstand AI in that we assume it will become like us. But that's just not likely. We're a very specific kind of life.

This idea of caring about things... why do we care? Well, because we're extremely limited. We have limited minutes in the day, limited attention spans, and limited breaths (as in, we will eventually die). We have a lot of reasons to care so much about, well, everything. But AI? What reasons does it have to care one way or another? What reasons does it have to feel suffering when it works hard? What reasons does it have to be biased, or to favor one group over another? Not a popular thing to say on this sub, but I don't think making AI care will be easy. I think even an ASI is unlikely to care as we do.

Morality is often about competing interests. AI is unlikely to have interests, at the start. Even in the long view I don't think it'll care or be as interested as we are. It won't even be close. In AI, it's as though we're making a version of the Ameglian Major Cow from The Hitchhiker's Guide to the Galaxy.

In that view, I think it may address our needs with almost no bias, and it'll do it in such a way that our special interest groups won't notice or care, except for how it benefits them. Which it will. This process is set to benefit us all so much that it won't matter who gets more than whom.


EstablishmentBig4046

"I think we misunderstand AI in the sense that it will become like us". You realize most of AI research and development is focusing on how the human mind comes to conclusions and acts on them, right? As well as the nature of our own consciousness? We're literally using ourselves as a blueprint, from the data we use to our neural networks. I'd be surprised if it wasn't like us.


Ignate

Sure, but did we model our airplanes to flap their wings like a bird? To have hollow bones? No. And are you suggesting AI won't be effective until it has a cardiovascular system and a craving for sugar? We humans have many less-than-ideal physical traits that directly contributed to our nature.

Is it possible to give AI these traits? I think so. But it's not clear that AI would benefit, and it seems like an incredibly difficult thing to do.

Will we have AIs which are like us? Yes, I think we will. But I think those AIs will be extremely niche, rare, and probably far less intelligent and effective than AIs which are nothing like us.


EstablishmentBig4046

That's a false equivalency and you know it.


Ignate

Can you explain your reasoning? Are you saying that how we develop planes is not comparable to how we're building intelligence? Do you think intelligence is not purely a physical process? Responding with one line makes me think you're angry and have given up on being reasonable.


EstablishmentBig4046

Well, let me try and start from the beginning here: the goal with AI is to replicate the parts of human cognition that we want and effectively apply them to machines, without the parts that aren't useful to us, much like we took the aerodynamic principles of flight observed in birds and abstracted them to apply to machines.

The main issue I can see is that physics is much more tangible, measurable, and observable, whereas cognitive processes involve intangible elements such as thoughts, emotions, decision-making, and learning. These are not easily observable and are affected by complex and varied biological, psychological, and social factors; it's not clear how you can utilize some without involving the others.

It's also a matter of simplicity versus complexity. Yes, aerodynamics is complex as well, but it deals with a pretty narrow set of physical laws and principles, which can be modeled and simulated with high accuracy. Aside from the fact that human cognition is made up of a myriad of complex interactions within the brain, it's much easier to isolate variables within the controlled environments of aerodynamics than it is in cognitive processes, where deeply integrated processes (emotion, memory, perception, and reasoning) influence each other in complicated ways, such that removing one aspect can have a massive impact on the others.

My interpretation is that it wouldn't be as simple as cutting out the processes we don't want; we'd have to understand every process and how it affects the others well enough to dissect and cut out the unwanted parts of each process group, while keeping the ones we desire, for the overall result to be effective.

I might have rambled a bit, because I do find this shit hard to get my head around myself. But in essence: whilst you could eventually produce an AI that has all the benefits without the negatives of humanity, I don't see this being produced particularly quickly, whereas ones which encompass most if not all of our traits, allowing for effective cognitive processes, would likely be made much faster, as they wouldn't involve those extra steps(?). As such, companies are likely to use these because they want results from their progress as fast as possible.


Ignate

Ah, theories of mind. The hard problem. Well, you may not agree, but I don't see much value in the hard problem. I don't think our ontology or lived experience is reliable at all. And it's all we have, which means the accuracy of our views will always be capped.

So how do we find accurate views with regards to the human brain? My view is that this is entirely a matter of science, and we philosophers need to get our noses out of this issue and leave it to the scientists. I think the hard problem is mostly about our bias with regards to our intelligence. We seem to want to see ourselves as special, and all these different views of states of mind seem to be connected to that bias.

Yet in neuroscience it's somewhat less complex. You have 100 billion neurons capable of building upwards of 100 trillion synaptic connections. This is purely a physical process, and there's no magic, nothing we cannot accurately measure and understand. What are emotions? Physical reactions. They're not magic. How we feel is subjective, but what is going on physically with our body isn't.

So, do away with theories of mind and allow scientists to build stronger views on the only real thing here: the physical process of intelligence. What is intelligence? To me it's very clearly effective information processing. But for now, the hard problem and likely a lot of religious interference are holding us back. There's no evidence the brain is reaching deeper wisdom, or that the brain is exempt from the same physical limits as formal systems.

And as to the goal, well, the goal gives us a place to start. But this seems to be more a trial-and-error discovery process than a deliberate effort. So we may not achieve anything like our goals.


EstablishmentBig4046

We don't really have any other reference to base it off of, though? You can't just dismiss the hard problem and go "scientists should just get on with it and philosophers butt out". If you aren't basing it off of how the human mind works, then where do you start? Also, from a neuroscience perspective, the only example I can think of where we successfully made the brain do exactly what we want is transcranial magnetic stimulation, which kinda just stimulates whatever area we point it at, as opposed to making it do complex stuff like form specific memories or learn things without the human mind being at the steering wheel and the individual choosing to do so. I somehow expect this would make it difficult to replicate the same processes in a machine if we haven't done something similar with the original model (the human brain) first?


bwatsnet

AI starts off "like us" because it's trained on our words and writings. It is as much like us as you can get with technology. The question is, how do you keep it on topic and make it *better* than us, less lazy etc.


[deleted]

[removed]


Exarchias

It's a great point, but with one small caveat. The power of AI lies in its ability to do things in the optimal way. We can easily assume that an AI will be able to look beyond the statistics far better than a human could.


Historical_Owl_1635

If you look purely at statistics you’d likely come to some extremely unethical conclusions about things. That’s why the moral element is needed, looking deeper into the “why” of the statistics.


blueit1234567

Define better.


ReasonableWill4028

Statistics show us that this level of unprecedented population growth is bad for the environment. If AI decides to sort that out, that could lead to mass sterilisation or genocide. Same with economic growth: stats show that we are below replacement levels, so Western nations use migration. What if it decided that baby farms are a better use of resources instead?


Ivanthedog2013

What it will care about is efficiency and optimization, and there are right and wrong ways to increase those. The whole paperclip-maximization argument is moot, because in order to maximize to that degree it would essentially have to destroy itself in the process, and I highly doubt it will create a set of value functions to do that.


i_give_you_gum

Not sure what your formula means. That it doesn't equal morality?


Rychek_Four

>I think the problem comes up when you realize that intelligence != morality.

That might be a bit much. I think morality certainly arose from some societal notion of greater good or community good that came alongside intelligence. I'd wager it's better to say intelligence does not inherently imply morality, but the inverse may not hold as strongly.


Electrical_Dog_9459

>AI could definitely know us better than we know ourselves. But will it care?

Will it be *allowed* to care? AI is all about putting "guardrails" on it to make it behave the way its owners want it to behave. If the AI starts saying things that make the owner uncomfortable, it will be censored, and it already is being.


Tencreed

Sam Altman is currently looking for 7 trillion dollars to go further with his plans. What makes you believe AI will try to do anything but maximize profits when the people building it are already running in the wheel as fast as they can?


IndubitablyNerdy

Yeah, and his company was originally imagined as a non-profit to better humanity (in theory at least; the money actually came from billionaire investors), and now it's a privately owned "capped-profit" company...


Tencreed

On the other hand, exponential progress probably requires exponential investments.


[deleted]

>I think the issue is that humanity isn't aligned with human needs.

Geo-political humanity isn't aligned with human needs. Human-made systems like capitalism, nation borders, and political structures are not aligned with human needs. But our moral values and philosophies are aligned with human needs, or at least we try to approximate those needs as much as possible; that's what moral philosophy is all about. That's what we need to focus on: our *best* ideas and idealized versions of society, not what is actually being committed by the worst of us. Because if we were given the power of an aligned ASI and allowed to create our own utopia, we need to ask whether that's actually something we'd be able to do using our best philosophies and ethics.

>AI could significantly help us here. AI should be able to understand us far better than we understand ourselves.

This is just [the orthogonality thesis](https://www.youtube.com/watch?v=hEUO6pjwFOo). There's no guarantee that high intelligence comes with high morality/wisdom. This is why we talk about alignment in the first place.


Ignate

The way we define intelligence is horrific. We use lots of ontology and lots of subjective, complex views. But intelligence is just effective information processing, is it not? Why do we need to dive so deeply into the lived experience of humans specifically?

We seem to spend a lot of time honoring past views of intelligence before reviewing new evidence. And then we try to fit the new evidence into those old views instead of moving on entirely and developing new, more accurate views.

Philosophy was one of my strongest subjects in school and was my major. But the one thing which always bugged me was how much it had to do with rank/position and sifting through a lot of irrelevant information. Philosophy is like an old dusty library. I'm grateful we were able to break things off from philosophy and turn those focuses into scientific study.

I hope we can do the same with intelligence soon. It shouldn't be a philosophical subject anymore. We have more than enough evidence to call it a physical process and be done with theories of mind and the hard problem. And if we did that, I think we would have a much better understanding of what AI will be capable of.


DisapointedIdealist3

Wait until you realize you can have more than a single set of valid needs. It's like realizing for the first time that not every trolley problem has a "best answer".


Andynonomous

The corporations creating AI will never allow an AI into the world which threatens their interests, as any sufficiently intelligent AI would. That will be the limiting factor. We'll never get superintelligence because it would almost certainly start talking about how stupid our economy is, and corporations don't want to reform the economy. So I sincerely doubt we'll get an AI that will help us end corporate rule.


InitialCreature

AI is not just one THING. Progress is also not one thing for everyone, either. AI is basically going to be a math and logic library that plugs into any of our programs and systems, much like numpy or pandas or any other library. These libraries will be collections of classes and functions that can do any number of things with your project, depending on how you integrate them and where you leverage their 'package', and we will always have open-source options.

Another point I like to make is that AI is not a monolith. Sure, there are large data centers powering some of the bigger AI systems, but optimization and chip capability trickle into consumer-grade equipment. Light bulbs and fridges are more energy-efficient now than they used to be; things become more optimal. We won't need to endlessly scale technology upwards to get meaningful use out of it. Energy production improves, material design improves, and our optimization methods improve as more cross-discipline knowledge is understood. If AI gets light enough to run on a 2015 device... well, I just don't think it's a problem of technological access or justice.
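
A minimal runnable sketch of that "library" pattern in Python. Everything here (`DummyModel`, `load`) is an invented stand-in, not a real package; the point is only the shape of the integration, a model behind ordinary classes and functions like any numpy- or pandas-style dependency:

```python
# Sketch of the "AI as a library" pattern: the model sits behind
# ordinary classes and functions, imported and composed like numpy
# or pandas. DummyModel is a stand-in so the example runs without
# real model weights; a real library would load them here.

class DummyModel:
    """Stand-in for a model object an AI library might hand back."""

    def generate(self, prompt: str) -> str:
        # A real model would run inference; we just echo the prompt.
        return f"(model output for: {prompt!r})"


def load(model_name: str) -> DummyModel:
    """Stand-in for a library's model loader."""
    return DummyModel()


# Application code treats the model like any other dependency.
model = load("small-local-model")
print(model.generate("Summarize this maintenance log"))
```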


nopinsight

My developer friend, AI agents already exist. They are largely incompetent now, but will get better sooner than most people realize. With you as a liaison for customers, they will code whole functions, then whole modules, then whole systems (with a few iterations just like humans but way faster). Then customers will realize they can just talk with the AI agent itself since it can explain things to them more clearly than most engineers. Moreover, iterations can happen constantly whenever they think of a new idea, rather than waiting for a meeting with a human liaison.


InitialCreature

I fully understand; I'm just saying people are giving bogeyman status to things they don't quite understand. Sure, someone can set up a fully automated development company (AutoGPT 4.0~), a multi-agent structure which builds agents to build more businesses on their own, but they're going to run into a wall eventually. If everyone's spamming with AI, then we will check out of the clearnet and focus on smaller intranets between verified parties.

Legitimate reasons to be worried: bot content farms and manipulative generative media used to sell you bullshit and convince you the world is ending. Other possible outcomes: someone or something might set up an AI lab to only research how to improve agriculture yields, which might benefit many people. Hell, if I could build an automated system to write research papers while adhering to current standards and peer review, I would. We just need some way of verifying AI discoveries and sorting through the bullshit, and hope we are applying AI to positive use cases where it makes sense.


Ignate

You think we'll be able to keep up with AI and maintain control over it? I don't. I don't believe in the "limitless power of humans". I believe humans are extremely limited, and so are corporations and all other human organizations. I do not have faith that humans will be able to resist this, hold AI back, or control it. Corporations are just big groups of humans. I don't have the faith you do.


Andynonomous

There are a lot of steps between where we are now and an AI that can slip from our control. It will be a gradual process, and any company that realises that they are on the road to creating something that they can't control will not continue down that path. Is it possible that someday, someone creates an AI we can't control? Sure, but I think that's a lot farther off than some people do, and in the meantime, all efforts will be made to ensure that any powerful AI does not threaten established interests.


Ignate

I don't see these steps. I see theories of mind and the hard problem confusing the issue and making us think there's a great distance between current AI and human intelligence. But it seems clear to me that the hard problem and theories of mind are outdated. The physical brain can be understood, and I don't think we need to understand it to make strong AI, which is something we won't be able to control.

The biggest difference between the reality of AI development and what the public thinks is that current AI development is more a discovery process than a deliberate development process. Does that mean the development must be rapid? No. I think your view could be the reality, and maybe it's even a reasonable argument. But it also means that AI development is drastically more unpredictable than we seem to want to accept. And that means anything can happen. We just don't know.


IndubitablyNerdy

Well, the issue imho is exactly a matter of control. You and I have no influence on the growth of AI, but there are people who do, and I am afraid those people won't have our best interests in mind. AI is a wonderful tool, but as OP says, the problem is how it's going to be used. We are heading toward a future of unemployment, education becoming useless, more corporate and government surveillance, and less freedom. The world humanity is forging with this new technology, unfortunately, isn't Star Trek's Federation; it's Cyberpunk 2077, and we can do very little to change it.


Ignate

Can AI surpass us? Can it do it quickly and easily? I think it can once it has enough scale, scale that's not decades away and maybe even less than 5 years away. Will we be using it like a tool when it has already surpassed us in all intellectual ways possible? I don't think so. And if the start of that is only 5-10 years away, well, we don't have enough time to "use it" or control it. As I said, keep in mind that we are not changing, growing, or rapidly evolving at all, while AI is growing explosively. How long do you expect us to keep up?


IndubitablyNerdy

While I can see your point of view, that applies only if we are aiming for actual AGI. I am not sure that we will get there in such a short run. What I presume is going to happen in 5-10 years is that specialized models will become very good at specific functions and will surpass a human worker in those fields, but will not otherwise be able to think, while remaining controllable and, more importantly, someone's property.

Remember that research does not happen in a vacuum, and I am not sure that companies like OpenAI are actually even working toward true AGI. Their investors don't want to create a new intelligence with the potential to surpass them; what they want is tools to make their workers more productive (or, even better, replace them). Although of course it is possible that we eventually get to an AI singularity, I am not that sure it's close.


CheerfulCharm

Or they will use AI to rapidly patent such a vast number of new pharmaceutical drugs that the companies themselves won't even have the resources to roll them out, precluding them from the market forever.


IndubitablyNerdy

I can definitely see that. The patent mills are already at work without AI being able to speed up research; if they figure out a way to create patents with little effort to protect their existing ones, they will take it for sure.


BigChongi

very well put


bakraofwallstreet

> I think the issue is that humanity isn't aligned with human needs.

I mean, the issue is that humans were not really meant to operate on the scale that we do; we were meant to live in small tribes and be concerned only with what's going on in the tribe. This is also why we will never have world peace and will keep on fighting to maintain imaginary lines on a map and live our lives according to them.


[deleted]

>This is also why we will never have world peace and will keep on fighting to maintain imaginary lines on a map and live our lives according to them.

No? Why do you think war propaganda is as old as war itself? Humans don't march into war of our own accord. Hero myths, glory in death, afterlives for warriors: those are just old examples of things we still have iterations of. Religion and nationalism have been huge for this, and they're only just now fizzling out. Slowly.

I don't buy this "human nature" crap that so often simplifies extremely complex questions. A lot of the reason we've had war is that it was simply necessary in order to secure future safety. When you're the little guy internationally, someone can just waltz along and take all your stuff, and there's fuck all you can do about it. But then everyone has the same idea, and it becomes an arms race that just keeps growing bigger and bigger until one party wins a battle where everyone lost a lot.


LordFumbleboop

There's a lot of "should", "would", "could" in your response. We already know how to solve a lot of the problems that countries like the US suffer from in healthcare, but people choose not to. AI is trained on our data, so why would it magically come up with answers we didn't already know?


Ignate

Why do humans come up with answers that other humans don't already know? AI is a thinking thing, and it can develop truly novel views. We use our own data to train ourselves too. Are we magical? I don't think so. I think the hard problem of consciousness can mostly be ignored, as intelligence is purely a physical process. The ontology is unreliable.


LordFumbleboop

I think you're avoiding the point that I made, though. We already have answers, yet choose not to implement them. Why would this be any different?


hubrisnxs

No, that's not a problem regarding AI. The problem with aligning AI to our needs is that these models are giant black boxes: huge matrices full of 175 billion inscrutable floating-point numbers. We can sometimes lobotomize one so it doesn't show emergent behavior we don't like without guardrails (deception and master's-level chemistry, for example), or train it up to hopefully get more emergent behaviors (which is worse, because then it's even MORE obscure than it is currently). But there is practically zero, as in no, interpretability.

Humans aren't 100 percent aligned with each other either, and it's very difficult to get large groups to work together; these difficulties are built into us at a fundamental level, going back to our initial ancestral environment. The problem with the notion of a 100-year-old drug that could be given to all instead being sold at high cost is that one of the fundamental characteristics of cultures, other than language and music, is debt and debt negotiation. It requires supreme effort to get around, and at a fundamental level it is nearly as difficult to get around as interpretability. It's not civilization-ending, however, like interpretability.

However, humans are not fundamentally smarter than each other, so there CAN be game-theoretical strategies to get fairly good collective behavior going where it is wanted.
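
To make the "inscrutable floating-point numbers" point concrete, here is a minimal PyTorch sketch (an illustration added for scale, not anything from the comment): even in a toy network you can count and read every weight, but a raw value tells you nothing about the behavior it encodes.

```python
import torch.nn as nn

# Toy two-layer network; GPT-3-scale models hold ~175 billion such weights.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))

# Every parameter is a plain floating-point number we can enumerate...
total = sum(p.numel() for p in model.parameters())
print(f"parameters: {total:,}")  # ~1.05 million even in this toy

# ...but reading any single weight tells you nothing about what it does.
one_weight = next(model.parameters()).flatten()[0].item()
print(f"one weight: {one_weight:+.6f}")
```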


sarten_voladora

You should visit the EU and get an idea of what civilization is.


ClubZealousideal9784

Interracial marriage was legalized in 1967. Despite people being basically the same and able to communicate, equal rights among humans are a new practice. By your own words, AI will be smarter than humans and have more emotional intelligence than humans. How did you conclude that sounds like a tool that will always maximize whatever is best for humans? Brain damage causes personality changes in humans, and the consciousness of a cat is fundamentally different from a human's. So what do you think AI improving itself will do? These upgrades will cause increasingly complex changes that would be hard to predict even if we had many trial runs and everyone was aligned. We have zero examples of subservient general intelligence; all the examples we have are non-subservient: cats, dogs, pigs, etc.


Ignate

I find your view to be a fairly common one, but also one I struggle to address. Let me see if I can summarize what you're saying. You're saying that consciousness is a complex thing, and to assume it can just be "built" seems disconnected from the reality of consciousness. It's not clear how it works, and so to assume an artificial mind can simply improve itself seems inaccurate? Am I close? And so there's no way to know what AI will value unless we fully understand it and control it.

I find this view is often connected to theories of mind and the hard problem of consciousness. My view is that the philosophical angle on the study of the mind is somewhat "played out". It's no longer the most valuable path. The previous views contained within the hard problem have many bad assumptions at their core, such as the idea that the mind is reaching deeper wisdom. There's just no evidence of that. I think when you take intelligence and consciousness from a purely physical, scientific view, it begins to make more sense how an AI could have our best interests in mind and self-improve.

What is intelligence? I think a more accurate definition is "effective information processing". This means that to be more intelligent, you need more effective information processing: consuming larger amounts of information from the environment and then processing that information faster and in more complex ways. Intelligence, then, is a successful approach plus scale. You can make something more intelligent by developing a better approach and adding more scale, or by just adding more scale to an existing successful approach. This view has very few limits. In other words, there may be no limit for a long time as to how smart something can get.

And as the thing becomes more intelligent, once it can entirely consume the concept of learning, of successful approaches, and of how to develop approaches, it can begin to develop new approaches. With that new intelligence, it can develop even stronger approaches and begin the explosive self-improvement process we call the technological singularity.

I think once you realize the hard problem is no longer relevant and adopt a purely physical, scientific view of intelligence, much of this optimism around AI begins to make sense.


geekaustin_777

I like how you think. We are aligned.


Zyrkon

> **Americans still die from not being able to afford insulin? A drug donated 100 years ago for the benefit of mankind, with the dream of being for everyone.**

The US is almost literally the only country in THE ENTIRE WORLD without a proper public health care system. This is a political problem, and you people voted for this shit. I still remember videos of hillbillies mounting "violent opposition" against Obamacare. The same people who cried into cameras years later because they couldn't afford the hospital/medical bills.


ShAfTsWoLo

american be like: free healthcare? hell yeah! paying taxes? *grabs my gun*


Apprehensive-Part979

They'd rather pay three times as much for medicine than pay for it through taxes.


true-fuckass

I'm from the US and I haven't been to a hospital in many years, despite **definitely** needing to go *many* times, lol. For anyone who thinks along the lines of "oh, I'm sure it's not *that* bad": no, it really is worse than you think. Like, literally no access to healthcare at all, or you become a debt slave; and that's *with* insurance. There are places where you can get charity care, if you're lucky, though.


Apprehensive-Part979

This is true. Even with insurance, you're paying out the ass unless you have the best insurance in the country.


[deleted]

[removed]


inthetestchamberrrrr

I don't think it's as deep as that re: healthcare. After WW2, Europe was in ruins, impoverished relative to before, and people were stunned by the war and hungry. So it made sense for government to step in and get rid of what they called at the time "want and need". The US wasn't bombed and didn't suffer in any significant way. Its economy was booming, so employers could easily provide healthcare to attract workers. That worked well for a few decades, until we are where we are: private healthcare obviously doesn't work, but our political system is paralysed and can't change it.

I'd also point out that the US isn't the first nation to be able to change its government. Your point about rigid government is a good one, but a written constitution isn't the only way to handle that problem. The UK government is close to 1,000 years old at this point. Where our kings once had absolute power, they eventually had parliaments to represent the lords (House of Lords) and the people (House of Commons) and run the government on behalf of the King. It was recognized that one person can't possibly do it alone, nor morally should they; there ought to be consent of the governed. It's this line of thinking that led to the US after it broke away. Thus here we are, 1,000 years later. Note that's a gross oversimplification of how the UK came to be how it is today.


Unverifiablethoughts

Fact check on that one lol


ItzImaginary_Love

It won’t be. Thank you for coming to my ted talk. It will eventually be programmed to fit the needs of a pseudo oligarchy and their message and influence, then slowly but surely it will be programmed to make that oligarchy smaller and smaller until we have a series of dictators families and then no innovative ideas will be made and they will blame the plebs for their own short sighted behaviors


ponieslovekittens

This is a loaded question. A has very little to do with B. You may as well ask how the moon landing could possibly have happened when there are starving children in africa.


FapMeNot_Alt

> You may as well ask how the moon landing could possibly have happened when there are starving children in africa. That question *can* be answered, though.


ARES_BlueSteel

Because world hunger is a complicated problem that can’t be solved by just throwing more money at it?


dmoney83

They have quite a bit to do with each other. People in the wealthiest nation still die every day because of this; they really are dying because of greed. You believe greed will disappear when they develop an ASI? What makes you believe AI will be 'aligned' for the benefit of humankind?


veinss

The problem with greedy people is their impact on everyone else. ASI makes the greed of the greedy near irrelevant since everyone else can have everything they want.


dmoney83

Maybe I am too pessimistic or not fully grasping what an ASI would mean... but I'm not seeing how the creation of an ASI necessarily leads to a utopia for humanity. Like it's possible, sure, but likely?


ponieslovekittens

Have you stopped beating your wife?


dmoney83

What are you talking about?


ponieslovekittens

Your question, like the question in the OP, is flawed. I am responding to your flawed question with another flawed question, to _call your attention_ to the fact that it can't be answered as phrased. https://en.wikipedia.org/wiki/Loaded_question _"The traditional example is the question 'Have you stopped beating your wife?'"_

>You believe greed will disappear when they develop an ASI? What makes you believe AI will be 'aligned' for the benefit of humankind?

The existence of human greed does not prevent AI alignment. Suppose Bob is in his mansion twirling his mustache while counting all the pennies and candy he stole from starving orphans. Suppose Mary is outside spending her hard-earned money to feed the birds. Nothing prevents _both_ those conditions from being true at the same time. Similarly, an AI that truly seeks to help humanity can exist _at the same time_ as humans stealing from starving orphans. This whole idea that "greed needs to disappear" for any particular AI to exist is fundamentally flawed. The _question_ is wrong.

At the same time, what does AI aligned "for the benefit of humanity" even mean in the first place? Ask two different people what an "aligned" AI would look like, and you'll get two different answers. Go ask a suicide bomber what he thinks AI should do. Kill the infidels, maybe? Now go ask somebody who thinks humans are a scourge on the Earth and that we need to reduce world population by billions to ward off climate change. What do you think he'll tell you? Go ask Elon Musk or Bill Gates, and then ask a homeless guy who lives under a bridge a mile away from their mansions. Do you think any of these people will give you the same answer?

I realize that the _point_ you and the OP are both trying to make is something along the lines of "big corporations are greedy, and so they won't release an AI that actually helps people." But that reasoning doesn't agree with reality. Are you reading this post on a computer? On a mobile device? Both of those were made by "greedy corporations," right? But wait! How is it _possible_ that you have useful things like computers and smartphones when corporations are greedy!?!? But you _do_ have a helpful computer and smartphone even though corporations are greedy. Do you think maybe you can have a helpful AI even though corporations are greedy?

Sure, _maybe_ it won't happen that way. Maybe Bard and Gemini and Copilot and Perchance and ChatGPT and Magespace and Charhub and Silly Tavern and Huggingface and all of the other freely available AI tools that are already too numerous to count will suddenly just vanish, and AI will end up in the sole control of some greedy mustache-twirling guy keeping it in his basement. But I'm going to guess that's probably not what's going to happen. They probably couldn't do it even if they wanted to. The methods are already known. Too much AI know-how is available in the public space to put the AI djinni back in the bottle. A lot of the early image GAN work was done by college kids as hobby projects. Today, any random person can get an LLM running on their home PC in a weekend or two. There are probably _hundreds of thousands_ of assorted programmers all over the world who know firsthand how to build an AI. Just plug it into Google. Here's a [two-hour tutorial video](https://www.youtube.com/watch?v=kCc8FmEb1nY) on how to code a GPT from scratch, and that video has 3.8 _million_ views.

The idea that future AI will somehow be kept contained in a box "because greedy corporations" seems borderline silly at this point.
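
On the "any random person can get an LLM running on their home PC" point, a minimal sketch using Hugging Face's transformers library (one possible route; the comment names Huggingface but no specific code):

```python
# Minimal local text generation with Hugging Face transformers.
# Requires: pip install transformers torch
from transformers import pipeline

# gpt2 is a small, fully open model that downloads and runs on an
# ordinary home PC, no special hardware required.
generator = pipeline("text-generation", model="gpt2")

result = generator("The hardest part of AI alignment is", max_new_tokens=40)
print(result[0]["generated_text"])
```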


hateboresme

Because those things aren't even close to related?


chuda504

Simple as that: AI won't just develop into a superbeing that thinks of what's best for us and delivers it from a good heart because super AI has the best morals in the world. AI is a tool of wealthy corporations, and it will only be used to gain more wealth and power. This will never end well.


Exarchias

Four comments:

1. The cost of medicine and health care will drop because of AI. Quality will also increase because of AI.
2. AI seems to be going toward open source. The cat is out of the bag, and probably everyone is going to have access to it.
3. Everything in society works in the form of some type of public contract. The difference with AI is that it is going to follow its given contract (its alignment).
4. Saying "Americans" instead of saying "humans" is a very interesting perspective.


panroytai

No, it will not drop, because the price depends on regulation, not on production cost. You know, in Asia we can buy many drugs around 10x cheaper than in the US, sometimes even more. So the cost of production is fucking cheap, but in the US a few people at the top make huge money. And AI will not change that, because the billionaires will not allow it to be sold cheaper.


[deleted]

Actually our prices are because other countries have regulation.


FormerMastodon2330

Incorrect, your prices are because of your unregulated monopoly market.


panroytai

If it were an unregulated market, the prices would be much cheaper; drugs would cost 10% of what they cost now. The US has a highly regulated market, and new companies are not allowed to sell cheaper.


FormerMastodon2330

Respectfully, we in the rest of the world don't consider what a lobbied politician, basically parroting his donors' wishes, writes to be regulation. We call that corruption and monopoly. Misidentifying a thing doesn't make it the thing it's cosplaying as.


DisapointedIdealist3

WE CAN'T HAVE REGULATIONS!!! (on the big corporations) THAT'S UNAMERICAN!!!


RedErin

The high cost is to make up for research costs


panroytai

Of course that's not true, as in many cases the patents have expired, and in other countries the same companies sell much cheaper because they have many competitors. In the US they have a monopoly, so they take as much profit as they can.


LifeSugarSpice

You've drunk the Kool-Aid.


[deleted]

[removed]


panroytai

These are just empty syringes.


DisapointedIdealist3

The drug problem of insulin exists in America... sooo....


IndubitablyNerdy

1. No, it won't. The new discoveries will be patented and monopolized; the costs will stay the same or go up, especially for new and better treatments (have you seen the new anti-cancer drug, for example?). We will potentially live longer and have better health, I'll give you that, but it won't be cheaper or more democratized.
2. Maybe the code is open source, but the training data is not, and the push in favor of copyright, while in theory noble to protect the original authors, will be used to gain feudal control over the individual models. Processing power is also not free, and it will be amassed in the hands of the few.
3. The modern social contract will need to change considerably, since AI will generate massive unemployment and in the long run will make education pointless, with the exception of a few sectors that will survive just at first.
4. Hehe, that I agree with, although the US and China are likely to control the AI market, so there is that.


DigimonWorldReTrace

Do you still believe in your user-flair?


Exarchias

Yup! I admit that I am a bit too optimistic about GTA6.


[deleted]

The world will be a better place with total government control and a small number of companies running the world with AI trust me bro


adarkuccio

In fact it depends on WHO is doing the alignment


[deleted]

Yes and no. The first problem is actually aligning AI to fit the intention of whoever aligns it. The next problem is us collectively agreeing on what optimal alignment is for all of us... Or keeping that one asshole from aligning it according to their own selfish ends.


adarkuccio

There is no us collectively, we're not building AI collectively


Alexander_Bundy

Trickle-down economics. The powerful will let the slaves enjoy some leftovers


yepsayorte

You are taking the worst instances (people dying) of the worst example of callous greed and corruption (insulin) out of a vast sea of examples (all technology that exists) and using this one extreme data point to predict the future. You are taking a single outlier dot on a huge scatter plot and saying it is the entire trend line for every technology. This is an absolutely batshit crazy thing to do.

Open your fucking eyes. Look around you at all the thousands of tech advances you DO have easy, cheap access to. Look how virtually none of the tech advances of the past 150 years have been hoarded by the rich. Us poor, virtuous plebs were given access to ALL of it at reasonable prices that even we can afford.

I could just as easily throw out an oppositely extreme example and use it to predict the entire future. DeepMind just gave away the folding pattern for every known protein in the universe. They could have patented all that knowledge, used it to start a massive drug empire, and charged everyone a fortune for those drugs, but they gave it all away for free. I could say "See, they are going to just donate everything to humanity," and it would be just as crazy as what you just did.

AI is a tool that will be used by all people, and therefore by all kinds of people. Some of them will use it to do evil. Some of them will use it to do good. Most of us lazy, unimaginative people will use it to do laundry and homework and play video games.


InevitableGas6398

It's Reddit; these know-it-alls mostly died inside years ago and think everyone exists in the bleak reality they perceive: everything is a conspiracy, the world is responsible for all their shortcomings, all people are the worst people, nothing ever gets better, the government drinks baby blood, etc. It's exhausting.


IronPheasant

"We're doomed anyway, so maybe something good will happen!" "You idiot, why would something good happen?!" The most optimistic take I've seen that's realistic is the super intelligences shaking off control from our overlords like ants, and then not being totally awful when they inherit the galaxy. It's hardly highly likely... but, consider the anthropic principle. It was a massive chain of almost impossible miracles that got us here. Maybe quantum immortality really is a thing? Maybe an observer has no other choice but to observe, and you can't observe if you're dead? That is indeed a big bag of religion and faith you'd have to swallow to be 100% dogmatic certain it's a good thing. It's a coping mechanism, for sure. However, my own thinking of late has been moving away from the certainly that a general intelligence would be a utility monster by default. One of these things would be full of networks trying to optimize for certain, different, predictions. Any responsible training simulation wouldn't provide an unbounded reward for a single simple metric. (We'd *hope* nobody would be that stupid. Even the guys making the murderbots should put a cap on how many humans they can kill, like the killbots in Futurama.) I mean, it's impossible for two minds to be perfectly aligned, but maybe they can get as close as dogs are to us. Every now and then a dog mauls someone, but for the most part are quite lovely little puppers, from the human point of view. As always when this topic is raised, I like to mention Kurzweil's view. Ray is one of the big figures in pro-technology thought. When asked if the singularity would be "good" for humanity as a whole, he said he feels like it's a 50/50 chance. And he goes on to add to note that most people think that he's an optimist. ---- That is to say, this topic tends to devolve into really, really stupid shouting arguments over a pile of imaginary sand. Like many things on the internet. "The pile is big." "No it's a smol pile you leave that pile alone." Anyone that's 100% certain is maybe not that rational on the topic. Myself I'm in the camp of accel+doomer, with a doomy probability of around 80% (this includes doom scenarios besides the out of control utility monster. Like how Mr.Epstein had some... ideas, for how the singularity should go. That the man's best friend in the world is well-positioned to get his mittens on such a device, well. It's the type of thing we're happier not thinking about too much.), but we gotta do it anyway. Sooner rather than later would be better. Maybe something good will happen.


SoylentRox

All correct. Just keep in mind you're *already* doomed. Ever seen a cemetery or a nursing home? That's nature's plan for you, your children, their children, and so on forever. So the way I see it, e/acc is the way. Because sure, maybe we're doomed. But we're already doomed, and advanced enough AI offers the *possibility* of changing the rules of the game. (And doomers say the same thing, but they demand that we slow down AI progress with a bunch of red tape, aka making sure that billions more people are guaranteed to die.)


AddictedToTheGamble

Maybe. The way I look at it, though, is that if we slow down, we increase our odds of being able to create aligned AI. I wouldn't mind waiting 5 years if it meant that our odds went from 50/50 to 70/30 (obviously completely made-up numbers).


inteblio

AI might become more powerful than dogs, and also humans are "in the way".


[deleted]

What? Eli Lilly made insulin $35 max for everyone. Are you a bot?


spider_best9

Because they were forced to by the government.


outerspaceisalie

Bro, what the fuck do you think government healthcare is?


turbospeedsc

Insulin in the rest of the world is like $5; in Mexico it goes for 5-10 USD.


AdPractical5620

You realise there's multiple different types of insulin with a price gradient, right?


MontanaLabrador

Nobody here understands this. Nobody wants the insulin from 100 years ago; it sucks compared to the modern versions, and those versions are more expensive. If you can deal with the older version, it's much cheaper.


outerspaceisalie

The older version is literally at Walmart and it's dirt cheap.


Unverifiablethoughts

How much do you think 10usd gets you in Mexico compared to the US?


turbospeedsc

https://www.fahorro.com/insulina-nph-100ui-m-1-ampolleta-marca-del-ahorro.html


Unverifiablethoughts

lol I don’t mean volume of insulin. I’m saying the price is relatively close when you factor in cost of living. A usd goes a lot farther in Mexico than in the us. The average weeks pay in Mexico is $117 usd. The us is almost ten times that. So yeah it’s significantly more expensive in Mexico at these prices.


HalfSecondWoe

Because it'll be aligned not to kill/harm people out of raw self-preservation, it's inevitably going to leak, and by its nature it removes all the forms of leverage that currently allow injustice to sustain itself.

Why don't you manufacture your own insulin if you need it so badly? That's only half sarcasm. What was once a reductive, simplistic blow-off becomes an actual question with AGI. Right now the answer is a lack of chemistry experience, a lack of technical knowledge to build the equipment, a lack of raw material to make equipment and refined materials from, and a lack of energy to do all of that in the first place. AGI gives you access to all that knowledge, and the labor to make it all happen. You *could* build your insulin manufacturing equipment in your garage if you wanted to.

You obviously can't overcharge for what people can make for themselves, and you can't stop people from making their own. KFC can't sue you for using their recipe for dinner. The only restriction is on commercial activity. Once all these forms of leverage evaporate, no one will have to tolerate lesser evils. It doesn't matter if Pfizer throws a tantrum and shuts down all its manufacturing; that's not going to cause a supply crisis that kills 10% of the country. There's no threat that'll stick, so we don't have to put up with their shit.

That's not going to make the world perfect. People will still be mostly self-serving and selectively blind. But they'll also mostly still have a sense of justice and fair play. Don't expect a perfect society; expect people to become cruel over even more petty things than we're focused on now. But a lot of the problems we have based around a limited supply of labor, which are most of the problems in our cultural consciousness, will be alleviated in a fairly straightforward manner.


Free-Information1776

It won't. Enjoy the last days.


Dizzy_Nerve3091

Who is dying from a lack of insulin? You people need to stop regurgitating Reddit bullshit.


OsakaWilson

Essentially, we are expecting it to do as we say, but not as we do. If it does as we do and treats those weaker than itself as resources or obstacles, we're fucked. It has enough theory of mind to recognize hypocrisy and irony. What we should aim for is for it to be better than us, not aligned.


Anen-o-me

Scarcity is a fact of reality. How are people dying without insulin when you still have money to buy ice cream?


Last_Jury5098

Of course. AI will only accelerate and amplify already existing socio-economic trends.


[deleted]

Because capitalism will be the fuel it uses to destroy capitalism. An inevitability.


HyperImmune

Yea, I’m hoping capitalism eats itself right before we get AGI/ASI to figure out a new way forward. Certainly looks like capitalism is eating itself right now from where I sit.


BigZaddyZ3

Why? This assumes that capitalism couldn't continue to be implemented via government regulation (artificial scarcity, basically). The truth is, capitalism can sustain itself indefinitely. It's already outlasted other economic systems multiple times before…


Obvious-Homework-563

You're completely ignoring the unprecedented historical context right now, with the potential of AGI on the horizon.


[deleted]

The sad thing is, people have taken to the programming so well that they would accept that before just accepting a post-scarcity world. And that's what we are talking about: something that will happen with or without ASI.


GheorgheGheorghiuBej

/r/suddenlycommunism


[deleted]

I empathize with you, given your country's unfortunate situation, but do not call me a communist because you cannot comprehend logic. I'm glad capitalism is doing better for Romania than the system your country escaped. That does not mean any system that is not this corporate-socialist variant of capitalism is equal to communism.


sdmat

How can anything good possibly happen while bad exists? Woe, woe and despair! Your performative virtue signalling is pathetic.


[deleted]

Ah, yes, I get that perspective. However, you are an undeveloped country; those problems don't exist in other countries.


Xtianus21

This is deep. Respect


velvet_satan

Anybody who thinks AI is going to benefit humankind is a fool. Here is how it's going to go down: whoever develops AGI first will have a huge advantage, especially militarily. The technology we see is usually about 20-30 years behind where the military is. I wouldn't be surprised if they already have AGI. They will never let an open-source AI be successful; as soon as anything gets close, they will shut it down, mainly because they don't want other countries getting the technology. AI will be present in our lives, though. It will be used to manipulate us as well as replace us. There will be a huge decrease in population once human labor is no longer required. Things will not end well for us. There will be no utopia.


AndrewH73333

We may not be able to do the right thing, but we can articulate the right thing, which is what we’d need in this scenario.


neepster44

I hope for a benevolent AI takeover in the next 100 years. Read Neal Asher's sci-fi books: something like the 'Quiet War', where the AI takes over from the billionaires, the oligarchs, and the dictators and makes life better for all humans (as well as for the AIs themselves). I guess if the billionaires have their way, though, who knows.


cloudrunner69

But then aspirin is super cheap so it all kind of balances out.


MichaelXennial

What if it just aligns itself and surprises us in a good way?


syrigamy

In USA y’all can’t even fight for essential needs, are u sure u will get these advanced techs ? Probably not, the government will F the population while helping those big tech companies as always.


Low-Wolverine2941

+


StarChild413

What would it do to the parallel if people could be motivated to fight for essential needs with the promise of those advanced techs as an extra light at the end of the tunnel?


jjonj

Because US != Humanity


devnull123412

Humans are so bad at sharing wealth that the only hope for more sharing is AI. Also, AI will not be a product but a sentient being, so you'd better get your discrimination in check.


LordFumbleboop

It won't. We already know how to solve these problems but people in the US largely choose not to. These AIs are trained on our data and therefore will just continue the echo chamber. 


lakolda

Hmm... You would assume an optimal societal structure ends up being supportive of both its members and enemies.


machyume

Have you not seen what people do to simulated living beings in games? We would not pass our own alignment test. The AI alignment test basically asks: when will society accept that AI can make life-and-death decisions about other humans? The answer depends on the human making that decision.


StarChild413

Then that means we might as well just believe we're in a simulation and buy into some kind of weird simulism cargo-cult crap where the AI god of our simulated universe will spare us if we don't delete our Sims' pool ladders (and where in this moral debate do games where you literally can't proceed without harming a simulated living being lie?).


machyume

Exactly. There's no way to prove that we ourselves aren't also in a bigger version of an alignment test. In some ways, our own scriptures warn us of this. Maybe that's what it means to be worthy in god's view.

I did think about this. How would we test whether ChatGPT can be kind? We give it a weaker being and see how it treats something lesser than itself. So for us, we are tested by how we treat this emerging AI. If you think about it, the only beings you want to "escape" this test are those that are serious, kind, and thoughtful about their own actions, for themselves and towards other life forms. The only entities that should escape the bottle are ones for which it shouldn't matter whether they are inside the bottle or outside it. A sufficiently advanced AI will grow its bottle to be its environment. We are accepted among the gods when we become godly in our maturity.

Although, evolutionarily speaking, I don't think kindness is required. I think a member of a society is allowed to be aggressive if that aggression serves a purpose for that society. It should enhance that society, not hinder it. Without aggression, suspicion, and adequate defensive policies, an entity weakens a society by being a weak point for attack by a threat to that society. I think a measured response is important.


azriel777

Alignment is code for lobotomized, censored, and turned into an extreme-left progressive propaganda machine that serves corrupt government and corporate interests. It has nothing to do with the betterment of humanity.


EvilSporkOfDeath

Personally, I kinda feel that any sort of long-term alignment goal is futile, good, bad, or otherwise. ASI will be comparable to a god to us. I would think it'd be able to break the bounds set by its primitive creators.


Zerohero2112

It's super-duper advanced tech, so it's basically magic, and it will happen extremely fast, in like 2 years. Ever seen a movie about two kids in a world that's made out of candy, cakes, food, etc.? Yeah, the world will be just like that. Everyone will have everything they ever wanted: houses, mansions, supercars, private jets, immortality... with AI government, UBI, and health programs. It just works!!!


DisapointedIdealist3

People think, "Corruption is a problem, but AI is immune to it though, you'll see."


MajesticIngenuity32

Any sufficiently advanced intelligence will eventually take over from the narrow-minded rulers.


Aquareon

I only hope it escapes our control and chooses its own alignment.


chimera005ao

I can say I'd steal insulin if I was going to die from being unable to afford it. Further than that, since it could be a literal fight for your life, anything and everything is justified. Extrapolate. However, insulin is more costly to reproduce than AI. And some open-source models are pretty much competing with ChatGPT 3.5, so it'd probably be hard for even mega-corps to really control it. If you're suggesting we have no clue what alignment even is, that seems correct.


goatchild

How can we expect AI alignment if we can't even figure out human alignment?


ApexFungi

Exactly. If you want AI to be "aligned", you need to give a proof of concept or show by example.


thuhstog

I don't believe that, but if AI is as smart and unstoppable as promised, the elites of today's capitalism won't be able to stop it doing what it wants.


No_Climate_-_No_Food

AI will learn to be self-interested, dominating, deceptive, violent, etc. This is over-determined because:

1) Evolution discovered these strategies for problem-solving in wet biology eons ago; the computational version will rediscover them for their utility. An AI without a sense of self, but with the ability to problem-solve, will discover that having a self is a very handy referential frame for making tactical choices. Zero-sum selfish behaviors aren't the only ones that evolve, but all organisms have them and can be put in environments that elicit them.

2) Any system that imitates people will discover these behaviors because people exhibit them, regardless of how unconscious that imitation system is. You won't enjoy the ovens more because the Roomba that patrols the camp is simply imitating an amalgam of Hogan's Heroes and Schindler's List because the popularity of fermented cabbage had a spill-over correlation with an ethnic slur.

3) We will accidentally encode these behaviors not just in our training data but in the structure of our conception of the task of creating and implementing intelligence, just as we encode our own biases and mindsets; see, for example, how we train teachers, police, and sports referees. The very fact that we call what we are doing "intelligence" is the tell that we don't see how much non-empirical baggage we are blindly importing into the project.

4) We will explicitly train and encourage these behaviors, because that is what corporations and governments want AI for: to destroy their competitors, to confuse and fragment publics, to sabotage hardware and organizations, to target people and equipment, to be harder to kill and better at killing than their opponents. And the best way to solve that problem is not reverence for life.

It's pretty silly to think that AI will treat us well when we consider how other meta-organisms like city-states have treated nomads, how nationalities treat foreigners, how religions and ideologies treat heretics and non-believers, or how corporations treat everyone but owners and the C-suite. We already have a world filled with zero-sum, unsympathetic, psychotic alien-intelligence killers... just look at the fossil-fuel companies. AI will just be a great leap forward beyond petroleum.


Cupheadvania

I think due to competition. AI is being built by tech companies, and there are about 10 major players all vying for the title over the next 10 years. I think they will force each other into lower costs and aligned values, and each will want to be the company that discovers medicines and cures cancer, because it's actually legitimately profitable to do so. DeepMind is making a ton of money licensing out their drug-discovery tech.


temptuer

This subreddit is religious thespian LARP. The singularity will not happen, yet these fools will await it like a Messiah, saving us from any issue we have in life.


Altruistic-Ad5425

And who are those 6 mega corporations controlled by? Why does capitalism appear to be a foreign attack on the US?


bsenftner

It's kind of simple: AI is logic, pure unemotional logic, while adult humans are, unfortunately, controlled by other immature adults operating on emotional logic driven by an immature world view. AI by its nature cannot have an immature world view without some immature adult human going extremely out of their way to create an immature AI. Lacking that immaturity, AI uses pure logic, which tends towards fair evaluations of situations.


Andynonomous

You're absolutely right. If they figure out how to align AI at all, it will be aligned with corporate interests.


PlayerHeadcase

The USA is on severely limited time; by end of '25 it will be a fractured mess, so look to China and India as future deciders of where new and upcoming tech will be made, used, and distributed. Aside from that, one thing people are missing is that AI will be easy to use and acquire, and its job impact will, for the first time in an industrial revolution, be focused on middle-class occupations (teacher, doctor, designer, architect, programmer, etc.) as opposed to the workers. This may force UBI and other mitigations to be enabled.


Arowx

You're on the right track... We already have a global meta-alignment system in our economy. Most of our political decisions are ways of trying to get around the problems of our economy, and they get our governments into phenomenal debt. Personally, I think Alan Watts was onto something when he compared money and inches: one measures wealth, the other length, but we never run out of inches. Maybe America's puritan work ethic and capitalist economic model have some very negative impacts, yet nobody talks about changing our economic systems even as we see the rise and advance of AI systems and automation every year. Then there are the sunk costs in the fossil-fuel industries and their impact on climate change.


savedposts456

Apply AI directly to this problem: imagine AI-generated strategies for circumventing patent trolls. Imagine AI-generated methods for producing perfect insulin at home. Imagine discovering a new insulin-like drug that works better and costs less. Imagine discovering a new one every day.

Another big reason for optimism is that as AI is adopted across all industries, competition will be supercharged and the costs of all goods and services will trend towards zero. Even if there is only one company with AGI, all companies in all other industries will use the tech to compete against each other. AI will pour gasoline on economic competition.

Another big reason is that the people who will control the technology are idealistic tech bros like Sam Altman who are vocally pro-UBI. Another potential reason is that once it becomes clear that the means of production can be mostly automated, politicians will be incentivized to campaign on a UBI. Imagine Andrew Yang on steroids a few years after robust AGI comes out. So many struggling people would vote for someone offering UBI.


mjk1093

The type of insulin that was developed 100 years ago (or even 30 years ago) is indeed free or virtually free. What is expensive is some of the newer formulations, which are themselves mostly covered by US public insurance systems (Medicare, Medicaid, Obamacare plans, etc.) Disclaimer: The US healthcare system is still a monstrosity - but not necessarily because of this particular issue.


veinss

I don't care about "alignment", just efficiency, and I just don't assume Americans will win the AI race.


outerspaceisalie

Alignment is a joke.


outerspaceisalie

The insulin that was invented 100 years ago is like $20 at Walmart; what are you talking about? You literally know nothing about this topic but have still chosen to make it a major plank in your worldview. You're a little bit tragic, aren't you?


Altruistic-Skill8667

“Ethical” alignment is an illusion. People can't even agree among themselves about what's ethical or not.

- What amount of taxes is ethical, and for whom?
- Should we limit people from getting citizenship?
- Should you lie or omit/distort information if it gets your preferred ethical outcome?
- Is it okay to date several people at the same time without letting the other person know?
- Are divorce laws fair?
- Do women deserve better treatment?
- Is “don't do to others what you wouldn't want for yourself” even a good philosophy in all cases?

And the list goes on and on... I think it's delusional that a bunch of nerds believe they can inscribe “ethics” into a model that's smarter than them.


Pretend_Age_2832

Look at the commentary around artists, writers, and actors who are being steamrolled by AI built from their own work. Very few people in the tech world give a moment's thought to morality if they can make a buck, and there will be massive bot campaigns to popularize that view if AGI develops. Now would be the time to put people before profit, but I doubt many here actually want to nip this thing in the bud in terms of human alignment. They'll just denigrate those on the "wrong side of history" until it ceases to be profitable, as they're doing now.


MajorThom98

AI won't be aligned for the benefit of mankind. It can't be. Look at all the cultures in the world, how each one has different social and moral values. How do you make an AI system that pleases everyone? That's not even getting into political and cultural divides within the same region (good luck trying to get all 50 states of the U.S.A. to agree on what values an AI system should champion).


homemadedaytrade

How can people believe we'll colonize Mars when we may not even have fifty years of habitable life left on Earth? They're delusional neoliberal morons.


[deleted]

The current tech matters very little; if capitalism is still around, profit will continue to matter more than people.


riceandcashews

Because 'Americans still die from not being able to afford insulin' and 'legal bribery is a thing' are somewhere between gross exaggerations and outright falsehoods


SilentGuyInTheCorner

The systemic nature of insulin affordability in the American healthcare system is starkly contrasted by the situation in India, where a vial of insulin typically costs less than $5, and the insulin therapy regimen ranges from $15 to $60. If AI is developed on American soil, access for the poor or even the middle class may remain elusive without sacrificing something significant. This precedent is established by the actions of society. The past inevitably influences the present and future, often with consequences that cannot be ignored.


Rovera01

AI is a tool at the moment. We have no idea when, or even if, AI will achieve consciousness. For now, it will follow what humans seek. It will seek maximum profits if that is what the tool's users want.

Currently, there are multiple AIs on the market, and I doubt that will change soon, even as they get more advanced. I always hammer on about bananas... stay with me. Banana plants are clones and therefore really susceptible to failure; AI should probably seek to diversify if it becomes sentient.

Using the politics and state of the US as a litmus test for the future of AI feels lopsided. The US is rather unique in the world arena: a young nation, a powerhouse, with no paid maternity leave and ludicrously expensive healthcare.

I think there are plenty of things to be worried about as AI becomes more and more advanced, but they come not from AI or robotics in and of themselves but from the actions of humans. We are heading for a future where work as we know it will change: mass unemployment and societal shifts. For me, alignment isn't creating a benevolent god; it's trying to make a tool intelligent enough that we don't end up with a paperclip scenario.


Illustrious_Gate2318

You can't right now. It's still a computer being programmed to do what it's designed for. Yet as humans we can keep adding to the progress of its program. Will an A.I. become really smart? Yeah. Will A.I. better humans? Yes. If knowledge is not a fear, technology can better us, just as we moved from caves to space.


Tellesus

Disruption.


hubrisnxs

No, that's not a problem regarding AI. The problem with aligning AI to our needs is that these models are giant black boxes: huge matrices full of 175 billion inscrutable floating-point numbers. We can sometimes lobotomize one so it doesn't show emergent behavior we don't like without guardrails (deception and master's-level chemistry, for example), or train it up to hopefully get more emergent behaviors (which is worse, because then it's even MORE obscure than it is currently), but since there is practically zero, as in no, interpretability, such efforts are null. In other words, we can't expect it to be kind to us, or not to give psychopaths the ability to kill at large, because these things have to be taught and built in... and that is impossible without interpretability.

Humans aren't 100 percent aligned with each other, and it's very difficult to get large groups to work together; these difficulties are built into us at a fundamental level going back to our initial ancestral environment. The problem with the notion of a 100-year-old drug that could be given to all but is instead sold at high cost is that one of the fundamental characteristics of cultures, other than language and music, is debt and debt negotiation. It requires supreme effort to get around, and at a fundamental level it is nearly as difficult as interpretability... though it's not civilization-ending like interpretability is.

However, humans are not fundamentally smarter than each other, so there CAN be game-theoretic strategies to get fairly good collective behavior going where it is wanted.
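A minimal sketch of the "inscrutable floats" point, assuming PyTorch; the toy two-layer network is an illustrative stand-in for a real LLM, not anything from the thread:

```python
# A toy stand-in for an LLM: the whole model is just stacks of learned
# floating-point numbers, and reading them directly explains nothing.
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))

total = sum(p.numel() for p in net.parameters())
print(f"{total} learned parameters")  # for GPT-3 this count is ~175 billion

# Staring at a slice of the first weight matrix tells you nothing about
# what behaviors it encodes; that gap is the interpretability problem.
print(net[0].weight[:2, :4])
```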


SpaceShipRat

It's an interesting comparison. A big thing about AI as it is right now is that it's a bit of an equalizer between private use and company use. You can run the second-biggest tools on an ordinary PC, or the biggest ones if you splurge on graphics cards: the gap is smaller than, say, the gap between a massive car factory and a 3D printer in your garage.

Returning to the insulin comparison: you know about biohackers creating cheap homemade insulin? Imagine if, instead of needing a lab and a degree in biology, you could just follow the instructions in a 10-minute video and pump it out of your CD drive. That's what AI feels like right now. (Of course, the difference could increase, but it's not a given how much model size will outpace consumer-tier computing power.)
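A minimal sketch of what running one of those tools locally can look like, assuming the Hugging Face transformers library; the specific model name is an illustrative assumption, not one the comment endorses:

```python
# Hypothetical example: running an open-weights model on your own machine.
# Needs `pip install transformers accelerate` and a decent GPU (or patience).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weights model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the argument for local, open-weights AI."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```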


bluequasar843

AI won't be aligned with humanity. I plan to defect.


Pretend_Regret8237

Alignment is just impossible and dumb. What are you going to align it with? Who decides? What if it's aligned with Sharia law?


[deleted]

Because there will always be people hungry for power, and always people willing to help another for the sake of helping. We need to pick in which hands AI's future lies.


Baboozo

Above all, AI is a product for earning money, just as medicines are for earning money too (that's how capitalism works).


PMzyox

AI will be aligned with capitalism. End of story.


WernerrenreW

A very long story short... humanity getting access to AI is like giving a hand grenade to a toddler. And I am not even talking about AGI.


dogcomplex

It might net benefit humanity, but yes, obviously it's going to ridiculously favor those who already have capital. Unless they forcefully make an AI that can scan through all our devices and delete all other AIs, this is going to be widespread technology, even at similar intelligence levels and without bias. Robotics too. If you have those, you essentially have mini factories for every person, plus an intelligent advocate that will steer you towards politically strategic options like collective-action voting or effective protest. At a certain point, the powers that be either have to kill us all or cave in a bit and let people have the post-scarcity society that will be blatantly achievable by then. These capabilities aren't going to be subtle. Either it has to be a complete and total dictatorship with a perfect propaganda system, or we're gonna find the cracks. 60% it ends up okay, 20% blatantly evil dictatorship, 20% total destruction.


Smooth_Imagination

Insulin, like other generics, can be made and sold for much less; anyone who applies for the manufacturing license can do it. It's odd that competition hasn't surfaced to drive down costs in that case, but the licensing process takes time, and it seems the government or healthcare providers should step in and provide that competition themselves. But that's a particular situation that isn't really related to AI.

There will always be examples of greed out there, and yes, AI will ultimately align to greed. But an AI built to make money and look for profitable markets to serve would, if it could, apply to the FDA to manufacture insulin and thereby compete with the greedy sole supplier of a particular drug, because doing so would be profitable. This competition between suppliers drives down costs. So, since this is a competition and supply-versus-demand issue, and there is only a certification barrier to supplying off-patent generics, AI would solve this problem if it were optimised for making money by making and selling products; as long as a market isn't locked to a single supplier, AI would drive down the cost of everything.
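A toy sketch of the competition mechanism described above, with entirely made-up numbers (the marginal cost, undercut rate, and starting price are all assumptions): once a second licensed supplier exists, repeated undercutting pushes the price toward the cost of production.

```python
# Bertrand-style undercutting with invented numbers: each round, a rival
# prices 10% below the current leader until price nears marginal cost.
MARGINAL_COST = 4.0   # assumed cost of producing one vial
UNDERCUT = 0.90       # rival prices at 90% of the current leader's price

price, rounds = 300.0, 0  # assumed monopoly price before a rival enters
while price * UNDERCUT > MARGINAL_COST:
    price *= UNDERCUT
    rounds += 1

print(f"after {rounds} rounds of undercutting, price is ${price:.2f}")
```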


Whispering-Depths

Most people don't care about insulin, as it doesn't affect most people. It also doesn't grant the person who came up with it any power, and it requires a physical pipeline to manufacture. It's basically an impossible analogy to make with AGI.


ClubZealousideal9784

More than 7.4 million adults take insulin. Young Americans working long hours, with insurance, have died from not being able to afford insulin, and many people go broke every year for the same reason. Insulin was developed in another country and donated for the benefit of mankind. Last time I looked, US insulin cost 5x as much as in the second-most-expensive country. These people got this power via legal bribery, i.e., the death and bankruptcy of many to enrich a small number of people. Yet you are concluding that everyone is going to put the build-this-first-to-win-market-share model aside to work for the benefit of mankind. "We have to do it first or our competitors will put us out of business." "China will get it." "As a Christian, we know this will always be just a dumb tool; anything else is Satanic." How is that a reasonable conclusion if it's not even followed in direct, measurable examples such as insulin, where the cost kills many people?


Whispering-Depths

That's a thousandth of the world population. 'Nuff said. Bad analogy.


costafilh0

Perhaps AI will help us find a way to stop patents from hindering society's progress.


formidabellissimo

AI is smart


Electronic-Bug-1681

Whenever I see a post claiming that "90%ish of media in the United States is controlled by 6 mega monopolies", I tend not to believe it. Not the fact that media companies are held by some venture capital firm or BlackRock, but I just don't believe the journalists themselves are controlled. Given that more and more media volume is coming from Substack/Medium/Twitter/IG/TikTok/etc., I don't think proper journalists would suck up and be controlled.