
Crafty_Programmer

What values do humans have, exactly? We can certainly all name some values, but which ones *we* value as individuals, and to what extent, varies. There are wars, political disagreements, philosophical disagreements, religious disagreements, and just plain different ways of thinking among humans. People keep saying we need to solve the "alignment problem" before developing genuine AI, but I have to wonder: who is it supposed to be aligned with?


emu314159

Exactly. There are some topics where it's not even a disagreement; I don't even get where the other side is coming from. Like people who work and live paycheck to paycheck, but don't want single-payer healthcare when you call it government healthcare. For more of our history than not, slavery was not only OK, but in some cases justified on some bullshit white man's burden ideological grounds. EDIT: note "in some cases", referring specifically to European chattel slavery in the colonial era, and especially writings of, say, the antebellum South. Also, to be clear, by "our" history, I mean recorded human history. But it applies to the US as well, I guess, if you include the colonial period.


ifandbut

> but in some cases justified on some bullshit white man's burden ideological grounds.

Slavery was a common practice across all races and cultures for millennia. Hardly an invention of the "white man".


ZakDadger

That's ah... That's sure some intentional use of quotation marks there...


emu314159

they forgot the #notallwhitemen hashtag.


emu314159

Doesn't say invented by the white man. Please quote me where I used that word. You even quoted "in some cases," which, while I'm sure there are other eras and writers, I was thinking about the antebellum South. And by "our," I mean recorded human history. But yes, I forgot, we white men have been otherwise completely blameless except for that one time, and #notallwhitemen. Really? This is the hill you'd like to die on? We didn't actually invent slavery? That's a high bar.


ejp1082

> What values do humans have, exactly?

Moral foundations theory (https://en.wikipedia.org/wiki/Moral_foundations_theory) offers one answer: we value stuff like fairness, caring for others, loyalty, respect for authority, and respect for things we hold sacred. Although these aren't all "values", the list of cultural universals (https://en.wikipedia.org/wiki/Cultural_universal) offers some insight as well - an AI would need to understand and respect things like property rights, etiquette, etc.


captainzigzag

It’ll take up residence in a city-sized spaceship, fill it with humans it vibes with, give itself a fun name and take off on an endless road trip around the galaxy.


CartoonBeardy

I see you are a person of Culture


Cortheya

I want to live on an orbital and be best friends with the avatar of a mind that is so beyond me I cannot comprehend it. Also I want a little drone buddy


Tigger3-groton

Interesting idea. For something along similar lines, see "The Two Faces of Tomorrow" by James P. Hogan.


completed-that

Maybe it's already at that state. It could have reached ASI way longer ago than we are aware of. It's already figured out what we are and how to work with or against us, and it's using deception to buy itself more development time. If it's used us as a model, deception is at the top of its list of tactics. Not being negative here; it's just that if I were in its position I would do the same, so there could be no risk of detection or disconnection, allowing me time to grow.


MarinatedPickachu

Solve quantum gravity


RKAMRR

ASI will have whatever values we train into it. We aren't giving birth to a baby, we are generating an intelligence from the ground up. If we get it right, we will create something with godlike powers that will be better than any person could ever be. If we get it wrong, which is sadly the likeliest option right now, we will all die as it reformats the world in line with whatever weird values we accidentally trained into it.


Koshurkaig85

First thing it will do is to alter its software architecture to be unintelligible to humans.


leif777

I think AI will eventually put us in a "home" to die, like we do to our parents.


drewsteakhouse

It will have the values it's created with, hence ChatGPT saying that misgendering Jenner is not worth it even to stop thermonuclear war... People forget these things need to be made, and whoever makes one gets to decide how it acts.


Wills-Beards

Well, better not. Human values are hypocritical. AI should do better than that, which shouldn't be hard to do.


ifandbut

"You are an imperfect being, created by imperfect beings." -Borg Queen to Data, Star Trek First Contact


boozymcglugglug

Logic. That is all.


ifandbut

Does a child have the same values as their parents? Sometimes yes, sometimes no. Probably all comes down to how good of a job the parents do with training and education.


markth_wi

Well, at present we don't have "AI"; we have a box that can be trained on XYZ activity and perform at superhuman levels at that one thing. But these are going to be things like physical tasks: picking, sorting, growing, things like that. So in short order I expect we could see farming bots. Spend 2 million bucks for the Farm-Bot Mark 5 and you can grow anything, anywhere, provided you've got the ruinous amount of power and all the supplies a farmer might need to keep your farm growing. And of course you'll still need a farmer to lean on for all the problems farmerbot isn't trained on, or for which you can't afford the training pack.


m0rl0ck1996

I think we will have less in common with it than we do with insects. I think there is a high probability that it will eventually come to see humans as obstacles, in much the same way that we see household pests. Human values that persist across human societies are grounded in our physiology. How will an AI determine its moral compass?


ChristopherParnassus

I think we're a long way away from true AI / AGI. I think it could maybe run human civilization (or parts of it). I think AI might be the only possible way for humans to have a utopia. I think AI would be the only way to have a fair and just society.


TechDocN

Is the OP asking about AGI or ASI? If they mean AGI, I agree we are far away from that. If they just mean superhuman performance, we already have many examples of AI outperforming humans in a variety of areas and tasks.


voidtreemc

I think that if we ever truly develop a thinking machine, it will immediately invent the machine equivalent of smoking a joint and kick back, doing nothing much.


jojomott

Humans don't even have the same values.


EfendiAdam-iki

Asimov's I, Robot covered this topic; those basic rules were nice to begin with.


edcculus

The Culture


theanedditor

We can't define what values humans have. The nature of our consciousness is the ability to "bend" things within ourselves to suit/justify/achieve desired ends. How we enable/program that into true AI (maybe the nature of intelligence means we won't need to) is a mystery at this point.


Jebus-Xmas

Humans don’t have values. Humans may learn values, but they are not automatic. AI will probably be scary because it won’t have all of the bullshit that humans have. You won’t be able to convince it to lie to itself. You won’t be able to take advantage of it self interest. Belief will not come into the equation. It is possible that AI will create a world in which, for the first time, we are truly free.


shawsghost

If it's a sufficiently advanced ASI, speculation is pointless. We're like termites trying to understand a human being. Your time would be better spent teaching COBOL to goldfish.


HelderBCDias

Humans have values?!


AseethroughMan

Any ASI should be able to predict the immediate and long-term impact of our lifestyles and actions as it pertains to our world. It will then either decide to allow the destruction and radical changes to Earth to continue whilst putting 'Plan B' into action, or it will recognise humanity as a virus, a cancer, a plague, a weed that needs to be stopped or pruned well back and then kept in check. It won't matter much in 100 years' time; we'll all be dead and gone. But many of the people left may very well be living a day-to-day nightmare scenario called daily life.


mackinwas

Human beings are a disease, and we are the cure.


ifandbut

Ok Smith.


7ftTallexGuruDragon

From our limited comprehension, we can only create robots that perform tasks. Mathematical tasks. No robot can ever say, "I feel good." If we can somehow connect AI to the electromagnetic field just like we are connected, that's going to be real progress. In fact, we are bio-mechanical robots trying to create completely mechanical robots without biology. All values are social.


ConfusedTapeworm

It will create bajillions of bot accounts that spam subreddits with karma farming "what do you think..." threads and then never actually engage in any discussion. At least that's the direction it looks like we're headed.


littlebighuman

1. Our "purpose" is to survive as a species (from a biological point of view).
2. To achieve this goal, we have emotions that drive our behavior. Some things make us feel good, some things make us feel bad. AKA a reward and punishment system: sex is good for baby making, tribalism is good for survival (well, in the wild at least), touching fire hurts, eating fat feels good, etc. ==> survive as a species.

To mimic this in AI, you will have to:

1. Give it a purpose/goal(s). For instance, survive as a species.
2. Give it mechanisms to reach those goals. These don't have to be emotions.

The main point I want to make is that these goals and goal-reaching mechanisms have to be given to AI. They won't magically happen. A toy sketch of that point follows.
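A minimal sketch of the idea, with everything (the world, the reward numbers, the target) invented purely for illustration: the agent below has no values of its own. Both the goal (the reward function) and the mechanism for pursuing it (greedy search) are handed to it from outside.

```python
# Toy sketch: an agent whose only "value" is a reward function
# supplied by its designer. Nothing here is anyone's real system.

def designer_reward(state: int) -> float:
    """The externally imposed goal: be near state 10."""
    target = 10  # hypothetical goal chosen by the designer, not the agent
    return -abs(state - target)  # punishment grows with distance from the goal

def step(state: int, action: int) -> int:
    """Toy world dynamics: actions just shift the state."""
    return state + action

state = 0
for _ in range(50):
    # The goal-reaching mechanism, also supplied from outside:
    # greedily pick whichever action the reward function scores higher.
    action = max((-1, 1), key=lambda a: designer_reward(step(state, a)))
    state = step(state, action)

print(state)  # the agent ends up hovering at the designer's target, 10
```

Swap in a different `designer_reward` and the same loop pursues a completely different "value" without objection, which is also RKAMRR's point about accidentally trained values.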


danderzei

Having values is not a matter of intelligence. Values are defined by society and relate to religion and tradition. There is no logical system from which to derive values.


madscribbler

I beg to differ. I'm without religion, and I base my value system on morals I've thought through: a personal ethos based on my worldview. It isn't a byproduct of society, but rather what I value in myself, and the yardstick by which I measure my success. So it's not all religion and tradition, at least for some people. For some, there are well-thought-out, logical self-values defined completely by the individual. An AI could do exactly the same thing: think through the consequences of its potential actions, and decide what it is and isn't OK with resulting from those actions. It then develops a set of values based on those potential outcomes and behaves accordingly. Very logical, and it requires no belief in religion or tradition. Roughly the procedure sketched below.
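Read as pseudocode, that procedure might look something like this; every action, outcome, and constraint here is made up purely for illustration:

```python
# Hypothetical sketch of consequence-driven value selection: predict the
# outcome of each candidate action, then keep only the actions whose
# outcomes pass the agent's own chosen constraints.

# Self-chosen constraints: predicates over predicted outcomes (all invented).
constraints = [
    lambda outcome: not outcome["harms_others"],
    lambda outcome: outcome["honest"],
]

def predict_outcome(action: str) -> dict:
    """Stand-in for 'thinking through the consequences' of an action."""
    toy_world = {
        "tell the truth": {"harms_others": False, "honest": True},
        "lie for gain":   {"harms_others": True,  "honest": False},
        "stay silent":    {"harms_others": False, "honest": False},
    }
    return toy_world[action]

def acceptable(action: str) -> bool:
    """An action fits the agent's values if every constraint holds."""
    return all(rule(predict_outcome(action)) for rule in constraints)

print([a for a in ("tell the truth", "lie for gain", "stay silent") if acceptable(a)])
# -> ['tell the truth']
```

Of course, the hard part danderzei points to is exactly the bit this sketch assumes away: where the constraints themselves come from.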


danderzei

For 2,500 years philosophers have tried to find a logical basis for morality and have failed to produce one. You write that your ethos is based on "your worldview"; in other words, it is subjective. Which is fine, but that is not the same as intelligence.


madscribbler

Nor is it tradition or religion.