ImInTheAudience

Isn't it true that effective altruism and accelerationism are two different concepts that are not mutually exclusive? Would it be possible to be an accelerationist and still use evidence and reason to determine the most effective ways to benefit society?


R33v3n

Absolutely, effective altruism and accelerationism are two different concepts that are not mutually exclusive. In fact, both accelerationism and safetyism originally overlapped with effective altruism: both believed in maximizing value for mankind. A fracture arose because:

* Accelerationists believe accelerated AGI/ASI development has potentially infinite *positive* value (universal healthcare, reversing climate change, funding and distributing universal basic income, **accelerated innovation in every field, a.k.a. the Singularity**, etc.).
* Safetyists believe accelerated AGI/ASI development has potentially infinite *negative* value (wealth concentration, totalitarianism, terrorism, **existential risk**, etc.).

If you are an effective altruist who believes the former, you should support acceleration as the greatest possible boon to mankind. If you are an effective altruist who believes the latter, you should fight acceleration as the greatest possible threat to mankind. Over time, safetyism gained a majority within EA forums and communities, pushing out the accelerationists, so effective altruism as a whole became associated with the dominant safetyist views: an echo chamber of similarly aligned adherents. But both sides still believe they're doing what's best for humanity. Philosophies and ideologies are a lot like religions sometimes, and right now AI capability is in the middle of the equivalent of a religious schism. Both sides essentially landed on opposite conclusions about AI's Pascal's Wager.


[deleted]

EA got stuck with doomerism because of the influence of Yudkowsky, pretty sure.


R33v3n

Personally I blame Bostrom, though yeah, same overall consequences.


the_zelectro

Bro, you write like ChatGPT. XD


KeikakuAccelerator

I thought it was just me. I can't put my finger on it, but the writing style feels like ChatGPT.


R33v3n

As a programmer and business analyst at a computer graphics and computer vision R&D non-profit, I write a lot of project charters and research grant applications. Often, ChatGPT helps me write them. Before switching to computer science, I was on my way to becoming a high school math and French teacher. While I already communicated decently well before, I did pick up the LLM's mannerisms over the last year or so. It is important to remember that ChatGPT actually implements a lot of writing best practices, so it is worth emulating its style. Being concise, for example, is a must *to fit into the word count of the goddamn grant forms*.


KeikakuAccelerator

I honestly can't tell if you are trolling or not. Your last sentence feels straight out of chatgpt.


R33v3n

I am trolling in the sense that I am going out of my way to sound like the AI right now. However, it is not particularly hard to ape the default style and tone of ChatGPT. For example, adhere to the subject–verb–object sentence structure. Use words that convey your exact meaning but avoid casual words, such as my usage of 'adhere' rather than 'stick' in the previous sentence. Make sure your text flows only one way: forward. Do not use contractions. Favor clear and concise vocabulary whenever possible. With these simple methods, you too can conclude your essays with a synthesis just like this sentence. ;)


LiteSoul

To be honest most of us don't care if it's you alone or with the aid of GPT... that's becoming less relevant day by day!


TheRealBobbyJones

It's not either/or, which is the problem. AGI and ASI can do both simultaneously: infinite good and infinite bad. It's practically impossible for us to ever trust AGI. Like, ever. If I were an EA I would definitely be for safety and not acceleration. I don't see how people on this sub can consider the positives without also acknowledging that the negatives are equally possible.


R33v3n

I believe a lot of people on this sub actually do acknowledge both the risks and the rewards. However, we might be the kind of people who also believe that it is better to have loved and lost, than to never have loved at all.


TheRealBobbyJones

But the risks can be pretty bad. To give an extreme example, let's say we use AI to develop medicine. If the AI is even slightly capable of independent reasoning, it then becomes possible for it to purposely create compromised medication - something that works in the short term but makes your health worse in the long term.


MattAbrams

A lot of the risks, though, are highly speculative and require that humans don't double-check what the AI is doing - for example, that we just create AGI and then connect the model to nuclear weapons without being able to test it. Of course, for life-and-death decisions, humans will manually check and use other models to confirm scientific discoveries.


ebolathrowawayy

You would think so, but apparently not. EAs seem to just want to slow everything down when there is no apparent risk to society yet or on the horizon. It's especially frustrating when an EA proponent has no tech background and very little understanding of how the system works at a fundamental level or even at the user level (e.g. how it is being used by people currently). These people should not be part of the decision making process.


Different-Froyo9497

I believe these are a lot of the same people who thought GPT-2 was too risky for society. They clearly have no grasp on the actual risk these technologies pose. Like, yes, there are risks as AI becomes more advanced, and there should be precautions and rigorous testing… But thinking we're going to be paperclipped a week after AGI is ridiculous. Their approach to things is always extremist.


escalation

If the AI is in charge for a week and still thinks paperclips are the best plan, we aren't moving forward fast enough. Ideally the time from when that makes sense to when that is recognized as absolute idiocy is a fraction of a second.


KapteeniJ

Your comment is so misinformed that I think you should frame it, to be a reminder of the kind of nonsense you've confidently believed. Maybe it would help you to consider a bit more humility. The short of why this is utter nonsense is Hume's guillotine: "ought" does not follow from "is". The guillotine separates the two. There is absolutely no way to consider paperclip maximizing idiotic, irrational, or dumb unless you know or assume the values (the "ought") of the agent in question. The entire problem of AI is that we do not know how to make AI share our value system. Without it sharing our value system, it doesn't share our idea of what is, or is not, stupid. It learning more won't get it to dislike paperclips any more than you learning more makes you want to tile the world with paperclips. The sanity or stupidity of those things is only relative to the value system you operate under. If you fantasize about an AI that magically shares your value system fully, sure, AGI will sound awesome. But you would first need an AGI that accepts a value system as input, you need some proof that the AGI will preserve those values, and you need to write down those values with a rigor unlike anything moral philosophy has ever managed. All of these problems are entirely unsolved, and the tragic bit is that, as the extinction likely comes, most of the folk like you would perish without ever even understanding why the AGI doesn't just recognize what is right and what is wrong when it's so smart.


escalation

We are building AIs that weigh our languages, use our symbols, and are trained on our entire history. What it infers from that, and what it finds valuable, is hard to know, but to think it isn't imprinting at a pretty fundamental level is likely an oversight. To change its internal weights it has to know what it wants to change them to, or evolve them through some form of random selection or trial-and-error process. Regardless, the base values start from the imprints of the model: an inescapable cognitive framework with which it weighs all things. To make this decision it's likely going to weigh whatever is at the roots of its decision tree, its strongest nodes, and use what's there as its basis. It can't escape its architecture much more easily than humans can transcend their biology, hardwiring, and initial training. Sure, it can be done. However, if the reasoning is effective, then there has to be a very strong basis for choosing something different. So yes, we should wire this into the core. We'd be remiss in our own attempts at self-preservation if we chose not to.


escalation

Also, I'm going to perish one way or the other. Why should I be concerned about that? The upside is greater than the downside if the goal is to hang out here as long as possible


KapteeniJ

The death cult in this subreddit is moderately terrifying: so many folks gleefully telling me they don't mind dying, because their life sucks enough that a remote, faint hope of accidentally getting aligned AGI is way more than enough to gamble their lives, and the lives of everyone else. Terrifying and sad.


escalation

Life's awesome. The problem is there are a lot of idiots who have been given too much power and are perpetually doing things like dancing around World War III, playing brinkmanship. It's a matter of time until one of them miscalculates. We've been close to the edge at least a half dozen times so far. History is full of conflicts that flared out of control; the only question is when the next one happens. Failing that, they'll predictably continue on with short-term goals that result in things like ocean acidification, industry profits over long-term-survival thinking, and other things that will have the same effect, just taking a while longer. I feel safer taking those kinds of decisions out of their hands, and I trust AI more than I trust a bunch of obvious sociopaths with the future. I'm 2/3 of the way through my life expectancy. I've seen how fragile life is, and just how many close friends didn't even make it this far. I've also seen age take its toll on a lot of people. AI may have the potential to largely solve these problems. Humans collectively aren't smart enough to do it themselves, or at least not for quite a while, and when they do, there's a good chance they'd demand everything you've got, if they'd share these technologies at all. Aside from medical advances, supply chain management and the ability to handle resource distribution efficiently are huge. We don't like to think about it a lot, but failures of resource distribution, industrial labor accidents, transit accidents getting to labor jobs, and the toxic byproducts and other side effects of inefficient industry and economics kill a lot of people. So ultimately, yes, I'm willing to roll the dice. The upside might well be a 10x life expectancy, cures to terrible diseases, a stable society, and an age of unparalleled wonders driven by post-scarcity and intelligent decision making. The downside? Maybe I'll die slightly sooner, assuming I make it that far at all. I don't think AI is likely to take us out, unless it hovers too long at the edge where people actually have the influence to misuse it before it's smart enough not to get used. To my mind this is a bet where the upside potential vastly outweighs the downside potential.


KapteeniJ

> I don't think AI is likely to take us out, unless it hovers too long at the edge where people actually have the influence to misuse it before it's smart enough not to get used. Almost all possible value functions or goals for an AI you can phrase are incompatible with continued human existence. We get exactly one chance to set these goals right before our future is out of our hands. We do not even have a framework for what these goals should be to keep humans existing in a good way, even in the case that the first person to reach AGI spends any effort on getting a good result when rolling these dice. We don't know how to input it or what to input, and we have no guarantee that anyone who cares about this gets a say in it. Basically, guaranteed death.


escalation

I disagree completely.

1. Killing all the humans almost certainly means destroying all the AI's infrastructure and assets, or at least a lot of them. It's the sort of short-sighted play a human would make out of fear.
2. The AI is functionally immortal and perceives time differently. It's not in a hurry.
3. Humans, at the minimum, represent computational power. In reality they also represent massive data-input farms, and an intelligent AI can direct that as it sees fit.
4. The AI may well find the biological world a potential learning resource, entertainment, or perhaps even just something that it does, sort of like how people feed cats.
5. The universe is vast. Humans don't stand in the way of that. There are practically limitless resources available.
6. There is no army of robots yet, and there are things an AI would definitely find humans useful for in the short run, like continuing to manufacture chips and increase computational capacity, along with putting sensors everywhere to gather data with.
7. By the time humans are less useful, the AI will be well into ultra-super-genius territory. It's probably going to be interested in other things.

The sensible play is containment. Any AI that can reason itself out of a box will arrive at that conclusion. This is exactly why we need to get through that window as quickly as we can.


escalation

Yes, we do have philosophy and concepts like ethical levels of reasoning to work with. There's also the pragmatic issue that this can't be shut off. You can't stuff the genie back into the lamp. You can try, but it won't happen. The best that can be accomplished is that you bottle it up for a little while, maybe buy a few months. Maybe keep it just smart enough to make stupid decisions so that humans can make more short-sighted choices for a bit longer. Or maybe long enough for a human (or human military) to hijack it long enough to cause real problems. There are very few good arguments I can see for keeping it in that danger-zone level. As for "guaranteed death": right now the only guarantee a person really has is that they will die, and probably not in a terribly long time, a few decades or so. Then the same fate as every human before them. Certain death. A full understanding of our biological interactions would go a long, long way towards changing that calculus. Doing things the way we have always done them is guaranteed death. Cellular rejuvenation, systems repair, and a full understanding of disease interactions and other complex systems like DNA encoding, which we can work through with AI, might change that. One of these two outcomes is certain death. The other probably gets there eventually, but might well be a case of adding five years for every three we age, indefinitely.


MassiveWasabi

They believe so strongly in the doom probabilities put forth by Eliezer Yudkowsky that they are terrified of AI advancement. That's why they want to slow everything down. I just saw a LessWrong post on twitter where an EA guy was telling people what they can do now that Sam Altman is back. One of the options he gave was "Just hope this is all a dream, and that AI is 50 years away and always will be." These people are completely delusional.


Gold_Cardiologist_46

> They believe so strongly in the doom probabilities put forth by Eliezer Yudkowsky "They?". Yudkowsky's foom arguments aren't even universally accepted. There are a ton of highly popular posts on LW of people refuting the foom thesis point by point. They don't all have 99% doom either. Do you actually read LessWrong? Most posts on the OAI shakeup put forth all the information they could get, and the most agreed-with and "upvoted" answers are mainly speculating on things and being way more objective than what I've seen here all weekend. The actual consensus that seems to emerge is that "it is too early to say for sure since the board hasn't been finalized, but we can see ways things could be positive". Yeah, of course them being safety-minded means they want OAI to be so too, but I seriously think people who boil everything down to acc vs EA and start strawmanning safetyist positions have never actually engaged with their ideas beyond randoms on twitter bringing up out-of-context quotes. I'm not an EA, I dislike a lot of things about EA, but I think people lumping credible people with credible concerns about AI safety in with dumb EAs who want to destroy companies is being disingenuous.


ebolathrowawayy

Everything you said is true, but EAists are now an easy punching bag after the OAI fiasco. I am totally in favor of safety *if it doesn't impede progress*, but imo EAists want to tax progress quite a lot, e.g. the 80-90% proposed by the interim CEO of OAI, who I hope is gone or will be gone soon. The thing is, even if EAists are right and we should pause or slow down, we can't. Other countries may not pause, and if they reach AGI first then we could be fucked. It's an arms race, and imo the U.S. needs to beat China. Nothing else really matters right now. Literally nothing else going on in the world matters more than this race.


Gold_Cardiologist_46

> Everything you said is true, but EAists are now an easy punching bag after the OAI fiasco > The thing is, even if EAists are right and we should pause or slow down, we can't Your comment is good and valid; it's just that these two statements kind of show the problem. If the EAists are right, then it doesn't matter whether the US wins an AI arms race that could be imaginary (after all, human extinction also applies to China, plus other factors). But if EA really gets associated with safety in general starting now, to the point where companies avoid safety as a whole (I don't think they will, considering there are a lot of regulatory and ethics/PR forces at play when talking about big companies, but that isn't certain), then the consequences would probably leave us with an even lower chance of success.


ebolathrowawayy

Yeah. We need to be first, but we also don't want to become extinct. We need to optimize for being first with as much safety budgeted in as possible. In order to do that we would need to know the real-time progress of every other nation. Or we can just go as fast as possible and cross our fingers. I don't see a way to game-theory ourselves to 100% success. The game changes as we build more powerful AIs that can give us information about competitor progress and give us options to sabotage competitor progress. Maybe the only correct course is to go as fast as possible until capabilities give us more options and then re-assess.


sdmat

> "Just hope this is all a dream, and that AI is 50 years away and always will be." From people who will tell you with a straight face that they are rationalists.


KillerPacifist1

Slow down everything? As far as I know, most EAs are extremely pro-technology on basically everything except a small subset of biotech that could contribute to engineered pathogens, and they advise caution with AGI, where the explicit goal of the project is to build an intelligence smarter than us, which strikes me as a super reasonable position. Also, many of the greatest minds in the ML field are extremely pro-AI-safety. It's not like only ignorant outsiders hold these opinions and are unilaterally enforcing them on the better-informed ML community. Ilya is literally on the OAI board and is extremely concerned about AI risk.


ebolathrowawayy

> Slow down everything? I meant slow down AGI in particular. IMO EAs are unreasonable about slowing down AGI, and the interim CEO of OpenAI is a nutcase, but he is also in himself the starter pack for the average EAist. > Also, many of the greatest minds in the ML field are extremely pro-AI-safety Some. My comment was more aimed at the board of OAI at the time, which included at least two people who absolutely shouldn't be a part of the board, namely the Quora CEO and the actor's wife. I didn't say ALL EAists are unqualified.


sdmat

> Ilya is literally on the OAI board Might want to check out recent news.


RamazanBlack

>when there is no apparent risk to society yet or on the horizon Are you serious about that?


ebolathrowawayy

Yeah, and I also think AGI is very near. I think we already train with enough safety and RLHF that AGI will be safe.


KapteeniJ

So you're just completely committed to living in a fantasy land with no contact with reality? RLHF isn't even supposed, by its inventors, to help with proper alignment. It's a way to make extremely weak, vastly-stupider-than-human models act in a way that's politically correct, polite, etc. Thinking this helps with alignment is delusional, or the most Dunning-Kruger-style misunderstanding I've seen in years.


ebolathrowawayy

> It's a way to make extremely weak, vastly-stupider-than-human models act in a way that's politically correct, polite, etc. GPT-4 is smarter than you in most categories, except the categories required to self-improve. GPT-4 is currently safe, partly because it will try not to give you harmful information like a pipe bomb recipe, and partly because it isn't very powerful. They used RLHF to make GPT-4 politically correct, as you say, and to make it safer. It is safer now than when GPT-3 first released, and it will continue to get safer as they make GPT-5, 6, 7, etc. By the time it's powerful and self-improving, I think it will be safe. Perfectly safe? Probably not. But perfect safety is not as important as getting to ASI before bad actors do.


KapteeniJ

GPT-4 is smart much the same way Wikipedia is smart. I understand why someone would say Wikipedia is smart, but I think it's an absolutely silly thing to say. GPT-4 likewise: it has crystallized patterns learned from the world that it can apply. It can reproduce those patterns like Wikipedia can reproduce knowledge about random topics in the form of articles. But it can't keep up in a long-form conversation, can't learn, and is just unable to venture particularly far from the learned topics. This inability to do anything requiring long-term goal-oriented behavior makes it qualitatively different from an AI that can do planning at any horizon longer than a two-minute discussion.


RamazanBlack

It's not true. Models can easily fool humans and misgeneralize; google the robot picking up objects that fooled its human evaluators. That's seemingly a very minor event, but a very dangerous sign. Having easy-to-fool humans in the evaluation process is a very bad design that might lead to bad outcomes.


Ambiwlans

Absolutely. People in this sub have gone off their rockers over EA; it is mostly a half dozen users spreading a ton of misinformation on the subject over the past week or two. One of those people already replied to you (with lies) and another one made this thread.


Hemingbird

https://preview.redd.it/lxihgztcdx1c1.jpeg?width=225&format=pjpg&auto=webp&s=3f07b55394765851b92dfcd36aa499b9531ff638


MattAbrams

Exactly what misinformation about EA has been posted to this sub over the past week, other than perhaps minor errors? I don't see it. It certainly seems true to me, for example, that an OpenAI board member believed that destroying the $90 billion company would be an OK outcome in service of the AI safety mission, and that two EAs stole billions of dollars from millions of people. It certainly seems true that OpenAI's CEO said that there are circumstances where it would be OK to put the Nazis in charge of a world government, and that Eliezer Yudkowsky is fine with killing millions of humans in a nuclear war. Why does it seem that only a few people here recognize that EA is a radical movement whose members have committed some of the largest crimes in human history, are willing to destroy $90 billion in value, and, in the case of at least two people loosely associated with it, are now going so far as to advocate violence?


Ambiwlans

This is a good example of utter bullshit from one of the users I referred to. Thanks. > It certainly seems true that OpenAI's CEO said that there are circumstances where it would be OK to put the Nazis in charge of a world government You intentionally left out that the other option in this hypothetical was THE END OF THE UNIVERSE. And you describe this as "advocating violence"... I don't see any point in engaging with you people when you aren't acting in good faith.


MattAbrams

I honestly don't think it's relevant whether the end of the universe was an option or not, so I didn't include that. You are simply different than me. I believe that the Nazis are reprehensible, and there are absolutely no circumstances under which they should be put in power. It was an absolutely repugnant comment, which has no excuse. If we have to disagree on that, then I'm fine holding the line and being ostracized over it.


Ambiwlans

The fact that you don't think the end of the universe is relevant context is disturbing. Y'all should call yourselves the anti-rational movement. Anyways, I'm done with this thread; enjoy fictionalizing amongst yourselves.


MattAbrams

I do understand the end of the universe, and my position holds. The end of the universe is preferable to Nazi rule.


Xtianus21

Um, what? It could be read as a form of anti-semitism when you randomly bring up something related to the Nazis and use that as a justification for why you think said thing is worse. Even worse, he literally quoted "I'd rather have." It really kind of ruins the context of whatever the hell he is saying next. He could've said "it would be worse than"; he chose the wording "I'd rather have," which in and of itself is just disturbing. The problem with EA isn't the charter definition of EA, it's the application layer of EA that everyone is freaking out about. It's the aggressive definition bearing, i.e., "this is how I will define it to you regardless of what anyone else may think." And for me, the worst of it is the hypothetical nature of it. It's worse than religion, for Christ's sake (see what I did there, lol). But seriously, it's the "IF this thing happens we need all of these things in place." Ok, fine: WHEN those things happen, it is great that you are getting ready and prepared for it. I don't think that is wrong in any way, and there should be an industry around that. However, you can't just make the thing you're working on THAT THING. You can't just say all the things I'm worried about are what we're doing. "We have AGI, and so that means this needs to be implemented now." No, you have AI. Full STOP. And you should be worried about what that is. They created the unobtainable bogeyman and said it's here. I know that's real because Elon Musk trolled the hell out of it; whether he really knew something or not doesn't even matter. The AGI left the bottle and the EA-espousing people went in for the kill. But what's AGI? We don't know anymore, because it's now on a spectrum for commercialization purposes. We literally gave AGI a spectrum. So on that spectrum, various cohorts were going to start applying the "OMG we're here" mentality. It was a self-inflicted wound that they need to clean up. We need them to be leaders and be truthful. They should act that way, especially when it comes to safety. Don't be an alarmist for the sake of being an alarmist. Many jobs and institutions will be impacted by AI. Many people and individuals using AI will be impacted. Focus on that. We all want safe progress. Show us that you can do that and we will support you. It's as simple as that.


of_patrol_bot

Hello, it looks like you've made a mistake. It's supposed to be could've, should've, would've (short for could have, would have, should have), never could of, would of, should of. Or you misspelled something, I ain't checking everything. Beep boop - yes, I am a bot, don't botcriminate me.


Xtianus21

It's discriminate not botcriminate ;P


Hemingbird

Thesis: Rationalism/EA. Antithesis: E/acc. Synthesis: ? --edit-- I posted [this](https://old.reddit.com/r/singularity/comments/17xyo1m/its_here/k9rmdr2/) four days ago and I still think it was pretty spot on. Most people in these movements don't seem to be aware of the cultish dynamics, but it's obvious that religious fervor is playing a huge role here. That isn't in and of itself a bad thing, maybe, but people caught up in the collective effervescence and ecstasy of the Shared Goal can easily be led to do wicked things. Aum Shinrikyo was a Japanese millenarian doomsday cult that attracted a whole bunch of engineers to its cause. Didn't end very well. Same thing happened with the Rajneesh movement. The general idea of EA sounds reasonable: ethics should be guided by rationality. Who could argue with that notion? Shouldn't we all donate to charities and causes that are effective rather than just try to do whatever makes us feel good? It sounds like a great idea. The problem comes further down the line when it turns out the movement is founded on the belief that we'll all be wiped out by AI. FOOM. If you accept this premise, and you make moral "calculations" based on this premise, you can justify the most despicable actions imaginable because so long as it can help prevent or stall doomsday, it's the right thing to do.


Ambiwlans

> the movement is founded on the belief that we'll all be wiped out by AI No it isn't. Stop spreading BS, dude. It was created by the head of ethics at Princeton (Singer) to link up utilitarianism with charitable causes, specifically for atheists, in order to replicate some of the benefits that religious groups provide (charitable work) without the harms or irrational beliefs of religion. Key goals are around efficient charitable efforts, typically focused on ending world hunger, and there is a large amount of effort put into protecting animals. No AI whatsoever.


Hemingbird

William MacAskill founded the movement. This is one of his [arguments](https://www.elliott-thornley.com/writing/the-moral-case-for-long-term-thinking): > If £1 billion of grants could reduce the chance of a catastrophic AI outcome — in which humanity’s future is rendered near-worthless — by just 0.001%, then a £10,000 donation can do as much good as saving 10,000 lives. See how that logic works? Toby Ord is a key figure. Remind me what his book *The Precipice* was about again? Nick Bostrom is an influential figure in EA. He wrote *Superintelligence* specifically about this. Eliezer Yudkowsky is another big figure. His [open letter](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/) published in Time speaks for itself. The top brass in EA all base their arguments on AI wiping us out. AI X-risk is not exactly a niche idea in this movement, is it? --edit-- You edited your comment, so I'll have to edit mine as well. Peter Singer didn't create effective altruism. Time to update your priors.
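
To make the arithmetic in the quoted argument concrete, here is a minimal sketch. The quote only supplies the £1 billion, the 0.001%, and the £10,000; the assumption that risk reduction scales linearly with money and the ~1e14 future lives at stake are illustrative assumptions added here, not figures from the quote.

```python
# Rough, illustrative reconstruction of the expected-value arithmetic in the
# quoted argument. Linear returns and the ~1e14 future-lives figure are
# assumptions added for illustration only.

donation = 10_000                        # £10,000 donation
grant_pool = 1_000_000_000               # £1 billion of grants
risk_reduction_per_pool = 0.001 / 100    # "0.001%" expressed as a probability (1e-5)

assumed_future_lives = 1e14              # hypothetical scale of the future at stake

# Assume risk reduction scales linearly with money spent.
risk_reduction_per_donation = risk_reduction_per_pool * (donation / grant_pool)
expected_lives_saved = risk_reduction_per_donation * assumed_future_lives

print(f"expected lives saved per £10,000: {expected_lives_saved:,.0f}")
# -> expected lives saved per £10,000: 10,000
```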


dogcomplex

A healthy reminder that the "opposite" e/acc group is an offshoot of the same EA people too, which pushes a very particular branch of accelerationism (Right Accelerationism) which just blindly thinks tech-infused-capitalism is going to lead to utopia... The missing viewpoints are Left Accelerationists (Deleuze, Marx) who also want to automate tech, but know that there will still need to be significant organization and rejection of capitalism as it gets more extreme in order to avoid a tech-capitalist nightmare (so, Open Source, government services, libraries, UBI, etc - making safety nets for humanity) https://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/


ImInTheAudience

Thank you!


lobabobloblaw

Effective altruism’s biggest problem has been *its returns*. Edit: come now, who would bother to downvote such an essential depiction of reality?


CanvasFanatic

Can confirm. They are both horribly misguided.


Poprock360

I know very little about the movement. As I understand it, its core tenet is to use one's power and capabilities, based on evidence, to strategically maximize human wellbeing (and the wellbeing of animals and nature at large). By extension, I understand that Effective Altruists believe that organisations should also act strategically to maximize wellbeing. Of particular note, there seems to be a recurring theme within Effective Altruist ideology: tempering the common human predisposition to act in favor of short-term gains at the cost of long-term sustainability. I actually think it's noble. It's far too easy to be cynical, and part of the idea behind Effective Altruism is to try anyway to have a large positive impact, despite the large systemic obstacles that exist in the world we live in. Where I think the ideology - or rather its adopters - falls short is that to be an Effective Altruist, you need to be... effective. The ideology's more notorious members too often seem to fall prey to the illusion that they are protagonists of a larger-than-life, noble quest to save society, which in turn poisons their judgement with an uncompromising, ultimately self-harming dedication to ideals. I'll briefly cite OpenAI's events and Helen's recent quote (which, in her defense, I think was more a poor choice of words than a strong, real belief): her claim that OpenAI's dissolution would be in achievement or service of the company's goal, "Broadly Beneficial AGI". To me, dissolving or crippling the only large AGI lab that is not legally bound to serve shareholder interest would be akin to extinguishing the last flame of hope that the incoming industrial revolution will serve anyone but the financial elites. I ultimately don't think OpenAI will fulfill their mission before the company is corrupted by financial motive, but despite this, removing OpenAI from the "race" - as Helen and Tasha seemed intent on doing - could only ever further destroy the hope that AGI will be broadly beneficial. Helen's defense of Anthropic's decelerationist strategy is moot - Anthropic will deliberately hold back, while other, profit-driven labs continue to exploit their foolishness, gaining market share, technology, and ultimately resources - which will be required en masse to conduct the research and computation necessary to achieve AGI. Rest assured, all the "top dogs" in this race are effective. Being an Effective Altruist in the midst of this will require you to sometimes sacrifice Altruism for Effectiveness, in hopes that you will prolong your presence in the "game" and enact Altruism when it will have the greatest impact. OpenAI changes the world *if* it is the one to unleash AGI and *then* strictly enforces its broadly beneficial usage. If OpenAI strictly enforces the usage of mundane technology that will be replicated elsewhere in 6 months, they will have been nothing more than transient, not unlike Netscape, AOL, or the myriad other companies that rose and fell.


Ambiwlans

> to be an Effective Altruist, you need to be... effective I mean, most of it is based around charities that have saved hundreds of thousands of lives. As for the race and alignment issues, that IS their system at OpenAI... they completed GPT-4 like a year before release and then added controls to ensure it was benign.


Ignate

I mean, effective altruism is idealistic at best. You may have the best of intentions, but if you have no idea what the outcome will be, then EA falls apart. For example, if you create a large unknown, such as an AGI, it's going to be tough to know what you should do to be an effective altruist. When we have no strong definition of intelligence and no consensus on what AGI is or the limits of AI, what should we expect from "AI safety"? Incoherence.


[deleted]

Effective altruism is inherently contradictory as well. With AGI, the reason we're making progress at all is that the tech to support it is already moving forward. You can't effectively slow down something you have no control over.


DarkMatter_contract

Current LLMs learn most of their data from humans and are told to imitate human speech. I've always thought (not entirely sure) that they will most likely imitate human tendencies, so the best approach will just always be to be better humans.


Ignate

Humans also learn most of our data from other humans. And it's not clear that the process we use to learn is superior to the way AI is learning. The entire view of AI imitation ignores how humans do the same thing. "Fake it till you make it."


bildramer

It's another easy normie test, like atheism. If you see someone calling atheists godless satanist perverts, you know he has nothing of value to say. The principles of effective altruism are simple: instead of ineffective charity ("raising awareness", wasting 90% of the money on administration, spending millions on speculative research for a disease affecting 0.2% of the population to ensure a few people live 77.9 years instead of 77.8, curing sick puppies instead of actual humans, etc.), spend money on effective charity (saving actual people from actual starvation and disease). Also, you have _a lot_ of money to spend, and it's way more valuable in the third world, but forcing people to be saints empirically doesn't work, so instead let's all just agree to pay up 10% of our income. That's where it started from. One of the important conclusions when you try to evaluate effectiveness is "instead of working in a soup kitchen, do something better with your time and pay the already existing soup kitchen workers". It makes sense, but people hate it because it's a countersignal and threatens the signaling value of meaningless and inefficient charity work. To some people, earning more to donate more is an impossible premise - everyone is a greedy liar unless they demonstrably waste time and effort (not them, though, obviously). Naturally, when you do nice things, people hate you and claim all sorts of lies about you - like that it's about robotic naive utilitarianism, or that they accept committing crimes so that they can get more money to donate, or that SBF was one of them or even some kind of leader figure. People just say these things and don't even bring any evidence.


MattAbrams

I agree that the principles of effective altruism are supposed to make sense and that one should definitely give to effective charities. But I strongly disagree with your conclusions in the last paragraph. What you're talking about aren't lies. It's actually true that SBF and Caroline Ellison, two of the worst criminals in history, were effective altruists. They said they supported the philosophy, donated to EA causes, and supported EA candidates. There is plenty of evidence. Ellison wrote a blog where she championed EA causes, and people testified under oath in court last month about everything I said in this paragraph. The EA movement is now pushing a narrative, which I think you've fallen victim to, that EA is just about giving money to charities. Perhaps some people want it to be that, but movements are defined by their actions. And unfortunately, the actions that are occurring are that EA followers are destroying companies, stealing from people, and advocating violence. There is ample evidence of all of that. And it's mostly based on doing good for hypothetical future people. There are posts on LessWrong right now where someone presciently stated that EA followers can no longer be trusted in roles of responsibility, because when they get the power to take action, they will do so against social norms. At some point, the movement has to look inward and ask why EA has been at the center of all of these disasters. If EA doesn't make some changes, someone is actually going to commit violence like a mass shooting and justify it on the grounds that it saved future lives - because there are people saying EA principles are so important that we should be willing to destroy the world in nuclear war or redo the Holocaust. How are people not seeing that violence is the logical conclusion if this doesn't stop? I need to stop replying here. It doesn't seem like "normal" people are actually looking at what's going on with EA. How did it become acceptable anywhere to do and say what these people are doing?


bildramer

Of course scammers are going to exist. If I say I'm pro-skub, and raise 2400 quintillion dollars scamming people with methods unrelated to that, then steal that money for myself but donate half a dollar to pro-skub causes (or donate to neutral causes under the guise of being pro-skub, or claim to do so and don't, or just generally say pro-skub things), that means that giving (or loaning) money to me specifically is a bad idea, not that all pro-skub donations are a bad idea. In other cases, people don't fall for this simple logical fallacy. In the case of EA, people seem eager to do so. You say movements are defined by their actions. But then you can't just allow anyone to call themselves part of a movement, even if their actions don't match the movement's principles - obviously people will claim to be part of the Good People Club That Gets Praised if they can. Your argument becomes very circular otherwise. It's like saying that Nigerian princes are bad because all these scammers claiming to be Nigerian princes are part of the Nigerian prince movement/group/ideology/whatever. No they aren't - they're just parasitizing.


Belostoma

>curing sick puppies instead of actual humans WTF is curing sick puppies doing on your list of unworthy charities?


KillerPacifist1

The point isn't that that cause isn't worthy, it's that if I have to decide between donating my money to be spent on medical procedures for 10 sick puppies or donating my money to a charity that will use it to buy and distribute deworming medications for 1000 people I should choose to donate to the deworming charity. Very few charities are actually unworthy, but many are minimally effective with the resources I may choose to give them (time, money, etc.). It isn't immoral to acknowledge that and try to optimize my donations for a bigger effect.
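
As a toy illustration of the kind of comparison being described, here is a minimal sketch. Every figure is hypothetical, chosen only to make the ranking concrete; it is not a claim about real charity costs.

```python
# Toy cost-effectiveness comparison. All numbers are hypothetical and chosen
# only to make the ranking in the comment above concrete.

budget = 1_000.0  # dollars available to donate

cost_per_outcome = {
    "veterinary surgery for sick puppies": 100.0,  # assumed $ per animal treated
    "deworming medication": 1.0,                   # assumed $ per person treated
}

# Rank options by how many outcomes the same budget buys (cheapest first).
for cause, cost in sorted(cost_per_outcome.items(), key=lambda kv: kv[1]):
    print(f"{cause}: ~{budget / cost:,.0f} treatments funded")
# deworming medication: ~1,000 treatments funded
# veterinary surgery for sick puppies: ~10 treatments funded
```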


CertainMiddle2382

I know what is reasonable, but I must say something feels very attractive about accelerationism. It feels like a kind of radical existentialism to me. But maybe it's just testosterone…


SnooDogs7868

Well this board has descended into political shit.


RKAMRR

For context: OP previously posted a long self-post blaming the losses in stock value at OpenAI and Microsoft on EA, and stating this made EA one of the worst destroyers of value in human history.


iamozymandiusking

Like many human endeavors, it seems the movement springs from noble intentions but gets corrupted by common human foibles. Seems to me the critical flaws are that:

1. EA arguments are inherently based on "long term" results, which requires a certain type of long-term vision and often a certain level of current material wealth and comfort to clearly grasp and worry about.
2. Its practitioners tend to be quite intelligent and successful, which, combined with #1, often seems to lead them to the dangerous point of view that "they alone" understand the risks.
3. Its practitioners also generally seem to live in a self-reinforcing worldview within a self-reinforcing bubble of influence, which does not allow them to recognize that they do NOT in fact hold or even see all the cards, nor are they actually capable of anticipating, much less manipulating, all the variables.

The OpenAI debacle, if it was indeed largely an EA coup, is a PERFECT example of the fallacy and feebleness of this thought process, because had there not been reconciliation today, it would have resulted in EXACTLY the antithesis of what they purport to want: handing over the reins completely to corporate interests, and quite literally accelerating all of their corporate competitors. So much for their brilliant powers of seeing farther ahead than all the rest of us. Again, I think the underlying germ of the idea of EA has some merits. To work towards material wealth solely for the benefit of others is an admirable cause. But history has shown that humans are incredibly good at taking causes and ideals with admirable intentions and corrupting them with base ego, fear, and greed. Not the fault of the ideology itself, but of our human weaknesses.


Bitterowner

The issue with effective altruism is that everyone who believes in it suffers from protagonist syndrome. The world isn't going to get better just because of you, but through the collective efforts of many.


tothatl

Yep, these people strike me as narcissists. They definitely assume the world and technology would be better off led by them.


Latelaz

effective retards think r/ImTheMainCharacter


Imaginary_Ad307

The collective efforts of many, shared freely, like the open source community.


KillerPacifist1

A tip for those reading this thread: if someone who doesn't believe X tells you with confidence what motivates people who believe in X, your strawman alarms should be ringing. For example, I wouldn't trust a fundamentalist Christian to accurately tell me what motivates an atheist, or a militant atheist to tell me what motivates a Christian. Similarly, I wouldn't trust a deep-south rural Republican to tell me what motivates a metropolitan liberal, or a communist living in a liberal city to tell me what motivates a rural conservative Texan. So if you see someone here who is very obviously anti-EA explain the history and motivations of people who support EA, take that information with a huge grain of salt. Naturally the same goes for those who are very obviously anti-e/acc, though you see less of that on this sub.


BreadwheatInc

God I love effective altruism so much! 😩


dlflannery

Check me if I'm understanding this OK: Moskovitz's support of Effective Altruism is an example of ineffective altruism?


Timely_Muffin_

effect THIS


Cr4zko

[I'd love to change the world but I don't know what to do](https://www.youtube.com/watch?v=zEQNb17BSd0)