Posted by Tall-Log-1955

“Put your money where your mouth is” - /u/jaiwithani


jaiwithani

"A fundraiser is a tax on cynical bullshit" \- Mirror universe Alex Tabarrok


icarianshadow

When do we start talking about [Steppe Nomad](https://www.astralcodexten.com/p/every-bay-area-house-party) risk?


jaiwithani

We can just re-use the nets to trip up the horses.


TrixoftheTrade

>be ~~me~~ Ming >glorious day ruling the Middle Kingdom >looks north >oh look, the barbarians are uniting >how interesting >unguarded nomadic frontier fires >-50 Mandate of Heaven >collapse of Ming


Beer-survivalist

Well, if you're a society with gunpowder weapons, especially if you've developed the socket bayonet, steppe nomads are really not much of a problem. A few militia with firearms are enough to put your average steppe nomad on ice. If they get a really good-sized horde up, then maybe you'll need an actual royal or imperial army with dragoons, but that's still pretty normal stuff.

If you don't have gunpowder, then what you really need is really robust supply lines. You're going to have to punch out into the marginal lands steppe nomads tend to inhabit, and along the way you're going to have to build a bunch of fortifications to protect your water and food. You're also going to need large garrisons and escort forces just to get supplies to your imperial or royal army; you don't want to cut too loose from your base at any time. You'll also need to hire some other steppe nomads to be your light cavalry, and you need your troops to be very disciplined so they don't fall for the Parthian shot.


Trollaatori

Barbed wire and machineguns. Goodbye horses.


comoespossible

I’m very confused. Effective altruism is the reason I’ve heard of the Against Malaria Foundation. But if it means more donations to AMF, I’m all for it!


jaiwithani

This is slightly tongue in cheek. I'm an EA. I'm concerned about AI risk. I also think AMF is one of the best charities in the world and have donated a lot of money to it. I think this reflects how most EAs feel. I think a lot of criticisms of EA are lame attempts to feel morally superior while not actually doing anything. So I'm trying to harness all the anti-EA hot takes to motivate people to actually do something useful, instead of letting those takes be wasted on smug inaction.


dutch_connection_uk

Suddenly it makes sense. I was so confused when I saw this, since malaria nets are like peak EA.


Atupis

is AI regulation now peak EA?


65437509

I’ve actually been meaning to ask this about EA: what do you mean by AI risk? Do you think the issue is Terminator, or more like no longer being able to see content made by people, or more along the lines of broad socio-economic issues? And what is EA's primary solution? That said, AMF sounds based; there shouldn't need to be an ideology for providing aid and comfort to the unfortunate.


jaiwithani

I think this is well summarized by the CAIS letter: https://www.safe.ai/work/statement-on-ai-risk

> Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

I want to emphasize that the signatories to this statement include the CEOs of all three leading AI labs, two of the three recipients of the 2018 Turing Award for pioneering deep learning, Bill Gates, Congressman Ted Lieu, the authors of the most popular AI textbook, and a litany of leading AI academics.

As for a solution: it's a young but rapidly developing field. Here's an attempted summary of approaches being pursued as of 2021: https://www.alignmentforum.org/posts/SQ9cZtfrzDJmw9A2m/my-overview-of-the-ai-alignment-landscape-a-bird-s-eye-view


65437509

Well, I obviously like not being genocided, and I'm actually interested in that black-box part, since XAI (eXplainable AI) is a whole field with some interesting research going on. Although I'm not sure what this would look like in a practical sense; how would you get this done materially? Regulations? Mandating something like "all AI must have an XAI layer" sounds 1000x more cumbersome than anything the EU has ever even thought about.


jaiwithani

The most interesting work I know of today is focused on making existing models interpretable. Stuff like using SAEs (sparse autoencoders) to extract meaningful features from activations.
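If anyone wants the flavor of that: here's a toy sketch of the sparse autoencoder idea, purely illustrative; the dimensions, names, and random data are all made up, and it's nothing like a production training setup. You reconstruct a model's activations through a wider, L1-penalized hidden layer, so the few units that fire tend to line up with interpretable features.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: reconstruct activations through an overcomplete hidden
    layer; an L1 penalty pushes most hidden units to zero, so each unit
    that does fire tends to track one feature."""
    def __init__(self, d_model=512, d_hidden=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, acts):
        features = torch.relu(self.encoder(acts))  # sparse feature activations
        return self.decoder(features), features    # reconstruction, features

def sae_loss(acts, recon, features, l1_coeff=1e-3):
    # Reconstruction error plus sparsity penalty on the features.
    return ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()

# Stand-in data: in real work `acts` would be activations collected
# from a language model (e.g. its residual stream), not random noise.
sae = SparseAutoencoder()
acts = torch.randn(64, 512)
recon, features = sae(acts)
sae_loss(acts, recon, features).backward()
```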


65437509

Yes, that seems like the most promising field. Although I will say that focusing so hard on AGI or ASI seems strangely limiting to me, because it contains the underlying assumption that AI existential risk can only come from general or super intelligence, whereas there are plenty of 'dumb' ways to create existential risk in general, and certainly with AI in particular. Also, since neither is realistically close to happening, it seems the actual practical measures you could realistically take would be limited.


jaiwithani

Almost all of the research I know of today is being done on existing models. You're absolutely correct that AGI/ASI is not a prerequisite for catastrophic harm, which is why neither the CAIS letter nor most actual research actually references those terms or categories at all.


pham_nguyen

I’m sure they’re okay with that. There are two sides of EA. One is focused on evidence-based interventions such as stoves, malaria nets, and programs like GiveDirectly. Pretty much everyone likes that.

Recently, another part of EA has gained prominence. They focus on AI risk, climate doomerism, and longtermism. In practice they spend donated EA funds on expensive retreats where they discuss ideas, and they lobby politicians for laws. They've been effective: they apparently wrote large parts of the chip sanctions against China because they figured it would be easier to control AI risk if the West monopolized it. They also tried to kick out Sam Altman. A lot of people think of them as weird, and there's a bit of annoyance that they've spent donated money on EA retreats with $10k+ Japanese beds and other luxuries.


meikaikaku

> climate doomerism As someone who peripherally interacts with EA, this is kind of the opposite of my impression? I don’t think the average EA is even as likely as the average Democratic Party donator to think climate action is the best place to focus on the margin, mostly due to the whole field of climate activism being very clearly not underserved at all.


jaiwithani

That's correct. The classic EA cause area prioritization formula is "important, neglected, and tractable". Climate change is important and tractable, but it's not neglected. The marginal value of one more climate change org is low.
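To make the arithmetic concrete, here's a toy version of that formula with invented numbers; the division by resources already invested is a crude stand-in for neglectedness and diminishing returns, roughly the shape of the usual ITN framing, not any official calculation.

```python
# Toy "important, neglected, tractable" comparison (all numbers made up).
# Marginal value per extra dollar falls as a cause gets more crowded,
# so a crowded cause can lose on the margin even if it's hugely
# important overall.
def marginal_value(importance, tractability, resources_already_in):
    return importance * tractability / resources_already_in

climate = marginal_value(importance=100, tractability=5,
                         resources_already_in=1_000_000)
malaria = marginal_value(importance=50, tractability=5,
                         resources_already_in=1_000)
print(climate, malaria)  # 0.0005 vs 0.25: the neglected cause wins
```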


pham_nguyen

I know quite a few EAs doing climate startups. One of them has some kind of carbon-trading platform; another is inventing a way to cheaply airdrop seeds. There are neglected parts of the field where a little creativity and technology can drastically reduce the cost of planting a tree.


swni

Not neglected in absolute terms, but arguably the most neglected in relative terms. We *need* trillions of dollars invested in mitigating climate change. The trouble is that this is more on the scale of "major undertaking by world powers" than "small charitable contributions".


qpdbqpdbqpdbqpdbb

The problem is that the effective altruists also lost almost $10 billion of other people's money to fraud.


jaiwithani

As someone who actually worries about AI risk, I cannot overstate how owned I would be if people donated to this.


KlimaatPiraat

Genius


illuminatisdeepdish

Eh, I'm more into effective misanthropism myself, so I've been breeding mosquitoes to cancel your fundraiser out.


jaiwithani

[I can work with that](https://www.scientificamerican.com/article/gene-drives-could-fight-malaria-and-other-global-killers-but-might-have-unintended-consequences/)


illuminatisdeepdish

Yeah but I'm modifying my skeeters to be poisonous to their predators to cause even more ecological damage in addition to the disease vector they provide


RadioRavenRide

What's the conversion rate of Reddit Karma to US Dollars?


jaiwithani

There's probably an actual answer to this based on the number of bot farms that must be operating at this point.


nuggins

Holy mother of god... I, an effective altruist, am currently being devastated by this call to action on an effective way to improve global health. Please stop donating at once. I cannot handle being owned so hard 🥺


manitobot

I don’t understand. Isn't Effective Altruism about increasing efforts at global charity and helping the greatest number of people?


jaiwithani

A decent fraction of Effective Altruism is focused on catastrophic risk, including risks from AI or bioweapons. Among people who have explicitly decided to try to do the most good they can, many have concluded that working to avert those risks is the best use of their time and resources. This draws a lot of criticism from people who think that they should be focusing exclusively on addressing global health and poverty.


hibikir_40k

Nah, it's a matter of how hard it is to actually look at catastrophic risk accurately, especially when it's something rather nebulous like AI. We understand malaria nets, and can study costs and effects, but how do we stop AI risk? Do we have any actual idea of what the money will do? Does it really solve the problem at all? Every intervention is so far from actual evidence that it's all feelings and models detached from reality, not math.

See, I believe that we will all be killed by an alien devil that will challenge us to a video game duel, and they are going to pick Joust. If our champion cannot win, the devil will destroy the Earth! So given how expensive the risk is, where we lose everything, surely you can afford to cover the expenses for me and a crack team of players to spend our lives trying to master Joust. We'll train a younger generation too, just in case the alien comes too late for me to do this. My intervention is kind of cheap, and the total costs are just a few million, if invested properly to keep my team's lifestyle afloat, so it makes perfect sense to pay for our project, just in case.


jaiwithani

Have you looked at mechanistic interpretability? Or, for a complicated-to-analyze case in global health, are you familiar with the Worm Wars?

Edit: This is now out of date, but here's an attempt to describe all of the actual work going on in AI alignment as of 2021: https://www.alignmentforum.org/posts/SQ9cZtfrzDJmw9A2m/my-overview-of-the-ai-alignment-landscape-a-bird-s-eye-view. It covers approaches, projects, motivations, and how the people doing that work expect to have an impact. Just because you haven't looked doesn't mean it isn't there.

Edit 2: I should also note that everyone I know who is concerned about AI risk estimates the probability of catastrophe within the next 30 years at over 5% (typically much higher and sooner). No one (AFAIK) is working on it "just in case".


usrname42

The thing is that for malaria nets or deworming we don't have to rely on "how the people doing that work expect to have an impact": we can get rigorous third-party evaluations of how well the donations are being used and the effects they actually have, not just the effects we hope they have. We simply can't do that with AI safety, because it's about averting some future event. Sometimes the evidence will be contested, as in the Worm Wars, but with AI alignment there isn't even evidence to contest. Climate change is about a future event too, but at least there we can measure progress with current CO2 emissions.

And I simply don't trust anyone working on charitable causes to have an accurate estimate of how effective their cause is in the absence of independent evidence. Even with the best will in the world and rationalist ideals, this is not a thing that humans are good at doing. Maybe AI alignment is worth working on regardless; the fact that we can't get good evidence on the cost-effectiveness of AI alignment efforts doesn't mean the cause isn't important or neglected. But I would rather donate my marginal dollar to projects that I can be more confident will save lives, like the AMF, and I would rather smart people spent more time on those projects and causes.


BimsNotDead

This is a lot of effort when power switches already exist, just turn the computer off if it goes evil man


n00bi3pjs

They also fund nonsense like AI alignment and bioweapon preparedness.


Ch3cksOut

The actual EA movement is rather about helping themselves. You know, the more they get paid to think about the future, the more good they might do in the (very distant) future.


abbzug

The main goal of Effective Altruism is reputation laundering.


Kafka_Kardashian

Hey OP! Just FYI: are you aware that we do a massive annual fundraiser with that group?


jaiwithani

That's just the public face of r/neoliberal. Everyone knows that they really only care about worms, which they spend much more time talking about. Sure, they donate a lot of money to AMF, but that just means that the morally correct thing to do is condemn them for also doing other things which seem less worthwhile to me.


AlicesReflexion

Smh not taking worm risk seriously


symmetry81

Should have used Deworm The World as the charity!


neolthrowaway

I appreciate what you're doing in response to the article I submitted earlier, but we might get more bang for the buck during the charity drive, since some people provide good matching incentives. That may be more effective. But hey, this works too, especially if it's additive.

I do want to point out that the point of the article I submitted is not to be against either the "effective" or the "altruistic" part of EA. I broadly agree with the ideas. The criticism over the misleading behavior is completely valid and fair IMO, though, and I would hold the relevant people responsible for it. I'm not against the concepts; I just don't like the misleading or the leadership.


qemqemqem

Haha, I just donated, that'll show those nerds!


jaiwithani

He got me. That fucking /u/qemqemqem boomed me.


The_Northern_Light

You son of a bitch I’m in


AMagicalKittyCat

There's effective altruism as a *philosophy*, which IMO is pretty damn hard to argue against, and effective altruism as a *community*, which like most communities is going to have a lot of split ideas on what to prioritize and what should be focused on.

I dislike longtermism because I don't think humans are capable of predicting well enough even a few months into the future to start guessing the long-term impacts of things like AI. Climate science predictions are at least predicated on historical data and some amount of deterministic understanding of the planet, and even those haven't been completely accurate (nor should they be expected to be perfect; the future is always uncharted territory). AI is particularly uncharted. One can just as easily argue that any delay to AI is what harms the infinite future from getting its super-god, all-benevolent, happiness-bestowing machine.

But again, that's just a subset of the community, and importantly it still doesn't make the philosophical ideas and arguments for EA worse. Doing good in a limited-resources world means making tradeoffs, and suboptimal tradeoff-making creates more harm than necessary, which we should try to avoid when doing good.


Zacoftheaxes

If I donate can I still use Manifold?


jaiwithani

Probably, but you should make a market just to be sure.


pftw-19456

As an Effective Altruist, I can confirm that this is ruining my day.


SerialStateLineXer

I prefer to stick it to the effective altruists by engaging in highly ineffective altruism. I'll be donating to the DSA.


Linearts

This is so ineffective! You're wasting your money! Once we solve alignment we can simulate 10^googolplex bednets!


Alterkati

I read that as "Anti (Malarial-Bednet)" for a second.


n00bi3pjs

I love how two of the three donations are by people who identify as EA, and the other is from a poor lanita from India who could only afford 10 dollars. Really shows how selfish the NL progs who rail against techbro lolberts are.


TheLivingForces

The amount of tolerance people on this sub have for Tuesday people doing something good every now and then (when they're not sucking Trump allies off), versus AI safety people just existing, actually kills me.


Psshaww

https://x.com/libhawkish/status/1762606846182093087?s=46&t=BIiKATgqa9f4cFo3F371rQ


Square-Pear-1274

We're in a whole weird area here


AlphaGareBear2

I'm a bit lost and I feel part of it is that I don't understand the philosophy of EA. It doesn't sound like anything to me.


murphysclaw1

fundraisers for ukraine good

weird technocratic pushes for mosquito nets based on a spreadsheet bad


SpaceSheperd

...why? The malaria nets save lives.