"To see the world as it is, rather than as I wish it to be."
I work for the EA research nonprofit Rethink Priorities. Despite my official title, I don't really think of the stuff I do as "research." When I hear the word "research", I picture people expanding the frontiers of the world's knowledge, whereas I'm often more interested in expanding the frontiers of my own knowledge, and/or disseminating it to the relevant parties.
I'm also really interested in forecasting.
People may or may not also be interested in my comments on Metaculus and Twitter:
Metaculus: https://pandemic.metaculus.com/accounts/profile/112057/
Twitter: https://twitter.com/LinchZhang
There are strong theoretical reasons to expect corporations to underinvest in job-general training (workers can take general skills to a competitor, so firms can't capture the returns), fwiw.
2 questions re non-EA utilitarians:
1. Is there an active non-EA utilitarian community?
2. If so, are they at all online or is this a mostly offline community?
I ask because my impression was that all the early online utilitarian forums got swallowed by EA (eg Felicifia, see also this poll of the main utilitarianism FB group)
I find myself pretty confused about how to think about this. Numerically, I feel like the cutoff is at most the top 3%, and probably more like the top 1%?
Some considerations that are hard for me to think through:
Reasons why I think the cutoff might in practice be higher:
Reasons why I think the cutoff might in practice be lower:
On balance I think the factors pushing the practical cutoff above the top 3% are stronger than those pushing it below, but I'm pretty unsure about this.
I expect the costs to not be very high, but the benefits to not be very high either.
Yeah one solution for this is for the dashboard to only update once a day.
One point of reference to note is that the Bill and Melinda Gates Foundation had about $240 million in "management and general expenses" against about $5 billion in "total program expenses" (which I assume is grants made, but I haven't checked). Open Phil is relatively lean right now, but if the EA community hits the point where it is granting several billion dollars a year, it might make sense for our grantmaking institutions to also be operating at >$100M/year scale.
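(Rough arithmetic: $240M / $5B ≈ 4.8% overhead; at a similar ratio, granting $3B/year would imply roughly $145M/year in operating expenses. The $3B figure is just an illustrative stand-in for "several billion".)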
Another point of reference is that a number of US universities each spend >$1B/year on research. Now, they probably have large sources of inefficiency and ways costs can be cut, but otoh probably also have ways that productivity can be increased by spending money.
And in terms of scope, it does naively seem like EA has enough sufficiently important questions that the equivalent of a single top-tier US university would not be quite enough to solve all of them.
So on balance I think EA research could plausibly absorb more (potentially much more) than $100M/year, though of course our current research institutions are not quite designed for scaling*
*see scalably using labor
I agree this is a big issue, and my impression is many grantmakers agree.
Hmm I'd love to see some survey results or a more representative sample. I often have trouble telling whether my opinions are contrarian or boringly mainstream!
Another benchmark would be something like offsetting CO2, which is most likely positive for reducing existential risk and could be done at a huge scale. Personally, I hope we can find things that are a lot better than this, so I don't think it's the most relevant benchmark - more of a lower bound.
I wonder if this is better or worse than buying up fractions of AI companies?
In some ways, meta seems more straightforward - the benchmark should be: can you produce more than 1 unit of resources (in NPV terms) per unit that you use?
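As a minimal sketch of what I mean (all numbers, including the 5% discount rate and the project's resource flows, are made up for illustration):

```python
# A minimal sketch of the ">1 unit of resources (NPV) per unit used"
# benchmark. All numbers (the 5% discount rate, the resource flows) are
# made up for illustration.

def npv(flows, discount_rate):
    """Net present value of yearly resource flows; flows[0] is year 0."""
    return sum(f / (1 + discount_rate) ** t for t, f in enumerate(flows))

# Hypothetical meta project: spend 1.0 unit of resources now; the
# object-level ecosystem gains 0.3 units/year for the next five years.
cost = 1.0
gains = [0.0, 0.3, 0.3, 0.3, 0.3, 0.3]

generated = npv(gains, discount_rate=0.05)
print(f"NPV of resources generated: {generated:.2f}")  # ~1.30
print("Clears the benchmark:", generated > cost)       # True
```

In practice the hard part is estimating the resource flows at all, not the discounting.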
I think I agree, but I'm not confident about this, because this feels maybe too high-level? "1 unit" seems much more heterogeneous and less fungible when the resources we're thinking of are "people" or (worse) "conceptual breakthroughs" (as might be the case for cause prio work), and there are lots of ways that things are in practice pretty hard to compare, including but not limited to sign flips.
I think dollars are much more fungible than careers, so for most people, you should move your donations away from global health if and only if you believe that marginal donations to other charities are more cost-effective. "Neglectedness" is just a heuristic, and not a very strong one.
Epistemic status: Moderate opinion, held weakly.
I think one thing that people, both in and outside of EA orgs, find confusing is that we don't have a sense of how high the standards of marginal cost-effectiveness ought to be before it's worth scaling at all. Related concepts include "Open Phil's last dollar" and "quality standards."
In global health I think there's a clear minimal benchmark (something like "$s given to GiveDirectly, which can absorb >$10B/year"), but I think it's not clear whether people should bother creating scalable charities that are slightly better in expectation (say 2x) than GiveDirectly, or whether they ought to have a plausible case for competing with marginal Malaria Consortium or AMF or deworming donations (which, given current disease burdens, the moral value of life vs economic benefits, etc., I think are estimated to be ~5-25x(?) the impact of GiveDirectly).
In longtermism I think the situation is murkier. There's no minimal baseline at all (except maybe GiveDirectly again, which now relies more on moral beliefs than on empirical beliefs about the world), so I think people are just quite confused in general about whether what's worth scaling looks more like "90th percentile climate change intervention" vs "has a plausible shot at being the most important AI alignment intervention."
In animal welfare it's somewhere in between. I think corporate campaigns a) look like a promising marginal use of money and b) our uncertainty about their impact spans more like 2 orders of magnitude (rather than ~1 for global health and ~infinite for longtermism). But comparing scalable interventions to existing corporate campaigns is premised on there not being lots of $s that'd flood the animal welfare space in the future, and I think this is a quite uncertain proposition in practice.
Meta is at least as confused as the object-level charities: you're multiplying the uncertainty of doing the meta work by the uncertainty of how it feeds into the object-level work, so it should be more confused, not less.
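As a toy illustration of the "more confused, not less" point (the lognormal spreads below are my assumption, purely to show the multiplication effect):

```python
# Toy illustration: if the meta step and the object-level step are each
# uncertain, their product is more uncertain than either alone.
import random

random.seed(0)
N = 100_000
meta = [random.lognormvariate(0, 1.0) for _ in range(N)]  # meta multiplier
objl = [random.lognormvariate(0, 1.0) for _ in range(N)]  # object-level impact
prod = [m * o for m, o in zip(meta, objl)]

def spread(xs):
    """Crude uncertainty measure: ratio of 95th to 5th percentile."""
    xs = sorted(xs)
    return xs[int(0.95 * N)] / xs[int(0.05 * N)]

print(f"meta alone:    ~{spread(meta):.0f}x")  # roughly 27x
print(f"object alone:  ~{spread(objl):.0f}x")  # roughly 27x
print(f"meta * object: ~{spread(prod):.0f}x")  # roughly 100x: wider, not narrower
```

With these made-up numbers, each step alone spans roughly 27x from 5th to 95th percentile, but the product spans roughly 100x.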
Personally, my own best guess is that when people are confused about what quality standards to aim at, they default to either a) sputtering around or b) doing the highest-quality things possible, instead of consciously and carefully thinking about what things can scale while maintaining current quality (or accepting slightly worse), which means we currently implicitly overestimate the value of the last EA dollar.
I'm inside-view pretty convinced that last-dollar uncertainty is a really big deal in practice, yet many grantmakers seem to disagree (see eg comments here); I'm not sure where the intuition differences lie.
Re your third point: I find it plausible that both startup earnings and the explicit allocation of credit for research insights can, to at least some degree, be modeled as a tournament for "being first/best." This means you get a pretty extreme distribution if you are trying to win resources (hopefully for altruism) like $s or prestige, but a much less extreme distribution if we're trying to estimate actual good done while spending down such resources.
Put another way, I find it farcical to think that Newton should get >20% of the credit for inventing calculus, probably not even >5% (given both the example of Leibniz and the fact that many of the ideas were floating around at the time); yet I get the distinct impression (never checked with polling or anything) that many people would attribute the invention of calculus solely or mostly to Newton.
Similarly, there are two importantly different applied ethics questions here: whether it's correct to credit billionaires for the billions of dollars their work generates, vs whether individuals should try to make billions of dollars to donate.
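To make the tournament point concrete, here's a toy simulation (the exponential arrival times, the 20 racers, and the ~10-year timescale are all made-up assumptions): the winner captures all of the prestige, but the counterfactual impact is only how much earlier the result arrived than it would have via the runner-up.

```python
# Toy model: in each "race" (a discovery, a startup niche, ...), several
# people independently work toward the same result. Credit/earnings go
# entirely to whoever finishes first, but counterfactual impact is only
# the head start over the runner-up.
import random

random.seed(0)
N_RACES = 10_000
N_RACERS = 20     # people independently pursuing the same breakthrough
MEAN_YEARS = 10   # assumed average time for any one of them to succeed

gaps = []
for _ in range(N_RACES):
    arrivals = sorted(random.expovariate(1 / MEAN_YEARS) for _ in range(N_RACERS))
    gaps.append(arrivals[1] - arrivals[0])  # how much earlier the winner was

print("Winner's share of credit: 100%")
print(f"Mean counterfactual speed-up: {sum(gaps) / len(gaps):.2f} years "
      f"(vs a ~{MEAN_YEARS}-year timescale)")
```

With these assumptions the winner gets 100% of the attribution but only moved the result forward by about half a year, which is the Newton/Leibniz intuition in miniature.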