Linch

"To see the world as it is, rather than as I wish it to be."

I work for the EA research nonprofit Rethink Priorities. Despite my official title, I don't really think of the stuff I do as "research." In particular, when I think of the word "research", I think of people who are expanding the frontiers of the world's knowledge, whereas often I'm more interested in expanding the frontiers of my knowledge, and/or disseminating it to the relevant parties.

I'm also really interested in forecasting.

People may or may not also be interested in my comments on Metaculus and Twitter:

Metaculus: https://pandemic.metaculus.com/accounts/profile/112057/

Twitter: https://twitter.com/LinchZhang

Wiki Contributions

Comments

Denise_Melchin's Shortform

Re your third point: I find it plausible that both startup earnings and explicit allocation of research insight can to at least some degree be modeled as a tournament for "being first/best," which means you have a pretty extreme distribution if you are trying to win resources (hopefully for altruism) like $s or prestige, but a much less extreme distribution if we're trying to estimate actual good done while trying to spend down such resources.

Put another way, I find it farcical to think that Newton should get >20% of the credit for inventing calculus (given both the example of Leibniz and that many of the ideas were floating around at the time), probably not even >5%, yet I get the distinct impression (never checked with polling or anything) that many people would attribute the invention of calculus solely or mostly to Newton. 

Similarly, there are two importantly different applied ethics questions: whether it's correct to give billionaires billions of dollars for their work, vs whether individuals should try to make billions of dollars to donate.

What is the closest thing you know to EA that isn't EA?

2 questions re non-EA utilitarians:

1. Is there an active non-EA utilitarian community?

2. If so, are they at all online or is this a mostly offline community?

I ask because my impression was that all the early online utilitarian forums got swallowed by EA (eg Felicifia, see also this poll of the main utilitarianism FB group) 

Denise_Melchin's Shortform

I find myself pretty confused about how to think about this. Numerically, I feel like the top is at most 3%, and probably more like 1%ish?  

Some considerations that are hard for me to think through:

  • The current allocation and advice given by career EAs is very strongly geared towards very specific empirical views of a) the target audience of who we actually talk to/advise, b) what the situation/needs of the world looks like (including things like funding vs talent overhangs), and c) what we currently know about and are comfortable talking about. So for example right now the advice is best suited for the top X%, maybe even top 0.Y%, of ability/credentials/financial stability/etc. This may or may not change in 10-20 years.
    • And when we give very general career advice like "whether you should expect to have more of an impact through donations or direct work", it's hard to say something definitive without forecasts 10-20 years out.
  • The general point here is that many of our conclusions/memes are phrased like logical statements (eg claims about the distributions of outcomes being power-law or whatever), but they're really very specific empirical claims based on the situation as of 2014-2021.
  • Are you (and others) including initiative when you think about ability? This is related to smarts (in terms of seeing opportunities) and work ethic (in terms of pulling through on seizing opportunities when they happen), but it feels ultimately somewhat distinct.
    • When I think about EA-aligned ex-coworkers at Google, I'd guess ~all of them are in the top 3% for general ability (and will be in a higher percentile if you use a more favorable metric like programming ability or earning potential). But I'd still guess most of them wouldn't end up doing direct work, for reasons including but not limited to starting new projects etc being kind of annoying.
      • Like I think many of them could do decent work if EA had a good centralized job allocation system and they were allocated to exactly the best direct-work fit for them, and a decent subset of them would actually sacrifice their comfortable BigTech work for something with a clear path to impact, but in practice <<50% of them would actually end up doing direct work that's more useful than donations under the current EA allocation.
  • The current composition of the EA community is incredibly weird, even by rich-country standards, so most of us have a poor sense of how useful our thoughts are to others
    • As a sanity check/Fermi, ~60k (~0.2%) of college-aged Americans attend Ivy League undergrad,  you get ~2x from people attending similar tiered universities (MIT/Stanford/UChicago etc), and ~2-3x from people of similar academic ability who attended non-elite universities, plus a small smattering of people who didn't go to university or dropped out, etc.
    • This totals to ~1% of the general population, and yet is close to the average of the EA composition (?).
    • My guess is that most EAs don't have a strong sense of what the 97th percentile of ability in the population looks like, never mind the 90th.
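The Fermi above can be sketched as a quick calculation. All inputs are the rough, order-of-magnitude figures from the bullets (cohort size is my own round-number assumption), not real data:

```python
# Rough sketch of the "~1% of the population" Fermi above.
# All inputs are order-of-magnitude guesses, not real statistics.

college_aged_americans = 30_000_000  # assumed rough size of the relevant cohort
ivy_league_undergrads = 60_000       # ~0.2% of the cohort, per the comment

ivy_fraction = ivy_league_undergrads / college_aged_americans

similar_tier_multiplier = 2.0   # ~2x from MIT/Stanford/UChicago etc.
non_elite_multiplier = 2.5      # ~2-3x from similar-ability people at non-elite schools

total_fraction = ivy_fraction * similar_tier_multiplier * non_elite_multiplier
print(f"{total_fraction:.1%}")  # prints 1.0%
```

This is only a consistency check on the bullet-point numbers; the multipliers could easily be off by 2x in either direction.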

Reasons why I think the cutoff might in practice be higher:

  • Because EA is drawn from a fairly tail-end of several distributions, we might overestimate population averages?
  • As you've noted, the cutoff for specific professions we recommend seems much higher than top 3% for that profession. For an example of something a bit outside the current zeitgeist, I  think a typical Ivy League English major would not be very competitive for journalism roles (and naively I'd guess journalism to be much more of a comparative advantage for Ivy League English majors than most other roles)
    • Obviously you can be top X% in general and top 0.Y% in specific professions, but I'm not sure there are enough "specific professions" out there where people can have a large enough impact to outweigh earning to give.
    • (Note that I'm not saying that you need to have attended an elite college to do good work. Eg Chris Olah didn't attend college, Eliezer Yudkowsky didn't finish high school. But I think when we make these sorts of claims, we're saying some people are overlooked/not captured by the existing credentialing systems, and their general ability is on par with or higher than the people who are captured by such systems, and ~1% of total population of Ivy League-equivalents is roughly where my Fermi lands.)
  • I feel like quite a few talented people well within the top 3% or even top 1% in terms of assessed general ability fail to succeed in doing impactful direct work (either within or outside of the EA community), so the base rates aren't looking super hot?

Reasons why I think the cutoff might in practice be lower:

  • I guess in every "elite" community I'm tangentially a part of or have heard of, there's just a very strong incentive to see yourself as much more elite than you actually are, based on insufficient evidence or even evidence to the contrary.
    • So I guess in general we should have a moderate prior that we're BSing ourselves when we think of ourselves (whether EA overall or direct work specifically) as especially elite.
  • Our advice just isn't very optimized for a population of something like "otherwise normal people with a heroic desire to do good." I can imagine lots and lots of opportunities in practice for people who aren't stellar at eg climbing bureaucracies or academic work, but are willing to dedicate their lives to doing good.

On balance I think there are stronger factors pushing the practical cutoff to be higher rather than lower than top 3%, but I'm pretty unsure about this. 

[PR FAQ] Adding profile pictures to the Forum

I expect the costs to not be very high, but the benefits to also not be very high.

[PR FAQ] Sharing readership data with Forum authors

Yeah one solution for this is for the dashboard to only update once a day.

Most research/advocacy charities are not scalable

One point of reference to note is that the Bill and Melinda Gates Foundation had about $240 million in "management and general expenses" for about $5 billion in "total program expenses" (which I assume is grants made, but I haven't checked). Open Phil is relatively lean right now, but if the EA community hits the point where it is granting several billion dollars a year, it might make sense for our grantmaking institutions to also be operating at >$100M/year scale.

Another point of reference is that a number of US universities each spend >$1B/year on research. Now, they probably have large sources of inefficiency and ways costs can be cut, but otoh they probably also have ways that productivity can be increased by spending money.

And in terms of scope, it does naively seem like EA has enough sufficiently important questions that the equivalent of a single top-tier US university would not be quite enough to solve all of them.

So on balance I'm not convinced that EA research couldn't usefully absorb more than (potentially much more than) $100M/year, but of course our current research institutions are not quite designed for scaling*

*see scalably using labor

Most research/advocacy charities are not scalable

I agree this is a big issue, and my impression is many grantmakers agree.

Hmm I'd love to see some survey results or a more representative sample. I often have trouble telling whether my opinions are contrarian or boringly mainstream! 

Another benchmark would be something like offsetting CO2, which is most likely positive for existential risk and could be done at a huge scale. Personally, I hope we can find things that are a lot better than this, so I don't think it's the most relevant benchmark - more of a lower bound.

I wonder if this is better or worse than buying up fractions of AI companies?

In some ways, meta seems more straightforward - the benchmark should be can you produce more than 1 unit of resources (NPV) per unit that you use?

I think I agree, but I'm not confident about this, because this feels maybe too high-level? "1 unit" seems much more heterogeneous and less fungible when the resources we're thinking of is "people" or (worse) "conceptual breakthroughs" (as might be the case for cause prio work), and there are lots of ways that things are in practice pretty hard to compare, including but not limited to sign flips.
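The benchmark from the parent comment can be sketched numerically, under the (contested, per the above) assumption that resources reduce to a single fungible unit. The cash flows and discount rate here are made-up illustrations, not estimates of any real project:

```python
# Toy sketch of the meta benchmark: does a meta project generate more than
# 1 unit of resources, in net present value, per unit spent today?
# All figures below are hypothetical illustrations.

def npv(flows, discount_rate):
    """Net present value of yearly resource flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(flows))

cost_now = 1.0                             # 1 unit of resources spent today
resources_generated = [0, 0.4, 0.5, 0.6]   # hypothetical yearly returns
rate = 0.10                                # hypothetical discount rate

value = npv(resources_generated, rate)
print(value > cost_now)  # the project clears the benchmark only if NPV > cost
```

The heterogeneity worry above shows up precisely in the `resources_generated` line: if the returns are "people" or "conceptual breakthroughs" rather than dollars, collapsing them into one number (or even getting the sign right) is the hard part.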

How are resources in EA allocated across issues?

I think dollars are much more fungible than careers, so for most people, you should move your donations away from global health if and only if you believe that marginal donations to other charities are more cost-effective. "Neglectedness" is just a heuristic, and not a very strong one.

Most research/advocacy charities are not scalable

Epistemic status: Moderate opinion, held weakly.

I think one thing that people, both in and outside of EA orgs, find confusing is that we don't have a sense of how high the standards of marginal cost-effectiveness ought to be before it's worth scaling at all. Related concepts include "Open Phil's last dollar" and "quality standards."

In global health I think there's a clear minimal benchmark (something like "$s given to GiveDirectly at >$10B/year scales"), but it's not clear to me whether people should bother creating scalable charities that are slightly better in expectation (say 2x) than GiveDirectly, or whether they ought to have a plausible case for competing with marginal Malaria Consortium or AMF or deworming donations (which I think are estimated, given current disease burdens, the moral value of life vs economic benefits, etc, to be ~5-25x(?) the impact of GiveDirectly).

In longtermism I think the situation is murkier. There's no minimal baseline at all (except maybe GiveDirectly again, which now relies more on moral beliefs than on empirical beliefs about the world), so I think people are just quite confused in general about whether what scales looks more like "90th percentile climate change intervention" vs "has a plausible shot of being the most important AI alignment intervention."

In animal welfare it's somewhere in between. I think corporate campaigns a) look like a promising marginal use of money and b) our uncertainty about their impact ranges over more like 2 orders of magnitude (rather than ~1 for global health and ~infinite for longtermism). But comparing scalable interventions to existing corporate campaigns is premised on there not being lots of $s that'd flood the animal welfare space in the future, and I think this is a quite uncertain proposition in practice.

Meta is at least as confused as the object-level charities, because you're multiplying the uncertainty of doing the meta work by the uncertainty of how it feeds into the object-level work, so it should be more confused, not less.

Personally, my own best guess is that when people are confused about what quality standards to aim at, they default to either a) sputtering around or b) doing the highest-quality things possible, instead of consciously and carefully thinking about what things can scale while maintaining (or accepting slightly worse than) current quality, which means we currently implicitly overestimate the value of the last EA dollar.

I'm inside-view pretty convinced last-dollar uncertainty is a really big deal in practice, yet many grantmakers seem to disagree (see eg comments here); I'm not sure where the intuition differences lie.
