Buck · 1y · 80 points
I’ve recently been thinking about medieval alchemy as a metaphor for longtermist
EA.
I think there’s a sense in which it was an extremely reasonable choice to study
alchemy. The basic hope of alchemy was that by fiddling around in various ways
with substances you had, you’d be able to turn them into other things which had
various helpful properties. It would be a really big deal if humans were able to
do this.
And it seems a priori pretty reasonable to expect that humanity could get way
better at manipulating substances, because there was an established history of
people figuring out ways that you could do useful things by fiddling around with
substances in weird ways, for example metallurgy or glassmaking, and we have
lots of examples of materials having different and useful properties. If you had
been particularly forward-thinking, you might even have noted that it seems
plausible that we’ll eventually be able to do the full range of manipulations of
materials that life is able to do.
So I think that alchemists deserve a lot of points for spotting a really big and
important consideration about the future. (I actually have no idea if any
alchemists were thinking about it this way; that’s why I billed this as a
metaphor rather than an analogy.) But they weren’t really very correct about how
anything worked, and so most of their work before 1650 was pretty useless.
It’s interesting to think about whether EA is in a similar spot. I think EA has
done a great job of identifying crucial and underrated considerations about how
to do good and what the future will be like, eg x-risk and AI alignment. But I
think our ideas for acting on these considerations seem much more tenuous. And
it wouldn’t be super shocking to find out that later generations of longtermists
think that our plans and ideas about the world are similarly inaccurate.
So what should you have done if you were an alchemist in the 1500s who agreed
with this argument that you had some really underrated consideration…
Buck · 1y · 62 points
Edited to add: I think that I phrased this post misleadingly; I meant to
complain mostly about low-quality criticism of EA rather than e.g. criticism of
comments. Sorry to be so unclear. I suspect most commenters misunderstood me.
I think that EAs, especially on the EA Forum, are too welcoming to low quality
criticism [EDIT: of EA]. I feel like an easy way to get lots of upvotes is to
make lots of vague critical comments about how EA isn’t intellectually rigorous
enough, or inclusive enough, or whatever. This makes me feel less enthusiastic
about engaging with the EA Forum, because it makes me feel like everything I’m
saying is being read by a jeering crowd who just want excuses to call me a
moron.
I’m not sure how to have a forum where people will listen to criticism
open-mindedly without this leading to a bias towards low-quality criticism.
Lukas_Gloor · 1y · 47 points
[Takeaways from Covid forecasting on Metaculus]
I’m probably going to win the first round of the Li Wenliang forecasting
tournament
[https://www.metaculus.com/questions/?search=contest:covid-19-forecasting-tournament]
on Metaculus, or maybe get second. (My screen name shows up in second on the
leaderboard, but it’s a glitch that’s not resolved yet because one of the
resolutions depends on a strongly delayed source.) (Update: I won it!)
With around 52 questions, this was the largest forecasting tournament on the
virus. It ran from late February until early June.
I learned a lot during the tournament. Next to claiming credit, I want to share
some observations and takeaways from this forecasting experience, inspired by
Linch Zhang’s forecasting AMA
[https://forum.effectivealtruism.org/posts/83rHdGWy52AJpqtZw/i-m-linch-zhang-an-amateur-covid-19-forecaster-and]
:
* I did well at forecasting, but it came at the expense of other things I
wanted to do. In February, March and April, Covid had completely absorbed me.
I spent several hours per day reading news and had anxiety about regularly
updating my forecasts. This was exhausting; I was relieved when the
tournament came to an end.
* I had previously dabbled in AI forecasting. Unfortunately, I can’t tell if I
excelled at it because the Metaculus domain for it went dormant. In any case,
I noticed that I felt more motivated to delve into Covid questions because
they seemed more connected. It felt like I was not only learning random
information to help me with a single question, but I was acquiring a kind of
expertise. (Armchair epidemiology? :P ) I think this impression was due to a
mixture of perhaps suboptimal question design for the AI Metaculus domain and
the increased difficulty of picking up useful ML intuitions on the go.
 * One thing I think I’m good at is identifying reasons why past trends might
   change. I’m always curious to understand the underlying reasons behind some…
RyanCarey · 10mo · 43 points
Translating EA into Republican. There are dozens of EAs in US party politics,
Vox, the Obama administration, Google, and Facebook, but hardly any in the
Republican party, working for the WSJ, appointed by Trump, or working for
Palantir. There are a dozen community groups in places like NYC, SF, Seattle,
Berkeley, Stanford, Harvard, and Yale, but none in Dallas, Phoenix, Miami, the
US Naval Laboratory, the West Point Military Academy, etc - the
libertarian-leaning GMU economics department being a sole possible exception.
This is despite the fact that people passing through military academies would be
disproportionately likely to work on technological dangers in the military and
public service, and that admission to them is less competitive than to more
liberal colleges.
I'm coming to the view that similarly to the serious effort to rework EA ideas
to align with Chinese politics and culture, we need to translate EA into
Republican, and that this should be a multi-year, multi-person project.
Linch · 1y · 42 points
Here are some things I've learned from spending the better part of the last 6
months either forecasting or thinking about forecasting, with an eye towards
beliefs that I expect to be fairly generalizable to other endeavors.
Note that I assume that anybody reading this already has familiarity with
Philip Tetlock's work on (super)forecasting, particularly Tetlock's 10
commandments for aspiring superforecasters
[https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/#16_Tetlocks_Ten_Commandments_for_Aspiring_Superforecasters].
1. Forming (good) outside views is often hard but not impossible. I think there
is a common belief/framing in EA and rationalist circles that coming up with
outside views is easy, and the real difficulty is a) originality in inside
views, and also b) a debate of how much to trust outside views vs inside views.
I think this is directionally true (original thought is harder than synthesizing
existing views) but it hides a lot of the details. It's often quite difficult to
come up with and balance good outside views that are applicable to a situation.
See Manheim
[https://www.lesswrong.com/posts/SxpNpaiTnZcyZwBGL/multitudinous-outside-views]
and Muehlhauser
[https://www.lesswrong.com/posts/iyRpsScBa6y4rduEt/model-combination-and-adjustment]
for some discussions of this.
2. For novel out-of-distribution situations, "normal" people often trust
centralized data/ontologies more than is warranted. See here
[https://twitter.com/LinchZhang/status/1303120305771040776] for a discussion. I
believe something similar is true for trust of domain experts, though this is
more debatable.
3. The EA community overrates the predictive validity and epistemic superiority
of forecasters/forecasting.
(Note that I think this is an improvement over the status quo in the broad