RyanCarey · 3mo · 27 points

EA Highschool Outreach Org (see Catherine's post
[https://forum.effectivealtruism.org/posts/L5t3EPnWSj7D3DpGt/high-school-ea-outreach],
Buck's post
[https://forum.effectivealtruism.org/posts/HcaB2kJKhxJtS4oGc/some-thoughts-on-ea-outreach-to-high-schoolers],
and my comment on EA teachers
[https://forum.effectivealtruism.org/posts/HcaB2kJKhxJtS4oGc/some-thoughts-on-ea-outreach-to-high-schoolers?commentId=WhPpB6ZcohbmEtDJq])
Running a literal school would be awesome, but seems too demanding of time and
organisational resources to do right now. Assuming we did want to do that
eventually, what would be a suitable smaller step? Founding an organisation with
vetted staff, working full-time on promoting analytical and altruistic thinking
to high-schoolers - professionalising in this way increases the safety and
reputability of these programs. Its activities should be targeted at top
schools, and could include, in increasing order of duration:
1. One-off outreach talks at top schools
2. Summer programs in more countries, and in more subjects, and with more of an
altruistic bent (i.e. variations on SPARC and Eurosparc)
3. Recurring classes in things like philosophy, econ, and EA. Teaching by
visitors could be arranged by liaising with school teachers, similarly to how
external teachers are brought in for chess classes.
4. After-school, or weekend, programs for interested students
I'm not confident this would go well, given the various reports from Catherine's
recap and Buck's further theorising. But targeting the right students and
bringing in the right speakers gives it a chance of success. If you get to
(3)-(4), all is going well, and the number of interested teachers and students
is rising, it would be very natural for the org to scale into a school proper.
Buck · 3mo · 16 points

[This is an excerpt from a longer post I'm writing]
Suppose someone’s utility function is
U = f(C) + D
Where U is what they’re optimizing, C is their personal consumption, f is their
selfish welfare as a function of consumption (log is a classic choice for f),
and D is their amount of donations.
Suppose that they have diminishing utility wrt (“with respect to”) consumption
(that is, df(C)/dC is strictly monotonically decreasing). Their marginal utility
wrt donations is a constant, and their marginal utility wrt consumption is a
decreasing function. There has to be some level of consumption where they are
indifferent between donating a marginal dollar and consuming it. Below this
level of consumption, they’ll prefer consuming dollars to donating them, and so
they will always consume them. And above it, they’ll prefer donating dollars to
consuming them, and so will always donate them. And this is why the GWWC pledge
asks you to input the C such that df(C)/dC = 1, and you pledge to donate
everything above it and nothing below it.
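As a quick numerical sketch of that corner solution (my own illustration, not
something from the post, assuming the classic choice f(C) = a·log(C), so the
indifference point df/dC = a/C = 1 sits at C* = a):

```python
# Minimal sketch of the "corner" policy implied by U = f(C) + D,
# under the assumed calibration f(C) = a*log(C) with a in dollars,
# so the indifference point df/dC = a/C = 1 gives C* = a.

def allocate(income: float, a: float) -> tuple[float, float]:
    """Consume up to the threshold C* = a; donate everything above it."""
    c_star = a                        # solves a / C = 1
    consumption = min(income, c_star)
    donation = max(income - c_star, 0.0)
    return consumption, donation

# e.g. if f is calibrated so C* = $40k/year, someone earning $100k consumes
# $40k and donates $60k, while someone earning $30k donates nothing.
print(allocate(100_000, a=40_000))   # -> (40000, 60000)
print(allocate(30_000, a=40_000))    # -> (30000, 0.0)
```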
This is clearly not what happens. Why? I can think of a few reasons.
* The above is what you get if the selfish and altruistic parts of you
  “negotiate” once, before you find out how high your salary is going to be. If
  instead, you negotiate every year over how to split that year's resources
  between altruistic and selfish ends, you get something like what we see (see
  the sketch after this list).
* People aren’t scope sensitive about donations, and so donations also have
diminishing marginal returns (because small ones are disproportionately good
at making people think you’re good).
* When you’re already donating a lot, other EAs will be less likely to hold
consumption against you (perhaps because they want to incentivize rich and
altruistic people to hang out in EA without feeling judged for only donating
90% of their $10M annual expenditure or whatever).
* When you’re high income, expensive time-money tradeoffs like business class
  flights […]
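To make the first bullet above a bit more concrete, here is a minimal sketch
contrasting the two policies; the fixed "fair share" fraction is a made-up
parameter of mine, not something from the post. The point is only that a yearly
split produces donations that scale smoothly with income, rather than the
all-or-nothing corner solution derived above.

```python
# Hypothetical comparison of the two negotiation policies, with made-up numbers.

def negotiate_once(income: float, c_star: float) -> float:
    """Donations under the one-time negotiation: everything above C*."""
    return max(income - c_star, 0.0)

def negotiate_yearly(income: float, altruistic_share: float) -> float:
    """Donations if each year's income is split in a fixed 'fair share' ratio."""
    return altruistic_share * income

for income in (30_000, 60_000, 120_000, 240_000):
    print(income,
          negotiate_once(income, c_star=40_000),
          negotiate_yearly(income, altruistic_share=0.10))
# The first policy donates nothing until income passes C* and everything after;
# the second donates a steady 10%, which looks more like what GWWC pledgers do.
```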
nora · 3mo · 15 points

Below, I briefly discuss some motivating reasons, as I see them, to foster more
interdisciplinary thought in EA. This includes ways EA's current set of research
topics might have emerged for suboptimal reasons.
MORE EA-RELEVANT INTERDISCIPLINARY RESEARCH: WHY?
The ocean of knowledge is vast. But the knowledge commonly referenced within EA
and longtermism represents only a tiny fraction of this ocean.
I argue that EA's knowledge tradition is skewed for reasons that include, but
are not limited to, the epistemic merit of those bodies of knowledge. There are
good reasons for EA to focus on certain areas:
* Direct relevance (e.g. if you're trying to do good, it seems clearly relevant
to look into philosophy a bunch; if you're trying to do good effectively, it
seems clearly relevant to look into economics (among others) a bunch; if you
came to think that existential risks are a big deal, it is clearly relevant
to look into bioengineering, international relations, etc. a bunch; etc.)
* Evidence of epistemic merit (e.g. physics has more evidence for epistemic
  merit than psychology, which in turn has more evidence for epistemic merit
  than astrology; in other words, beliefs gathered from different fields are
  likely to pay more/less rent
  [https://www.lesswrong.com/tag/making-beliefs-pay-rent], or are likely to be
  more/less explanatorily virtuous [https://arxiv.org/abs/2006.02359])
However, some of the reasons we’ve ended up with our current foci may not be as
good:
* Founder effects [https://en.wikipedia.org/wiki/Founder_effect]
* The partly arbitrary way academic disciplines have been carved up
* Inferential distances between knowledge traditions that hamper the free
diffusion of knowledge between disciplines and schools of thought
Having a skewed knowledge base is problematic. There is a significant
likelihood that we are missing out on insights or perspectives that might
critically advance our undertaking. We don’t know what we don’t know.
RyanCarey · 3mo · 14 points

Making community-building grants more attractive
An organiser from Stanford EA asked me today how community building grants could
be made more attractive. I have two reactions:
1. Specialised career pathways. To the extent that this can be done without
compromising effectiveness, community-builders should be allowed to build
field-specialisations, rather than just geographic ones. Currently,
community-builders might hope to work at general outreach orgs like CEA and
80k. But general orgs will only offer so many jobs. Casting the net a bit
wider, many activities of Forethought Foundation, SERI, LPP, and FLI are
field-specific outreach. If community-builders take on some semi-specialised
kinds of work in AI, policy, or econ (in connection with these orgs or
independently), then this would aid their prospects of working for such orgs
or of returning to a more mainstream pathway.
2. "Owning it". To the extent that community building does not offer a
specialised career pathway, the fact that it's a bold move should be
incorporated into the branding. The Thiel Fellowship offers $100k to ~2
dozen students per year to drop out of their programs and work on a startup
that might change the world. Not everyone will like it, but it's bold, it's
a round and reasonably-sized number, with a name attached and a dedicated
website. Imagine a "MacAskill fellowship" that offers $100k for a student
from a top university to pause their studies and spend one year focusing on
promoting prioritisation and long-term thinking - it'd be a more attractive
path.
nora · 3mo · 14 points

The below provides definitions and explanations of "domain scanning" and
"epistemic translation", in an attempt to add further gears to how
interdisciplinary research works.
DOMAIN SCANNING AND EPISTEMIC TRANSLATION
I suggest understanding domain scanning and epistemic translation as a specific
type of research that both plays (or ought to play) an important role as part of
a larger research process, and can be usefully pursued as “its own thing”.
DOMAIN SCANNING
By domain scanning, I mean the activity of searching through diverse bodies and
traditions of knowledge with the goal of identifying insights, ontologies or
methods relevant to another body of knowledge or to a research question (e.g. AI
alignment, Longtermism, EA).
I call source domains those bodies of knowledge from which insights are drawn.
The body of knowledge that we are trying to inform through this approach is
called the target domain. A target domain can be as broad as an entire field or
subfield, or as narrow as a specific research problem (in which case I often use
the term target problem instead of target domain).
Domain scanning isn’t about comprehensively surveying the entire ocean of
knowledge, but instead about selectively scouting for “bright spots” - domains
that might importantly inform the target domain or problem.
An important rationale for domain scanning is the belief that model selection is
a critical part of the research process. By model selection, I mean the way we
choose to conceptualize a problem at a high-level of abstraction (as opposed to,
say, working out the details given a certain model choice). In practice,
however, this step often doesn’t happen at all because most research happens
within a paradigm that is already “in the water”.
As an example, say an economist wants to think about a research question related
to economic growth. They will think about how to model economic growth and will
make choices according to the shape of their research problem. They might […]