Based on vaguely remembered hearsay, my heuristic has been that the large AI labs like DeepMind and OpenAI spend roughly as much on compute as they do on people, which would make for a ~2x increase in costs relative to salaries alone. Googling around doesn't immediately turn up any great sources, although this page says "Cloud computing services are a major cost for OpenAI, which spent $7.9 million on cloud computing in the 2017 tax year, or about a quarter of its total functional expenses for that year".
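As a rough sketch of that arithmetic (taking the quoted ~25% figure at face value, and treating the compute ≈ salaries heuristic as an assumption rather than an established figure):

```python
# Back-of-envelope arithmetic for the figures above (illustrative only).

cloud_spend_2017 = 7.9e6        # reported OpenAI cloud-computing spend, 2017 tax year
cloud_share = 0.25              # "about a quarter of its total functional expenses"

implied_total_expenses = cloud_spend_2017 / cloud_share
print(f"Implied 2017 total expenses: ~${implied_total_expenses / 1e6:.0f}M")  # ~$32M

# The compute ~= salaries heuristic: if compute spending roughly equals salary
# spending, total costs come out at roughly 2x the salary bill alone.
salaries = 1.0                  # arbitrary unit
compute = salaries              # assumed roughly equal
print(f"Total / salaries: {(salaries + compute) / salaries:.1f}x")  # 2.0x
```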
I'd be curious to get a better estimate, if anyone knows anything relevant.
There may be reasons why building such $100m+ projects is different both from Open Phil's many smaller "hits-based" grants (as a high chance of failure is unacceptable) and from GiveWell-style interventions.
One reason is that orgs like OpenAI and CSET require such scale just to get started, e.g. to interest the people involved.
This makes it sound like CSET is a $100m+ project. Their Open Phil grant was for $11m/year for 5 years, and Wikipedia says they got a couple of million from other sources, so my guess is that they're currently spending something like $10m-$20m / year.
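A minimal sketch of where that guess comes from (the annualised "other sources" figure is an assumption on my part):

```python
# Rough annual-spending estimate for CSET (illustrative assumptions only).

open_phil_per_year = 11e6   # $11m/year for 5 years from Open Phil
other_per_year = 2e6        # assumption: the "couple of million" from other sources, annualised

rough_annual_spend = open_phil_per_year + other_per_year
print(f"~${rough_annual_spend / 1e6:.0f}M / year")  # ~$13M, within the $10m-$20m guess
```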
This page has some statistics on Open Phil's giving (though it is noted to be preliminary): https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy
Sweden has a “Ministry of the Future,”
Unfortunately, this is now a thing of the past. It only lasted 2014-2016. (Wikipedia on the minister post: https://en.wikipedia.org/wiki/Minister_for_Strategic_Development_and_Nordic_Cooperation )
The last two should be 10^11 - 10^12 and 10^11, respectively?
This has been discussed on LessWrong here: www.lesswrong.com/posts/xBAeSSwLFBs2NCTND/do-you-vote-based-on-what-you-think-total-karma-should-be
There were strong opinions on both sides, with a majority of people saying they think about current karma levels occasionally, but not always.
It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear.
Agreed.
There are many longtermists who don't hold these views (e.g. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists).
Yeah this seems right, maybe with the caveat that Will has (as far as I know) mostly expressed skepticism about this being the most influential century, and I'd guess he does think this century is unusually influential, or at least unusually likely to be unusually influential.
And yes, I also agree that the quoted views are very extreme, and that longtermists at most hold weaker versions of them.
Granted, there are probably longtermists who do hold these views, but these views are not longtermism. I don't know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are still not longtermism.
I haven't read the top-level post (thanks for summarising!); but in general, I think this is a weak counterargument. If most people in a movement (or academic field, or political party, etc.) hold a rare belief X, it's perfectly fair to criticise the movement for believing X. If the movement claims that X isn't a necessary part of their ideology, it's polite for a critic to note that X isn't necessarily endorsed as the stated ideology, but it's important that their critique of the movement is still taken seriously. Otherwise, any movement could choose a definition that avoids mentioning the most objectionable parts of its ideology without changing its beliefs or actions (similar to the motte-and-bailey fallacy). In this case, the author seems to be directly worried about longtermists' beliefs and actions; he isn't just disputing the philosophy.
As a toy example, say that f is some bounded sigmoid function, and my utility function is to maximize f(x), where x is something like the number of happy lives; it's always going to be the case that f(x+1) > f(x), so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging.
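A quick numerical sketch of this toy example (the specific sigmoid, its scale parameter, and the mugger's numbers are all invented here for illustration): with a bounded, strictly increasing f, more lives are always better, but the expected utility of a tiny-probability astronomical payoff stays small.

```python
import math

def f(x):
    """A bounded, strictly increasing sigmoid: f(x) lies in (0, 1)."""
    return 1 / (1 + math.exp(-x / 1e9))  # saturates around a billion lives (arbitrary scale)

# Scope sensitivity: more lives is always strictly better.
assert f(2e9) > f(1e9) > f(1e6) > f(0)

# Pascal's mugging: a 1-in-10^20 chance of 10^40 lives.
p, payoff = 1e-20, 1e40
mugging_eu = p * f(payoff) + (1 - p) * f(0)
certain_eu = f(1e6)  # saving a million lives for sure
print(mugging_eu < certain_eu)  # True: boundedness caps the value of the mugger's offer
```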
This seems right to me.
I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.
Yeah, I have no quibbles with this. FWIW, I personally didn't interpret the passage as saying this, so if that's what's meant, I'd recommend reformulating.
(To gesture at where I'm coming from: "in expectation bring about more paperclips" seems much more specific than "in expectation increase some function defined over the number of paperclips"; and I assumed that this statement was similar, except pointing towards the physical structure of "intuitively valuable aspects of individual lives" rather than the physical structure of "paperclips". In particular, "intuitively valuable aspects of individual lives" seems like a local phenomenon rather than something defined over world-histories, and you kind of need to define your utility function over world-histories to represent risk aversion.)
With a bunch of unrealistic assumptions (like constant cost-effectiveness), the counterfactual impact should be (impact/resource - opportunity cost/resource) * resource.
If impact/resource is much bigger than opportunity cost/resource (so that the latter is negligible), this is roughly equal to impact/resource * resource, which is one reading of cost-effectiveness * scale.
If so, assuming that resource = $ in this case, this roughly translates to the heuristic "if the opportunity cost of money isn't that high (compared to your project), you should optimise for total impact without thinking much about the monetary costs".
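To make that concrete, here's a toy worked example (all numbers invented): when the opportunity cost per dollar is small relative to the project's impact per dollar, counterfactual impact comes out close to cost-effectiveness * scale.

```python
# Toy numbers for the counterfactual-impact formula (purely illustrative).

resource = 1_000_000          # $ spent on the project
impact_per_dollar = 0.010     # impact units per $ for this project
opp_cost_per_dollar = 0.0005  # impact units per $ the money would have bought elsewhere

counterfactual = (impact_per_dollar - opp_cost_per_dollar) * resource
approximation = impact_per_dollar * resource  # cost-effectiveness * scale

print(counterfactual)   # 9500.0
print(approximation)    # 10000.0 -- close, since the opportunity cost is comparatively small
```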