I'm an independent researcher, hobbyist forecaster, programmer, and aspiring effective altruist.
In the past, I've studied Maths and Philosophy (dropping out in exasperation at the inefficiency); picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, and 2019, and SPARC during 2020; worked as a contractor on various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and I remain keenly interested in Spanish poetry.
I like to spend my time acquiring deeper models of the world, and a good fraction of my research is available on nunosempere.github.io.
With regards to forecasting, I am LokiOdinevich on GoodJudgementOpen and Loki on CSET-Foretell, and I have been running a Forecasting Newsletter since April 2020. I also enjoy winning bets against people who are too confident in their beliefs.
I was a Future of Humanity Institute 2020 Summer Research Fellow, and I'm working on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." You can share feedback anonymously with me here.
To resolve that prediction, yes, I would say that that interpretation is correct.
CAUMFs might act as a fairly small layer on top of other services
If this were as easy as downloading a package over npm, it would seem like an obviously good idea. But overall my impression is that the overhead in terms of legal headaches and coordination required might be too great.
Could we do this as a DAO?
The thing this might be pointing at is that writing smart contracts to manipulate money currently seems more convenient on a blockchain than ~anywhere else. Like, I'm sure that Stripe has some API, but implementing something like a dominant assurance contract with a normal money API would be arduous, whereas it's doable on some blockchain. It could even be the Binance blockchain (which is not decentralized), for all I care.
Overall my impression is that this might be an idea to keep in mind, but that the infrastructure is just not there yet.
My hot take: This seems like a somewhat big deal to me. It's what I would have predicted, but that's scary, given my timelines.
Might be confirmation bias. But is it?
But if you already have this coalition value function, you've already solved the coordination problem and there’s no reason to actually calculate the Shapley value! If you know how much total value would be produced if everyone worked together, in realistic situations you must also know an optimal allocation of everyone’s effort. And so everyone can just do what that optimal allocation recommended.
This seems correct
A related claim is that the Shapley value is no better than any other solution to the bargaining problem. For example, instead of allocating credit according to the Shapley value, we could allocate credit according to the rule “we give everyone just barely enough credit that it’s worth it for them to participate in the globally optimal plan instead of doing something worse, and then all the leftover credit gets allocated to Buck”, and this would always produce the same real-life decisions as the Shapley value.
This misses some considerations around cost-efficiency/prioritization. If you look at your distorted "Buck values", you come away thinking that Buck is super cost-effective: responsible for a large fraction of the optimal plan using just one salary. If we didn't have a mechanistic understanding of why that was, trying to get more Buck would become an EA cause area.
In contrast, if credit was allocated according to Shapley values, we could look at the groups whose Shapley value is the highest, and try to see if they can be scaled.
The section about "purely local" Shapley values might be pointing to something, but I don't quite know what it is, because the example seems to just be Shapley values but missing a term? I don't know. You also say "by symmetry...", and then break that symmetry by saying that one of the parts would have been able to create $6,000 in value and the other $0. This needs a crisper example.
Re: coordination between people who have different values using SVs, I have some stuff here, but looking back the writing seems too corny.
Lastly, to some extent, Shapley values are a reaction to people calculating their impact as their counterfactual impact. This leads to double/triple counting impact for some organizations/opportunities, but not others, which makes comparisons between them trickier. Shapley values solve that by allocating impact such that it sums to the total impact, and they have other nice properties besides. Then someone like Open Philanthropy or some EA fund can come and see which groups have the highest Shapley value (perhaps the highest Shapley value per unit of money/resources) and then try to replicate or scale them. People might also make better decisions if they compare Shapley values instead of counterfactual values (because Shapley values mostly give a more accurate impression of the impact of a position.)
So I see the benefits of Shapley values as fixing some common mistakes arising from using counterfactual values. This would make impact accounting slightly better, and coordination slightly better to the extent it relies on impact accounting for prioritization (which tbh might not be much.)
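As a toy illustration of the allocation property described above, here is a minimal sketch of how Shapley values are computed; the `funder`/`charity` players and the $100 figure are made up for the example:

```python
from itertools import permutations

def shapley_values(players, v):
    """Compute Shapley values by averaging each player's marginal
    contribution over all orderings of the players. `v` maps a
    frozenset (coalition) to the total value it produces."""
    values = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            values[p] += v(with_p) - v(coalition)
            coalition = with_p
    n = len(orderings)
    return {p: total / n for p, total in values.items()}

# Made-up example: a funder and a charity each produce nothing alone,
# but $100 of impact together. Counterfactual reasoning would credit
# each with the full $100 (double counting); Shapley splits it 50/50,
# so credit sums to the total impact.
v = lambda s: 100.0 if s == frozenset({"funder", "charity"}) else 0.0
print(shapley_values(["funder", "charity"], v))
# {'funder': 50.0, 'charity': 50.0}
```

Note that the credit assigned sums to the value of the grand coalition, which is exactly the property that prevents double counting.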
I'm not sure to what extent I agree with the claim that people are overhyping/misunderstanding Shapley values. It seems plausible.
I think that some of your anti-expected-value beef can be addressed by considering stochastic dominance as a backup decision theory in cases where expected value fails.
For instance, maybe I think that a donation to ALLFED in expectation leads to more lives saved than a donation to a GiveWell charity. But you could point out that the expected value is undefined, because maybe the future contains infinite amounts of both flourishing and suffering. Then donating to ALLFED can still be the superior option if I think that it's stochastically dominant.
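As a minimal sketch of what a (first-order) stochastic dominance check looks like for discrete lotteries; the lotteries `A` and `B` below are made up for illustration:

```python
def fosd(a, b):
    """Return True if lottery `a` first-order stochastically dominates
    lottery `b`: at every threshold, a's chance of doing at least that
    well is >= b's. Lotteries are lists of (outcome, probability)."""
    outcomes = sorted({x for x, _ in a} | {x for x, _ in b})
    def cdf(lottery, t):
        return sum(p for x, p in lottery if x <= t)
    # a dominates b iff a's CDF lies everywhere at or below b's CDF
    return all(cdf(a, t) <= cdf(b, t) for t in outcomes)

# Made-up example: A shifts probability toward the better outcome,
# so A dominates B regardless of how (or whether) we take expectations.
A = [(0, 0.1), (10, 0.9)]
B = [(0, 0.5), (10, 0.5)]
print(fosd(A, B))  # True
print(fosd(B, A))  # False
```

The point is that this comparison only needs the cumulative probabilities, not a finite expected value, so it still delivers a verdict in cases where expectations are undefined.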
There are probably also tweaks to make to stochastic dominance, e.g., if you have two "games",
then one could also have a principle where Game 1 is preferable to Game 2 if X > Y, and this also sidesteps some more expected value problems.
Notes on: A Sequence Against Strong Longtermism
Summary for myself. Note: Pretty stream-of-thought.
Proving too much
Overall: The core of this section seems to be that expected values are sometimes undefined. I agree, but this doesn't deter me from trying to do the most good by seeking more speculative/longtermist interventions. I can use stochastic dominance when expected utility fails me.
The post also takes issue with the following paragraph from The Case For Strong Longtermism:
Then, using our figure of one quadrillion lives, the expected good done by Shivani contributing $10,000 to [preventing world domination by a repressive global political regime] would, by the lights of utilitarian axiology, be 100 lives. In contrast, funding for the Against Malaria Foundation, often regarded as the most cost-effective intervention in the area of short-term global health improvements, on average saves one life per $3500. (Nuño: italics and bold from the OP, not from original article)
I agree that the paragraph just intuitively looks pretty bad, so I looked at the context:
Now, the argument we are making is ultimately a quantitative one: that the expected impact one can have on the long-run future is greater than the expected impact one can have on the short run. It's not true, in general, that options that involve low probabilities of high stakes systematically lead to greater expected values than options that involve high probabilities of modest payoffs: everything depends on the numbers. (For instance, not all insurance contracts are worth buying.) So merely pointing out that one might be able to influence the long run, or that one can do so to a nonzero extent (in expectation), isn't enough for our argument. But, we will claim, any reasonable set of credences would allow that for at least one of these pathways, the expected impact is greater for the long-run.

Suppose, for instance, Shivani thinks there's a 1% probability of a transition to a world government in the next century, and that $1 billion of well-targeted grants — aimed (say) at decreasing the chance of great power war, and improving the state of knowledge on optimal institutional design — would increase the well-being in an average future life, under the world government, by 0.1%, with a 0.1% chance of that effect lasting until the end of civilisation, and that the impact of grants in this area is approximately linear with respect to the amount of spending. Then, using our figure of one quadrillion lives to come, the expected good done by Shivani contributing $10,000 to this goal would, by the lights of a utilitarian axiology, be 100 lives. In contrast, funding for Against Malaria Foundation, often regarded as the most cost-effective intervention in the area of short-term global health improvements, on average saves one life per $3500.
Yeah, this is in the context of a thought experiment. I'd still do this with distributions rather than with point estimates, but ok.
The Credence Assumption
The Poverty of Longtermism
(Edited to add Centre for the Study of Existential Risk Four Month Report June - September 2020 to the CSER sources)
Here is an update post on Metaforecast.