I see enormous value in it and think it should be considered seriously.
On the other hand, the huge amount of value in it is also a reason I'm skeptical that it's obviously achievable: there are already individual giant firms that would gain multi-million-dollar annual savings internally (not to mention the many billions the first firm marketing something like that would immediately earn) from having a convenient, simple, secure stack 'for everything', yet none seems to have anything close to it (though I guess many may have something like it in some sub-systems/niches).
So I'm just wondering whether we might underestimate the cost of development/use - despite my gut feeling strongly agreeing that it seems like such a tractable problem.
I find it a GREAT idea (have not tested it yet)!
Thank you! I was actually always surprised by H's mention of the taxation case as an example where maximin would be (readily) applicable.
IMHO, exactly what he explains in the rest of the article can also be used to see why optimal taxation/public finance should use a maximin principle as the proxy rule for a good redistributive process only in exceptional cases.
On the other hand, if you asked me whether I'd be happy if our actual, very flawed tax/redistribution systems were reformed to conform to the maximin - yes, I'd possibly very happily agree, simply as the lesser of two evils. And maybe that's part of the point; in this case, fair enough!
I find this a rather challenging post, even if I like the high-level topic a lot! I didn't read the entire linked paper, but I'd be keen to understand whether you think you can make a concise, simple argument as to why my following view may be missing something crucial that immediately follows from the Harsanyi vs. Rawls debate (if you think it does; feel free to ignore):
The Harsanyi 1975 paper, which your linked post also cites (and which I recommend to any EA), is a great and rather complete rebuttal of Rawls' core maximin claim. The maximin principle, if taken seriously, can trivially be seen to lead to all sorts of preposterous choices that are quite miraculously improved by adding a smaller or larger portion of utilitarianism (one by no means needs to be a full utilitarian to agree with this); end of story.
Just re anxiety prevalence: it seems to me that anxiety is a kind of continuum, and you could say 50% of people suffer from anxiety, or 5%, depending on where you set the cutoff. Your description implicitly seems to support exactly this view ("Globally, 284 million people—3.8% of all people—have anxiety disorders. Other estimates suggest that this might be even higher: according to the CDC, 11% of U.S. adults report regular feelings of worry, nervousness, or anxiety and ~19% had any anxiety disorder in the past year according to the NIH and Anxiety and Depression Association of America."), plus maybe the anxiety associations like to quote impressive numbers for their domain. => It could be useful to find more tangible ways of expressing what's going on anxiety-wise in how many heads.
I also wonder about the same thing. The Further Pledge does not answer this particular desire: committing to limited personal annual consumption while potentially saving for particular - or yet-to-be-defined - causes later on. This can also make sense if one believes one's future view on what to donate towards will be significantly more enlightened.
I could see such a pledge not to consume above X/year being valued not overly much by third parties, as we cannot trust our future selves that much, I guess; and even investing in one's own endeavors, even if officially EA, might at times be quite self-indulgent in some ways.
Still, I guess it would be possible to invest one's money into an EA-aligned fund that would later be able to disburse money only to aligned causes, including, possibly, one's own project. That could provide some value in some situations.
Maybe it would be easier, and worthwhile, to simply have an organization collecting pledges (and accompanying verification) not to spend more than X/year; I think there might be a bunch of people interested in that.
Surprised to see nothing (did I overlook it?) about the people vs. the project/job. The title, and the lead sentence,
Some people seem to achieve orders of magnitudes more than others in the same job.
suggest the work focuses essentially on people's performance, but already in the motivational examples
For instance, among companies funded by Y Combinator the top 0.5% account for more than ⅔ of the total market value; and among successful bestseller authors [wait, it's their books, no?], the top 1% stay on the New York Times bestseller list more than 25 times longer than the median author in that group.
(emphasis and [] added by me)
I don't think I have seen it explicitly discussed whether it is the people, or rather the exact project (the startup, the book(s)) they work on, that is the successful element, although the outcome is a sort of product of the two. Theoretically, in one (obviously wrong) extreme case: maybe all Y Combinator CEOs were similarly performing persons, but some of the startups were simply the right projects!
My gut feeling is that making this fundamental distinction explicit would make the discussion/analysis of performance more tractable.
Addendum:
Of course, you can say that book writers, scientists, and startup founders choose each time anew which next book or paper to write, etc., and that this choice is part of their 'performance', so looking at their output's performance is all there is. But this would be at most half-true in the more general sense of comparing the persons' general capabilities, as there are very many drivers that lead persons to very specific high-level domains (of business, of book genres, etc.) and/or to very specific niches therein, and these may have at least as much to do with personal interest, haphazard personal history, etc.
Thanks, I think antipathy effects towards the name “Effective Altruism”, or worse, “I’m an effective altruist”, are difficult to overstate.
Also, somewhat related to what you write, I happened to think to myself just today: "I am (and most of us are) just as much an effective egoist as an effective altruist" - after all, even the holiest of us probably cannot always help putting a significantly higher weight on our own welfare than on that of average strangers.
Nevertheless, a potential upside of the current term - equally, I'm not sure it matters much at all, but I attribute a small chance to it being really important: if some people are kept away by the name's somewhat geeky, partly unfashionable connotation, maybe these are exactly the people who would anyway mostly be distractors. I think the rather narrow EA community has this extraordinary vibe along a few really important dimensions, and it seems invaluable (in that sense, while RyanCarey mentions we may not attract the core audience with different names, I find the problem might be more the other way round: we might simply dilute the core).
Maybe I'm completely overestimating this, and maybe it doesn't at all outweigh the downside of attracting/appealing to fewer people. But in a world where the lack of fruitful communication threatens entire social systems, maybe having a particularly strong core in that regard is highly valuable.
I miss a clear definition of economic growth here, and the discussion strongly reminds me of the environmental-resources-focused critique of growth that started with the Club of Rome's 1970s Limits to Growth; there might be value in examining the huge literature on such topics that has been produced ever since.
Economic growth = increase in market value is a typical definition.
Market value can increase if we paint the grey houses pink, or indeed if we design good computer games, or if we find great drugs that constantly awe us in insanely great ways without downsides. Or maybe indeed if we can duplicate/simulate brains which derive lots of value, say, literally out of thin air - and if we decide to take their blissful state into account in our growth measure too.
If we all have our basic needs met, and are rich way beyond that, willingness to pay for some new services may become extremely huge, even for the least important services - merely because we have nothing else to do with our wealth, and because we're willing to pay so little on the margin for the traditional 'basic' goods, which are (in my scenario assumed to be) abundant and cheaply produced.
So the quantitative long-run extent of "economic growth" then becomes a somewhat arbitrary thing: economic growth potentially being huge, while the true extra value is possibly limited.
'Economic growth' may therefore be too intangible, too arbitrary a basis for discussing the long-run fate of human (or whatever supersedes us) development.
Maybe we should return to directly discussing limits to increases in utility (as some comments here already do).