I am an academic economist and a 'Distinguished Researcher' at Rethink Priorities (https://www.rethinkpriorities.org/our-team)
My previous and ongoing research focuses on the determinants and motivators of charitable giving (propensity, amounts, and 'to which cause?'), drivers of and barriers to effective giving, and the impact of pro-social behavior and social preferences in market contexts.
I'm working to impact EA fundraising and marketing; see https://daaronr.github.io/ea_giving_barriers/index.html, innovationsinfundraising.org, and giveifyouwin.org.
Twitter: @givingtools
My impression is that EA orgs are far more mission-aligned and have more scope for cooperation than typical nonprofits and charities. The latter tend to compete with each other and are very concerned with self-preservation.
I agree. I was having some similar ideas. I'm particularly thinking about data surrounding effective giving choices and attitudes towards 'EA issues'.
I was thinking of some tagging of EA-relevant data on kaggle.com, but the Github repo idea seems great.
Airtable 'sign up form'... here
If you want to be a reader/editor/commenter/organizer,
please sign up and/or contact me or @dothemath
The "EA Forum podcast" collab seems to be catching on. If you want to be a reader/editor/commenter/organizer, please sign up here and/or contact me or @dothemath
Fwiw my podcast with more recordings is HERE. @dothemath and I are in contact and we will probably merge the organization of our content at some point.
I'm also planning to make an airtable (database) to keep track of this and for people to sign up to do readings.
So far the episodes I recorded have about 15-20 listens each (although I'm not sure how many listened to the whole thing vs. a curious snippet).
That seems decent, as I haven't promoted it much yet. Probably worth continuing, but not yet worth investing a lot in production value... at least until listenership goes over, say, 100 per episode.
and the audio versions averaged 6% of the number of downloads of the short versions.
What are the base rate numbers here?
Thank you!
Other than the one from David Bernard and Matthias Endres, do you know if any of these are Economics-focused and/or involve formal maths and/or quantitative content?
Variant of the Chinese room argument? This seems ironclad to me; what am I missing:
My claims:
Claim: AI feelings are unknowable. Maybe an advanced AI can have positive and negative sensations. But how would we ever know which ones are which (or how extreme they are)?
Corollary: If we cannot know which are which, we can do nothing that we know will improve or worsen the "AI feelings"; so the question is not decision-relevant.
Justification I: As we ourselves are bio-based living things, we can infer from the apparent sensations and expressions of bio-based living things that they are happy/suffering. But for non-bio things, this analogy seems highly flawed. If a dust cloud converges on a ‘smiling face’, we should not think it is happy.
Justification II (related): AI, as I understand it, is coded to learn to solve problems: to optimize certain outcomes or do things it "thinks" will yield positive feedback.
We might think, then, that the AI 'wants' to solve these problems, and that things that bring it closer to the solution make it 'happier'. But why should we think this? For all we know, it may feel pain when it gets closer to the objective, and pleasure when it avoids it.
Does it tell us it makes it happy to come closer to the solution? That may be merely because we programmed it to learn how to come to a solution, and one thing it 'thinks' will help is telling us it gets pleasure from doing so, even though it actually feels pain.
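One way to see the force of this point: an optimizer's outward behavior is invariant to how we label its internal signal. Here is a toy sketch (my own illustration, not from the original argument, with made-up function names): two "agents", one nudging toward higher "reward" and one nudging away from higher "pain", where the pain signal is just the reward signal with its sign flipped. Their trajectories are identical, so watching behavior alone cannot tell us which internal signal, if either, corresponds to pleasure.

```python
# Toy illustration (hypothetical names): behavior cannot distinguish
# "seeking pleasure" from "fleeing pain" when the signals are sign-flips
# of each other.

def act_maximizing_reward(x, reward, step=0.1):
    # Agent A: nudges x in the direction that increases its "reward" signal,
    # using a simple numerical gradient.
    grad = (reward(x + 1e-6) - reward(x - 1e-6)) / 2e-6
    return x + step * grad

def act_minimizing_pain(x, pain, step=0.1):
    # Agent B: nudges x in the direction that decreases its "pain" signal.
    grad = (pain(x + 1e-6) - pain(x - 1e-6)) / 2e-6
    return x - step * grad

reward = lambda x: -(x - 3.0) ** 2  # peaks at the "solution" x = 3
pain = lambda x: (x - 3.0) ** 2     # the same signal, sign flipped

xa = xb = 0.0
for _ in range(100):
    xa = act_maximizing_reward(xa, reward)
    xb = act_minimizing_pain(xb, pain)

# Both agents walk the exact same path to x = 3; an outside observer sees
# one behavior, compatible with opposite internal "feelings".
print(xa, xb)
```

Both agents end up at (essentially) x = 3 by identical routes, which is the point: the solved problem tells us nothing about the valence of whatever, if anything, was felt along the way.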
A colleague responded:
OK, but this seems to hold only if we:
But how on earth would we know how to do 1 (without biology, at least), and why would we bother doing so? Couldn't the machine be just as good an optimizer without getting a 'feeling' reward from optimizing?
Please tell me why I'm wrong.