I think this depends on empirical questions about the returns to more compute for a single mind. If the mind is closely based on a human brain, it might be pretty hard to get much out of more compute, so duplication might have better returns. If the mind is not based on a human brain, it seems hard to say how this shakes out.
I'm not sure I'm fully following, but I think the "almost exactly the same time" point is key (and I was getting at something similar with "However, note that this doesn't seem to have happened in ~13.77 billion years so far since the universe began, and according to the above sections, there's only about 1.5 billion years left for it to happen before we spread throughout the galaxy"). The other thing is that I'm not sure the "observation selection effect" does much to make this less "wild": anthropically, it seems much more likely that we'd be in a later-in-time, higher-population civilization than an early-in-time, low-population one.
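To give a rough sense of the arithmetic behind that last claim, here is a minimal self-sampling sketch. Both population figures below are illustrative assumptions, not estimates from the post:

```python
# Minimal self-sampling sketch: treating yourself as a random draw from
# all observers who will ever exist, how likely is it to be this early?
# Both figures below are illustrative assumptions, not real estimates.

humans_so_far = 1e11           # ~100 billion humans born to date (rough figure)
total_future_observers = 1e30  # assumed population of a galaxy-spanning civilization

p_this_early = humans_so_far / total_future_observers
print(f"Chance of being among the first 1e11 observers: {p_this_early:.0e}")
# -> 1e-19: under these assumptions, an early-in-time, low-population
# civilization is anthropically very surprising relative to a later,
# higher-population one.
```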
Working on that!
If we have advanced AI that is capable of constructing a digital human simulation, wouldn't it also, by extension, be advanced enough to be conscious on its own, without the need for anything approximating human beings? I can imagine humans wanting to create copies of themselves for various purposes, but isn't it much more likely that completely artificial, silicon-first entities would take over the galaxy? Those entities wouldn't have the need for any human pleasures and could thus conquer the universe much more efficiently than any "digital humans" ever could.
It ... (read more)
Thanks, I agree it's not ideal, but haven't found a way to change the color of that button between light and dark mode.
No need to follow any unusual commenting norms! The "cold" nature of the blog is due to my style and schedule, not a request for others.
I'm not sure I follow this. I think if there were extraterrestrials who were going to stop us from spreading, we'd likely see signs of them (e.g., mining the stars for energy, setting up settlements), regardless of what speed they traveled while moving between stars.
I think your last comment is the key point for me - what's wild is how early we are, compared to the full galaxy population across time.
I think it's wild if we're living in the century (or even the 100,000 years) that will produce a misaligned AI whose values come to fill the galaxy for billions of years. That would just be quite a remarkable, high-leverage (due to the opportunity to avoid misalignment, or at least have some impact on what the values end up being) time period to be living in.
I'm not sure I can totally spell it out - a lot of this piece is about the raw intuition that "something is weird here."
One Bayesian-ish interpretation is given in the post: "The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher." In other words, there is something "suspicious" about a view that implies that we are in an unusually important position - it's the kind of view that seems (by default) more likely to be generated by wi... (read more)
Ben, that sounds right to me. I also agree with what Paul said. And my intent was to talk about what you call temporal wildness, not what you call structural wildness.
I agree with both you and Arden that there is a certain sense in which the "conservative" view seems significantly less "wild" than my view, and that a reasonable person could find the "conservative" view significantly more attractive for this reason. But I still want to highlight that it's an extremely "wild" view in the scheme of things, and I think we shouldn't impose an inordinate burden of proof on updating from that view to mine.
(Response to both AppliedDivinityStudies and branperr)
My aim was to argue that a particular extreme sort of duplication technology would have extreme consequences, which is important because I think technologies that are "extreme" in the relevant way could be developed this century. I don't think the arguments in this piece point to any particular conclusions about biological cloning (which is not "instant"), natalism, etc., which have less extreme consequences.
It seems very non-obvious to me whether we should think bad outcomes are more likely than good ones. You asked about arguments for why things might go well; a couple that occur to me are (a) as long as large numbers of digital people are committed to protecting human rights and other important values, it seems like there is a good chance they will broadly succeed (even if they don't manage to stop every case of abuse); (b) increased wealth and improved social science might cause human rights and other important values to be prioritized more highly, and might help people coordinate more effectively.
I broadly agree with this. The point of my post was to convey intuitions for why "a world of [digital people] will be so different from modern nations states just as modern states are from chimps," not to claim that the long-run future will be just as described in Age of Em. I do think despite the likely radical unfamiliarity of such a world, there are properties we can say today it's pretty likely to have, such as the potential for lock-in and space colonization.
Thanks for the thoughtful comments, Linch.
Response on point 1: I didn't mean to send a message that one should amass the most impressive conventional credentials possible in general - only that for many of these aptitudes, conventional success is an important early sign of fit and potential.
I'm generally pretty skeptical by default of advanced degrees unless one has high confidence that one wants to be on a track where the degree is necessary (I briefly give reasons for this skepticism in the "political and bureaucratic aptitudes" section). This piece only... (read more)
I like this; I agree with most of what you say about this kind of work.
I've tried to mostly list aptitudes that one can try out early on, stick with if they're going well, and pretty reliably build careers around (though not necessarily direct-work longtermist careers). I think the aptitude you're describing here might be more of a later-career/"secondary" aptitude that often develops as someone moves up along an "organization building/running/boosting" or "political/bureaucratic" track. But I agree it seems like a cluster of skills that can be intentionally developed to some degree and used in a lot of different contexts.
Thanks for the thoughtful comments!
On your first point: I chose to emphasize longtermism because:
I think a year of full-time work is likely enough to see the sort of "signs of life" I alluded to, but it could take much longer to fulfill one's potential. I'd generally expect a lot of people in this category to see steady progress over time on things like (a) how open-ended and poorly scoped a question they can tackle, which in turn affects how important a question they can tackle; (b) how efficiently and thoroughly they can reach a good answer; (c) how well they can communicate their insights; (d) whether they can hire and train other people to do c... (read more)
This general idea seems pretty promising to me.
I didn't mean to express a view one way or the other on particular current giving opportunities; I was instead looking for something a bit more general and timeless to say on this point, since especially in longtermism, giving opportunities can sometimes look very appealing at one moment and much less so at another (partly due to room-for-more-funding considerations). I think it's useful for you to have noted these points, though.
This is still a common practice. The point of it isn't to evaluate employees by # of hours worked; the point is for their manager to have a good understanding of how time is being used, so they can make suggestions about what to go deeper on, what to skip, how to reprioritize tasks, etc.
Several employees simply opt out from this because they prefer not to do it. It's an optional practice for the benefit of employees rather than a required practice used for performance assessment.
I'm referring to the possibility of supporting academics (e.g. philosophers) to propose and explore different approaches to moral uncertainty and their merits and drawbacks. (E.g., different approaches to operationalizing the considerations listed at https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy#Allocating_capital_to_buckets_and_causes , which may have different consequences for how much ought to be allocated to each bucket)
Keep in mind that Milan worked for GiveWell, not OP, and that he was giving his own impressions rather than speaking for either organization in that post.
That said:
*His "Flexible working schedule" point sounds pretty consistent with how things are here.
*We continue to encourage time tracking (but we don't require it and not everybody does it).
*We do try to explicitly encourage self-care.
Does that respond to what you had in mind?
GiveWell's CEA was produced by multiple people over multiple years - we wouldn't expect a single person to generate the whole thing :)
I do think you should probably be able to imagine yourself engaging in a discussion over some particular parameter or aspect of GiveWell's CEA, and trying to improve that parameter or aspect to better capture what we care about (good accomplished per dollar). Quantitative aptitude is not a hard requirement for this position (there are some ways the role could evolve that would not require it), but it's a major plus.
The role does include all three of those things, and I think all three things are well served by the job qualifications listed in the posting. A common thread is that all involve trying to deliver an informative, well-calibrated answer to an action-relevant question, largely via discussion with knowledgeable parties and critical assessment of evidence and arguments.
In general, we have a list of the projects that we consider most important to complete, and we look for good matches between high-ranked projects and employees who seem well suited to them. I ex... (read more)
We do formal performance reviews twice per year, and we ask managers to use their regular (~weekly) checkins with reports to sync up on performance such that nothing in these reviews should be surprising. There's no unified metric for an employee's output here; we set priorities for the organization, set assignments that serve these priorities, set case-by-case timelines and goals for the assignments (in collaboration with the people who will be working on them), and compare output to the goals we had set.
All bios here: https://www.openphilanthropy.org/about/team
Grants Associates and Operations Associates are likely to report to Derek or Morgan. Research Analysts are likely to report to people who have been in similar roles for a while, such as Ajeya, Claire, Luke and Nick. None of this is set in stone though.
A few things that come to mind:
The work is challenging, and not everyone is able to perform at a high enough level to see the career progression they want.
The culture tends toward direct communication. People are expected to be open with criticism, both of people they manage and of people who manage them. This can be uncomfortable for some people (though we try hard to create a supportive and constructive context).
The work is often solitary, consisting of reading/writing/analysis and one-on-one checkins rather than large-group collaboration. It's possible that this will change for some roles in the future, but we're not sure of that.
We don't control the visa process and can't ensure that people will get sponsorship. We don't expect sponsorship requirements to be a major factor for us in deciding which applicants to move forward with.
There will probably be similar roles in the future, though I can't guarantee that. To become a better candidate, one can accomplish objectively impressive things (especially if they're relevant to effective altruism); create public content that gives a sense for how they think (e.g., a blog); or get to know people in the effective altruism community to increase the odds that one gets a positive & meaningful referral.
Most of the roles here involve a lot of independent work, consisting of reading/writing/analysis and one-on-one checkins rather than large-group collaboration. It’s possible that this will change for some roles in the future (e.g. it’s possible that we’ll want more large-group collaboration as our cause prioritization team grows), but we’re not sure of that. I think you should probably be prepared for a fair amount of work along the lines of what I've described here.
They're different organizations and I don't know nearly as much about the GiveWell role. One big difference is the causes we work on.
If you're interested in both, I'd recommend applying to both, and if you are offered both roles, there will be lots of opportunities to learn more about each at that point in order to inform the decision.
I answered a similar question here: http://effective-altruism.com/ea/1mf/hi_im_holden_karnofsky_ama_about_jobs_at_open/dpl
In general, people who have been in the Research Analyst role for a while will be the managers and primary mentors of new Research Analysts. There will be regular (~weekly) scheduled checkins as well as informal interaction as needed (e.g., over Slack).
There's no hard line between training and "just doing the work" - every assignment should have some direct value and some training value. We expect to lean pretty hard toward t... (read more)
Yes, I mean statutory holidays like Thanksgiving.
We're flexible. People don't clock in or out; we evaluate performance based on how much people get done on a timescale of months. We encourage people to work hard but also prioritize work-life balance. The right balance varies by the individual.
Most people here work more than one would in a traditional 9-5 job. (A common figure is 35-40 "focused" hours per week.) I think that reflects that they're passionate about their work rather than that they feel pressure from management to work a lot. We regularly check in with people about work-life balance and encourage them to work less if it seems this would be good for their happiness.
We're in the process of reviewing our policies, but we're likely to settle on something like 25 paid days off (including sick days), 10 holiday days (with the option to work on holidays and use the paid time off elsewhere), several months of paid parental leave, and a flexible unpaid leave policy for people who want to take more time off. We are also flexible with respect to working from home.
Perhaps other staff will chime in here, but my take: our pay is competitive and takes cost of living into account, and we are near public transportation, so I don't think the rents or commutes are a major issue. As a former NYC resident, I think the Bay Area is a great place to live (weather, food, etc.) and has a very strong effective altruist community. I don't see a lot of drawbacks to living here if you can make it work.
Hm, I'm not sure why our form asks for more detail on undergrad relative to grad - we copied the form from GiveWell and may not have thought about it. It's possible this is because the form was being used in an earlier GiveWell search where few applicants had been to grad schools. I'll ask around about this.
Broadly speaking, we're going to try to give people assignments that are relevant to our work and that we think include a lot of the core needed skills - things like evaluating a potential grant (or renewal) and writing up the case for or against. We'll evaluate these assignments, give substantial feedback, and iterate so that people improve. We'll also be providing resources for gaining background knowledge, such as "flex time," recommended reading lists and optional Q&A sessions. We've seen people improve a lot in the past and become core contributors, and think this basic approach is likely to lead to more of that.
I would rate those about equally, though I'd add that GiveWell would prefer not to hire people whose main goal is to go to OP.
We currently have a happy hour every 3 weeks and host group activities as well, including occasional parties and a multiple-day staff retreat this year. We want to make it easy for staff to socialize and be friends, without making it a requirement or an overly hard nudge (if people would rather stick to their work, that's fine by us).
We could certainly imagine ramping up grantmaking without a much better answer. As an institution we're often happy to go with a "hacky" approach that is suboptimal, but captures most of the value available under multiple different assumptions.
If someone at Open Phil has an idea for how to make useful progress on this kind of question in a reasonable amount of time, we'll very likely find that worthwhile and go forward. But there are lots of other things for Research Analysts to work on even if we don't put much more time into researching or reflecting on moral uncertainty.
Also note that we may pursue an improved understanding via grantmaking rather than via researching the question ourselves.
All else equal, we consider applicants stronger when they have degrees in challenging fields from strong institutions. It’s not the only thing we’re looking at, even at that early stage. And the early stage is for filtering; ultimately, things like work trial assignments will be far more important to hiring decisions.
This varies by the individual. Some Research Analysts are always working on a variety of things, and some have become quite specialized; it largely depends on the interests and preferences of the employee.
We're certainly not using the same standards as academia! In general, we aim to base assignments on a combination of (1) how we judge what's most important to do (in terms of accomplishing as much good as possible) and (2) what employees themselves are motivated and interested to work on (including their own judgments of how to do as much good as possible).
I'd recommend that recent grads looking to help with AI governance and policy apply for the Research Analyst position. With Research Analysts, we'll first focus on mentorship & training, then try to figure out where everyone can do the most good based on their interests and skills. Someone with a high aptitude for, and interest in, AI strategy would likely end up putting substantial time into that within a year or so (maybe less).
You can also check out roles at the Future of Humanity Institute.
I super highly recommend reading this report in full, including many of the appendices (and footnotes :) ).
I thought it was really interesting, and helpful both for thinking this question through and for understanding the state of the evidence and arguments that are out there (unfortunately, there is even less to go on than I'd expected).
I was the most proximate audience for the report, so discount my recommendation as much as feels appropriate with that in mind.
The principles were meant as descriptions, not prescriptions.
I'm quite sympathetic to the idea expressed by your Herbert Simon quote. This is part of what I was getting at when I stated: "I think that one of the best ways to learn is to share one's impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things." But because the risks are what they are, I've concluded that public discourse is currently the wrong venue fo... (read more)
Michael, this post wasn't arguing that there are no benefits to public discourse; it's describing how my model has changed. I think the causal chain you describe is possible and has played out that way in some cases, but it seems to call for "sharing enough thinking to get potentially helpful people interested" rather than for "sharing thinking and addressing criticisms comprehensively (or anything close to it)."
The EA Forum counts for me as public discourse, and I see it as being useful in some ways, along the lines described in the post.
Thanks for all the thoughts on this point! I don't think the comparison to currency is fair (the size of today's economy is a real quantity, not a nominal one), but I agree with William Kiely that the "several economies per atom" point is best understood as an intuition pump rather than an airtight argument. I'm going to put a little thought into whether there might be other ways of communicating how astronomically huge some of these numbers are, and how odd it would be to expect 2% annual growth to take us there and beyond.
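For anyone who wants to check the arithmetic behind that intuition pump, here's a minimal sketch. The atom count (~10^70 atoms in the galaxy) is a rough order-of-magnitude assumption, not a precise figure:

```python
import math

# How many years of 2% annual growth until the economy is ~1e70 times
# bigger, i.e., roughly one present-day economy per atom in the galaxy?
# The atom count is a rough order-of-magnitude assumption.

growth_rate = 0.02
atoms_in_galaxy = 1e70

years = math.log(atoms_in_galaxy) / math.log(1 + growth_rate)
print(f"Years of 2% growth to a 1e70x larger economy: {years:,.0f}")
# -> ~8,139 years: tiny on galactic timescales, which is why expecting
# steady 2% growth "to take us there and beyond" would be so odd.
```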
One thought: it is possible that... (read more)