HIGHLIGHTS
The "most important century" series (Holden
Karnofsky) (summarized by Rohin): In some sense, it is
really weird for us to claim that there is a non-trivial chance
that in the near future, we might build transformative
AI and either (1) go extinct or (2) exceed a growth rate
of (say) 100% per year. It feels like an extraordinary claim, and
thus should require extraordinary evidence. One way of cashing this
out: if the claim were true, this century would be the most
important century, with the most opportunity for individuals to
have an impact. Given the sheer number of centuries there are, this
is an extraordinary claim; it should really have extraordinary
evidence. This series argues that while the claim does seem
extraordinary, all views seem extraordinary --
there isn’t some default baseline view that is “ordinary” to which
we should be assigning most of our probability.
Specifically, consider three possibilities for the long-run
future:
1. Radical: We will have a
productivity explosion by 2100, which will enable us to become
technologically mature. Think of a civilization that sends
spacecraft throughout the galaxy, builds permanent settlements on
other planets, harvests large fractions of the energy output from
stars, etc.
2. Conservative: We get to a
technologically mature civilization, but it takes hundreds or
thousands of years. Let’s say even 100,000 years to be ultra
conservative.
3. Skeptical: We never become
technologically mature, for some reason. Perhaps we run into
fundamental technological limits, or we choose not to expand into
the galaxy, or we’re in a simulation, etc.
It’s pretty clear why the radical view is extraordinary. What
about the other two?
The conservative view implies that we are currently in the most
important 100,000-year period. Given that life is billions of years
old, and would presumably continue for billions of years to come
once we reach a stable galaxy-wide civilization, that would make
this the most important 100,000 year period out of tens of
thousands of such periods. Thus the conservative view is also
extraordinary, for the same reason that the radical view is
extraordinary (albeit perhaps only half as extraordinary as the radical view).
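As a quick sanity check on that count, here is a minimal sketch; the ~4 billion year figure is my own round assumption for illustration:

```python
# Count of 100,000-year periods in the history of life so far.
years_of_life = 4e9    # assumption: life on Earth is ~4 billion years old
period_length = 1e5    # the "ultra conservative" window from above
print(f"{years_of_life / period_length:,.0f} periods")  # 40,000
```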
The skeptical view by itself does not seem obviously
extraordinary. However, while you could reasonably assign (say) 70%
probability to the skeptical view, it seems unreasonable to assign
it 99% probability -- that would require some very strong or
confident claims about what prevents us from colonizing the galaxy,
claims that we probably shouldn’t make given our current knowledge.
So we need to put a non-trivial chunk of probability on the other
views, which still opens us up to the critique of making
extraordinary claims.
Okay, so we’ve established that we should at least be willing to
say something as extreme as “there’s a non-trivial chance we’re in
the most important 100,000-year period”. Can we tighten the
argument to talk about the most important century?
In fact, we can, by looking at the economic growth rate.
You are probably aware that the US economy grows around 2-3% per
year (after adjusting for inflation), so a business-as-usual,
non-crazy, default view might be to expect this to continue. You
are probably also aware that exponential growth compounds very
quickly. At the lower end of 2% per year, the economy would double
every ~35 years. If this continued for 8200 years, we'd need to be
sustaining multiple economies as big as today's entire world
economy per atom in our galaxy. While this is not a priori
impossible, it seems quite unlikely to happen. This suggests that
we’re in one of fewer than 82 centuries that will have growth rates
of 2% or more, making it far less “extraordinary” to claim that
we’re in the most important one, especially if you believe that
growth rates are well correlated with change and the ability to
have an impact.
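If you want to check this arithmetic yourself, here is a minimal sketch; the ~1e68 atoms-in-our-galaxy figure is a rough order-of-magnitude assumption on my part, not a number from the series:

```python
import math

growth_rate = 0.02                          # 2% real growth per year
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"doubling time: {doubling_time:.0f} years")          # ~35 years

growth_factor = (1 + growth_rate) ** 8200   # total growth over 8200 years
atoms_in_galaxy = 1e68                      # rough assumed estimate
print(f"growth factor: {growth_factor:.1e}")                # ~3e70
print(f"economies per atom: {growth_factor / atoms_in_galaxy:.0f}")  # hundreds
```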
The actual radical view that the author places non-trivial
probability on is one we’ve seen before in this newsletter: it is
one in which there is automation of science and technology through
advanced AI or whole brain emulations or other possibilities. This
allows technology to substitute for human labor in the economy,
which produces a positive feedback loop: the economy’s output is
ploughed back into it, creating superexponential growth and a
“productivity explosion” where the growth rate
increases far beyond 2%. The series summarizes and connects
together many past Open Phil analyses (AN #105, AN #154, AN #121,
AN #118, AN #145), which I won't be summarizing here (since we've
summarized these analyses previously). While this is a more
specific and “extraordinary” claim than even the claim that we live
in the most important century, it seems like it should not be seen
as so extraordinary given the arguments above.
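To see how such a feedback loop differs from business-as-usual growth, here is a toy model of my own (an illustrative sketch, not the series’ analysis): in the baseline, the growth rate stays fixed at 2%; in the feedback version, the growth rate scales with output, standing in for reinvested output buying more automated "labor".

```python
def output_after(years, feedback=False):
    y = 1.0                                    # world output, normalized to 1
    for _ in range(years):
        rate = 0.02 * y if feedback else 0.02  # feedback: more output means
        y *= 1 + rate                          # a faster growth rate
    return y

for t in (20, 35, 45):
    print(f"year {t}: baseline {output_after(t):.2f}, "
          f"feedback {output_after(t, feedback=True):.2f}")
# The baseline roughly doubles by year 35; the feedback version pulls
# away by year 45 and blows up soon after (superexponential growth).
```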
This series also argues for a few other points important to
longtermism, which I’ll copy here:
1. The long-run future is radically
unfamiliar. Enough advances in technology could lead
to a long-lasting, galaxy-wide civilization that could be a radical
utopia, dystopia, or anything in between.
2. The long-run future could come much faster than
we think, due to a possible AI-driven productivity
explosion. (I briefly mentioned this above, but the full series
devotes much more space and many more arguments to this point.)
3. We, the people living in this century, have the chance to
have a huge impact on huge numbers of people to come - if we can
make sense of the situation enough to find helpful actions. But
right now, we aren't ready for this.
Read more: 80,000
Hours podcast on the topic