ETHICS & POLICY
- The Fable of the Dragon-Tyrant - Recounts the Tale of a most vicious Dragon that ate thousands of people every day, and of the actions that the King, the People, and an assembly of Dragonologists took with respect thereto. [J Med Ethics, Vol. 31, No. 5 (2005): 273-277] [translations: German, Hebrew, Finnish, Spanish, French, Italian, Slovenian, Dutch, Russian] [html] [pdf] [mp3]
- The Reversal Test: Eliminating Status Quo Bias in Applied Ethics - We present a heuristic for correcting for one kind of bias (status quo bias), which we suggest affects many of our judgments about the consequences of modifying human nature. We apply this heuristic to the case of cognitive enhancements, and argue that the consequentialist case for this is much stronger than commonly recognized. (w/ Toby Ord) [Ethics, Vol. 116, No. 4 (2006): 656-680] [pdf]
- Astronomical Waste: The Opportunity Cost of Delayed Technological Development - Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives... [Utilitas, Vol. 15, No. 3 (2003): 308-314] [translation: Russian] [html] [pdf]
- Infinite Ethics - Cosmology shows that we might well be living in an infinite universe that contains infinitely many happy and sad people. Given some assumptions, aggregative ethics implies that such a world contains an infinite amount of positive value and an infinite amount of negative value. But you can presumably do only a finite amount of good or bad. Since an infinite cardinal quantity is unchanged by the addition or subtraction of a finite quantity, it looks as though you can't change the value of the world. Aggregative consequentialism (and many other important ethical theories) is threatened by total paralysis. We explore a variety of potential cures, and discover that none works perfectly and all have serious side-effects. Is aggregative ethics doomed? [Analysis and Metaphysics, Vol. 10 (2011): 9-59] [Original draft was available in 2003.] [html] [pdf]
- The Unilateralist's Curse: The Case for a Principle of Conformity
- In cases where several altruistic agents each have an opportunity to undertake some initiative, a phenomenon arises that is analogous to the winner's curse in auction theory. To combat this problem, we propose a principle of conformity. It has applications in technology policy and many other areas. [w/ Anders Sandberg & Tom Douglas] [Working paper (2013); Social Epistemology, in press] [pdf]
- Dignity and Enhancement - Does human enhancement threaten our dignity, as some have asserted? Or could our dignity perhaps be technologically enhanced? After disentangling several different concepts of dignity, this essay focuses on the idea of dignity as a quality (a kind of excellence admitting of degrees). The interactions between enhancement and dignity as a quality are complex and link into fundamental issues in ethics and value theory. [In Human Dignity and Bioethics: Essays Commissioned by the President’s Council on Bioethics (Washington, D.C., 2008): 173-207] [pdf]
- In Defense of Posthuman Dignity - A brief paper that critiques a host of bioconservative pundits who believe that enhancing human capacities and extending human healthspan would undermine our dignity. [Bioethics, Vol. 19, No. 3 (2005): 202-214] [translations: Italian, Slovenian, Portuguese] [Was chosen for inclusion in a special anthology of the best papers published in this journal in the past two decades] [html] [pdf]
- Human Enhancement - Original essays by various prominent moral philosophers on the ethics of human enhancement. [Eds. Nick Bostrom & Julian Savulescu (Oxford University Press, 2009)]
- Enhancement Ethics: The State of the Debate - The introductory chapter from the book (w/ Julian Savulescu): 1-22 [pdf]
- Human Genetic Enhancements: A Transhumanist Perspective - A transhumanist ethical framework for public policy regarding genetic enhancements, particularly human germ-line genetic engineering. [Journal of Value Inquiry, Vol. 37, No. 4 (2003): 493-506] [html] [pdf]
- Ethical Issues in Human Enhancement - Anthology chapter on the ethics of human enhancement. [In New Waves in Applied Ethics, ed. Jesper Ryberg et al. (Palgrave Macmillan, 2008): 120-152] [w/ Rebecca Roache] [html] [pdf]
- The Ethics of Artificial Intelligence
- Overview of ethical issues raised by the possibility of creating intelligent machines. Questions relate both to ensuring such machines do not harm humans and to the moral status of the machines themselves. [In Cambridge Handbook of Artificial Intelligence, eds. William Ramsey & Keith Frankish (Cambridge University Press, forthcoming)] [w/ Eliezer Yudkowsky] [pdf] [translation: Portuguese]
- Ethical Issues in Advanced Artificial Intelligence - Some cursory notes; not very in-depth. [In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, 12-17] [html] [pdf] [translations: Italian]
- Smart Policy: Cognitive Enhancement and the Public Interest - Short article summarizing some of the key issues and offering specific recommendations, illustrating the opportunity and need for "smart policy": the integration into public policy of a broad spectrum of approaches aimed at protecting and enhancing the cognitive capacities and epistemic performance of individuals and institutions. [In Enhancing Human Capacities, eds. J. Savulescu, R. ter Meulen, and G. Kahane (Wiley-Blackwell, 2011)] [w/ Rebecca Roache] [pdf]
- Recent Developments in the Ethics, Science, and Politics of Life-Extension - A review/commentary on The Fountain of Youth (OUP, 2004). [Aging Horizons, No. 3 (2005): 28-34] [html] [pdf]
TRANSHUMANISM
- Letter from Utopia - The good life: just how good could it be? A vision of the future from the future. [Studies in Ethics, Law, and Technology, Vol. 2, No. 1 (2008): 1-7] [pdf is an improved version (2010), in Nexus Journal] [translations: French, Italian, Spanish] [html] [pdf] [mp3]
- Why I Want to be a Posthuman When I Grow Up - After some definitions and conceptual clarification, I argue for two theses. First, some posthuman modes of being would be extremely worthwhile. Second, it could be good for human beings to become posthuman. [In Medical Enhancement and Posthumanity, eds. Bert Gordijn and Ruth Chadwick (Springer, 2008): 107-137] [pdf]
- The Transhumanist FAQ - The revised version 2.1. The document represents an effort to develop a broadly based consensus articulation of the basics of responsible transhumanism. Some one hundred people collaborated with me in creating this text. [translations: German, Hungarian, Dutch, Russian, Polish, Finnish, Greek, Italian] [pdf]
- Transhumanist Values - Wonderful ways of being may be located in the "posthuman realm", but we can't reach them. If we enhance ourselves using technology, however, we can go out there and realize these values. This paper sketches a transhumanist axiology. [In Ethical Issues for the 21st Century, ed. Frederick Adams, Philosophical Documentation Center Press, 2003; reprinted in Review of Contemporary Philosophy, Vol. 4, May (2005)] [translations: Polish, Portuguese] [html] [pdf]
- A History of Transhumanist Thought - The human desire to acquire new capacities, to extend life and overcome obstacles to happiness is as ancient as the species itself. But transhumanism has emerged gradually as a distinctive outlook, with no one person being responsible for its present shape. Here's one account of how it happened. [Journal of Evolution and Technology, Vol. 14, No. 1 (2005)] [translation: Spanish] [pdf]
RISK & THE FUTURE
- Where Are They? Why I hope the search for extraterrestrial life finds nothing
- Discusses the Fermi paradox, and explains why I hope we find no signs of life, whether extinct or still thriving, on Mars or anywhere else we look. [Technology Review, May/June issue (2008): 72-77] [pdf] [translations: Italian]
- Existential Risk Reduction as Global Priority
- Existential risks are those that threaten the entire future of humanity. This paper elaborates the concept of existential risk and its relation to basic issues in axiology and develops an improved classification scheme for such risks. It also describes some of the theoretical and practical challenges posed by various existential risks and suggests a new way of thinking about the ideal of sustainability. [Global Policy, Vol. 4, No. 3 (2013): 15-31] [translations: Portuguese] [html] [pdf]
- How Unlikely is a Doomsday Catastrophe? - Examines the risk from physics experiments and natural events to the local fabric of spacetime. Argues that the Brookhaven report overlooks an observation selection effect. Shows how this limitation can be overcome by using data on planet formation rates. [w/ Max Tegmark] [expanded; Nature, Vol. 438 (2005): 754] [translations: Russian] [pdf]
- The Future of Humanity - This paper discusses four families of scenarios for humanity’s future: extinction, recurrent collapse, plateau, and posthumanity. [In New Waves in Philosophy of Technology, eds. Jan-Kyrre Berg Olsen, Evan Selinger & Soren Riis (Palgrave Macmillan, 2009)] [pdf] [html]
- Global Catastrophic Risks - Twenty-six leading experts look at the gravest risks facing humanity in the 21st century, including natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues: policy responses and methods for predicting and managing catastrophes. Foreword by Lord Martin Rees. [Eds. Nick Bostrom & Milan Cirkovic (Oxford University Press, 2008)] Introduction chapter free here [pdf]
- The Future of Human Evolution - This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being worth caring about. We then discuss how such outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated effort to control human evolution by adopting social policies that modify the default fitness function of future life forms. [In Death and Anti-Death, ed. Charles Tandy (Ria University Press, 2005)] [pdf] [html]
- Technological Revolutions: Ethics and Policy in the Dark - Technological revolutions are among the most important things that happen to humanity. This paper discusses some of the ethical and policy issues raised by anticipated technological revolutions, such as nanotechnology. [In Nanoscale: Issues and Perspectives for the Nano Century, eds. Nigel M. de S. Cameron & M. Ellen Mitchell (John Wiley, 2007): 129-152] [pdf]
- Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards - Existential risks are ways in which we could screw up badly and permanently. Remarkably, relatively little serious work has been done in this important area. The point, of course, is not to welter in doom and gloom but to better understand where the biggest dangers are so that we can develop strategies for reducing them. [Journal of Evolution and Technology, Vol. 9, No. 1 (2002)] [html] [pdf] [translations: Russian, Belarusian]
- Information Hazards: A Typology of Potential Harms from Knowledge
- Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked. They can, however, be important. [Review of Contemporary Philosophy, Vol. 10 (2011): pp. 44-79 (first version: 2009)] [pdf]
- What is a Singleton? - Concept describing a kind of social structure. [Linguistic and Philosophical Investigations, Vol. 5, No. 2 (2006): 48-54]
TECHNOLOGY ISSUES
- Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?
- Embryo selection during IVF could be vastly potentiated when the technology for stem-cell derived gametes becomes available for use in humans. This would enable iterated embryo selection (IES), compressing the effective generation time in a selection program from decades to months. [w/ Carl Shulman] [Global Policy, Vol. 5, No. 1 (2014): 85-92] [pdf]
- How Hard is AI? Evolutionary Arguments and Selection Effects
- Some have argued that because blind evolutionary processes produced human intelligence on Earth, it should be feasible for clever human engineers to create human-level artificial intelligence in the not-too-distant future. We evaluate this argument. [w/ Carl Shulman] [J. Consciousness Studies, Vol. 19, No. 7-8 (2012): 103-130] [pdf]
- The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement - Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. Here we describe a heuristic for identifying and evaluating the practicality, safety, and efficacy of potential human enhancements, based on evolutionary considerations. [w/ Anders Sandberg] [In Human Enhancement, eds. Julian Savulescu and Nick Bostrom (Oxford University Press, 2009): 365-416] [pdf]
- The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents
- Presents two theses, the orthogonality thesis and the instrumental convergence thesis, that help us understand the possible range of behavior of superintelligent agents, and points to some potential dangers in building such an agent. [Minds and Machines, Vol. 22 (2012): 71-84] [pdf] [translation: Portuguese]
- Whole Brain Emulation: A Roadmap
- A 130-page report on the technological prerequisites for whole brain emulation (aka "mind uploading"). (w/ Anders Sandberg) [Technical Report #2008-3, Future of Humanity Institute, Oxford University (2008)] [pdf]
- Converging Cognitive Enhancements - Cognitive enhancements in the context of converging technologies. [Annals of the New York Academy of Sciences, Vol. 1093 (2006): 201-207] [w/ Anders Sandberg] [pdf]
- Hail Mary, Value Porosity, and Utility Diversification
- Some new ideas related to the challenge of endowing a hypothetical future superintelligent AI with values that would cause it to act in ways that are beneficial. Paper is somewhat obscure. [pdf]
- Racing to the Precipice: a Model of Artificial Intelligence Development
- Game theory model of a technology race to develop AI. Participants skimp on safety precautions to get there first. Analyzes factors that determine level of risk in the Nash equilibrium. [FHI Technical Report #2013-1] [w/ Stuart Armstrong & Carl Shulman] [pdf]
- Thinking Inside the Box: Controlling and Using Oracle AI
- Preliminary survey of various issues related to the idea of using boxing methods to safely contain a superintelligent oracle AI. [w/ Stuart Armstrong and Anders Sandberg] [Minds and Machines, Vol. 22, No. 4 (2012): 299-324] [pdf]
- Future Progress in Artificial Intelligence: A Survey of Expert Opinion
- Some polling data. [In Fundamental Issues of Artificial Intelligence, ed. V. Müller (Synthese Library: Springer, 2014)] [forthcoming] [w/ Vincent Müller] [pdf]
- Cognitive Enhancement: Methods, Ethics, Regulatory Challenges - Cognitive enhancement comes in many diverse forms. In this paper, we survey the current state of the art in cognitive enhancement methods and consider their prospects for the near-term future. We then review some of the ethical issues arising from these technologies. We conclude with a discussion of the challenges for public policy and regulation created by present and anticipated methods for cognitive enhancement. [w/ Anders Sandberg] [Science and Engineering Ethics, Vol. 15 (2009): 311-341] [pdf]
- Are You Living in a Computer Simulation? - This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching the posthuman stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations or variations of their evolutionary history; (3) we are almost certainly living in a computer simulation. It follows that the naïve transhumanist dogma that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. [Philosophical Quarterly, Vol. 53, No. 211 (2003): 243-255] [pdf] [html] [Also a Reply to Brian Weatherson's comments, Philosophical Quarterly, Vol. 55, No. 218 (2005): 90-97; a Reply to Anthony Brueckner, Analysis, Vol. 69, No. 3 (2009): 458-461; and a new paper w/ Marcin Kulczycki, Analysis, Vol. 71, No. 1 (2011): 54-61]
THE NEW BOOK
Superintelligence: Paths, Dangers, Strategies
This is the new book. Buy many copies now!
[Oxford University Press, 2014]
"I highly recommend this book."—Bill Gates
"terribly important ... groundbreaking"
"extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines—engineering, natural sciences, medicine, social sciences and philosophy—into a comprehensible whole"
"If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Springfrom 1962, or ever."—Olle Haggstrom, Professor of Mathematical Statistics
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era."—Stuart Russell, Professor of Computer Science, University of California, Berkley
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." —Martin Rees, Past President, Royal Society
"a magnificent conception ... it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs and by physicists who think there is no point to philosophy." —Brian Clegg, Popular Science
"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." —Financial Times
"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" —Professor Max Tegmark, MIT
"a damn hard read" —The Telegraph
ANTHROPICS & PROBABILITY
- Anthropic Bias: Observation Selection Effects in Science and Philosophy - Failure to consider observation selection effects results in a kind of bias that infests many branches of science and philosophy. This book presents the first mathematical theory for how to correct for these biases. It also discusses implications for cosmology, evolutionary biology, game theory, the foundations of quantum mechanics, the Doomsday argument, the Sleeping Beauty problem, the search for extraterrestrial life, the question of whether God exists, and traffic planning. [Complete book now available for free online; also out as paperback; there is also a brief primer.] [primer translations: Belarusian] [Routledge, 2002]
- Self-Locating Belief in Big Worlds: Cosmology's Missing Link to Observation - Current cosmological theories say that the world is so big that all possible observations are in fact made. But then, how can such theories be tested? What could count as negative evidence? To answer that, we need to consider observation selection effects. [Journal of Philosophy, Vol. 99, No. 12 (2002): 607-623] [html] [pdf]
- The Mysteries of Self-Locating Belief and Anthropic Reasoning - Summary of some of the difficulties that a theory of observation selection effects faces, and sketch of a solution. [Harvard Review of Philosophy, Vol. 11, Spring (2003): 59-74] [pdf]
- Anthropic Shadow: Observation Selection Effects and Human Extinction Risks
- "Anthropic shadow" is an observation selection effect that prevent observers from observing certain kinds of catastrophes in their recent geological and evolutionary past. We risk underestimating the risk of catastrophe types that lie in this shadow. (w/ Milan Cirkovic & Anders Sandberg) [Risk Analysis, Vol. 30, No. 10 (2010): 1495-1506] [Won best paper of the year award by the journal editors] [translation: Russian] [pdf]
- Observation Selection Effects, Measures, and Infinite Spacetimes - An advanced introduction to observation selection theory and its application to the cosmological fine-tuning problem. [In Universe or Multiverse?, ed. Bernard Carr (Cambridge University Press, 2007)] [pdf]
- The Doomsday Argument and the Self-Indication Assumption: Reply to Olum - Argues against Olum and the Self-Indication Assumption. [Philosophical Quarterly, Vol. 53, No. 210 (2003): 83-91] [w/ Milan Cirkovic] [pdf]
- The Doomsday Argument is Alive and Kicking - Have Korb and Oliver refuted the doomsday argument? No. [Mind, Vol. 108, No. 431 (1999): 539-550] [translations: Russian]
- The Doomsday Argument, Adam & Eve, UN++, and Quantum Joe - On the Doomsday argument and related paradoxes. [Synthese, Vol. 127, No. 3 (2001): 359-387] [html] [pdf]
- A Primer on the Doomsday Argument - The Doomsday argument purports to prove, from basic probability theory and a few seemingly innocuous empirical premises, that the risk that our species will go extinct soon is much greater than previously thought. My view is that the Doomsday argument is inconclusive, although not for any trivial reason. In my book, I argued that a theory of observation selection effects is needed to explain where it goes wrong. [Colloquia Manilana (PDCIS), Vol. 7 (1999)] [translations: Russian]
- Sleeping Beauty and Self-Location: A Hybrid Model - The Sleeping Beauty problem is an important test case for theories of self-locating belief. I argue against both of the traditional views on this problem and propose a new synthetic approach. [Synthese, Vol. 157, No. 1 (2007): 59-78] [pdf]
- Cars In the Other Lane Really Do Go Faster - When driving on the motorway, have you ever wondered about (and cursed!) the fact that cars in the other lane seem to be getting ahead faster than you? One might be tempted to account for this by invoking Murphy's Law ("If anything can go wrong, it will", discovered by Edward A. Murphy, Jr, in 1949). But there is an alternative explanation, based on observational selection effects... [PLUS, No. 17 (2001)]
- Cosmological Constant and the Final Anthropic Hypothesis - Examines the implications of recent evidence for a cosmological constant for the prospects of indefinite information processing in the multiverse. Co-authored with Milan M. Cirkovic. [Astrophysics and Space Science, Vol. 279, No. 4 (2000): 675-687] [pdf]
PHILOSOPHY OF MIND
- Quantity of Experience: Brain-Duplication and Degrees of Consciousness? - If two brains are in identical states, are there two numerically distinct phenomenal experiences or only one? Two, I argue. But what happens in intermediary cases? This paper looks in detail at this question and suggests that there can be a fractional (non-integer) number of qualitatively identical experiences. This has implications for what it is to implement a computation and for Chalmers's Fading Qualia thought experiment. [Minds and Machines, Vol. 16, No. 2 (2006): 185-200] [pdf]
DECISION THEORY
- The Meta-Newcomb Problem - A self-undermining variant of the Newcomb problem. [Analysis, Vol. 61, No. 4 (2001): 309-310] [html] [pdf]