Emergent technologies often inspire great excitement, attended by utopian visions of how they will transform our lives for the better. Yet all innovations introduce risk and the likelihood of unforeseen consequences. The transhumanist stack of technologies – life extension, medical & genetic modification, brain-computer & brain-machine interfaces, and virtual & augmented realities – offers great opportunities for human enhancement but poses profound risks for all aspects of humanity & civilization. It is critical to confront these dangers and temper the enthusiasm of transhumanism with diligent risk assessment and thorough scenario modeling of possible outcomes.
To wit, here are six scenarios that explore the possible dangers embedded within transhumanism. This is, of course, by no means an exhaustive list; it is simply intended to encourage further risk analysis. Most or all of these concerns have probably been addressed by others elsewhere, and this list is not intended as a criticism of those presently active in the transhumanist community.
1. Population growth from longevity & senescence studies
Life extension looks great from an individual or group perspective, but it’s a resource nightmare from a national and global angle. The current human population is about 6.8 billion, with most linear estimates projecting somewhere around 9 billion by 2050. If life extension is designed to be readily available to anyone & everyone, we can expect two outcomes: considerable population growth as longevity outpaces mortality, and a rise in global GDP and its commensurate resource consumption as working age extends towards the centenarian. People living longer means people will consume more over the course of their lifetimes. Consider the competition for resources & ecological carrying capacity we currently face in 2010, then roll that forward 40 years with a massive global population and members of the workforce who can potentially stay employed for 70 years…
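The arithmetic behind these projections can be sketched as simple compound growth. The growth rates below are illustrative assumptions chosen to bracket the scenario, not demographic forecasts:

```python
# Back-of-the-envelope population projection from a 2010 baseline.
# The growth rates are assumptions for illustration, not forecasts.

def project_population(start_pop_billions, annual_growth, years):
    """Compound growth: population * (1 + r) ** years."""
    return start_pop_billions * (1 + annual_growth) ** years

baseline_2010 = 6.8                 # ~6.8 billion people in 2010
growth_baseline = 0.007             # assumed ~0.7%/yr without life extension
growth_with_extension = 0.012       # assumed higher rate if mortality falls

print(round(project_population(baseline_2010, growth_baseline, 40), 1))        # ≈ 9.0
print(round(project_population(baseline_2010, growth_with_extension, 40), 1))  # ≈ 11.0
```

Even a modest bump in net growth, compounded over four decades, adds roughly two billion more people to the 2050 estimate, which is the resource concern in miniature.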
2. Inequity of technology distribution — the Transhuman Gap
The flip side of the resource consumption issue arises if we admit that transhuman technologies will not be evenly available to all; socioeconomic factors will gate who has access to technologies that extend human capabilities. In this context, population dynamics will not be appreciably influenced by human life extension, as only a small subset of the populace will have access to such enhancement. Indeed, genetic modification, brain-computer interfaces, advanced prosthetics, and access to virtual & augmented realities are all presently gated by economic barriers to entry that are not likely to diminish any time soon. AR and virtual worlds may become ubiquitous & cheap, but real human enhancement through interventionist technologies will mostly fall along class lines, giving rise to a wealthy tier of augmented & enhanced individuals. If only the wealthy can afford enhancement, the socioeconomic divide will be reinforced by the Transhuman Gap, further disenfranchising those already placed at a competitive disadvantage by their class circumstances. From such economic disparity, reinforced by the inevitable moralizing and judgments from both sides of the gap, social cohesion will be further challenged, and class distinctions will begin to take on a bio-mechanical & genetic aspect with the threat of technology-enabled superiority.
3. Techno-elitism, civil discord, and eugenics
Throughout history, elite classes have used their status & abilities to influence the control systems that govern those beneath them. Likewise, the underclass has looked at elites with both admiration & disdain, occasionally rising to join their ranks but, more often, rising up to knock them down. Civil strife is a common outcome of disparity, driven by inequities in access to resources, opportunities, and power. A class of techno-elite transhumans would pose a profound existential threat to the underclass, who might very well perceive themselves as being forever cut off from the democratic ideal that “all humans are created equal”, no longer able to compete in any capacity without transhuman enhancements. The anger and sense of victimization flowing from such an outlook would very quickly translate into moralizing against the crimes of human augmentation and stigmatizing of those who pursue such “un-natural” and “un-holy” enhancement. In turn, the techno-elite may feel inclined to judge the underclass as “unfit” or “un-evolved” – two distinctions that have historically led to great atrocities.
The slippery slope of this scenario posits the rise of a transhuman ruling class who, when challenged by the underclass, recede into their own sense of authority & enhanced intelligence and determine that the only appropriate course of action is to subjugate the masses and shepherd the rise of transhuman governance. If transhuman enhancement is truly advantageous yet remains available only to an elite class, then in all likelihood those elites will embrace the technology to their competitive advantage. Since it would be folly to assume that technological enhancement will remediate our basest evolutionary programming of survival of the fittest, the likelihood of enhanced predatory elites seizing global power is not so small. The darkest scenario might see transhuman governance requiring control & tracking implants in all newborns – perhaps a bit hyperbolic, but not inconceivable if the type of global predators that currently traverse societies gained access to advanced transhuman technologies.
4. Fractured reality
Virtual worlds and augmented reality offer many compelling experiences across the spectrum of entertainment, socialization, marketing & advertising, collaboration, and modern knowledge work. At their core, these technologies intermediate our experience of the world, giving third parties access to program our sensorium. Brain-computer interface technologies are working to extend this access to the core structures of our brain, kicking off a wave of neurotechnologies able to influence the mind-brain interface more specifically & accurately. The opt-in path through designer reality gives us the ability to modify the way we interface with the phenomenal world, electing to commit more of ourselves to virtual experiences & relationships, or to overlay our environments with the images of our choosing rather than confront the physical world solely on its own terms. While affinity groups will accrete around specific worlds & layers, the barriers between differing experiences of objective reality will multiply when the world I experience is markedly different from yours. Just as the Transhuman Gap threatens social cohesion through class, reality design threatens cohesion across all classes by erecting virtual constructions between adjacent-but-unrelated digital worlds. While we may feel a sense of agency in creating such personalized experiences, we do so in digital layers most likely owned by third parties or accessible through public APIs. We may inadvertently wall ourselves off from each other, but we’ll become even richer targets for profilers, influencers, and governors. The slippery slope in this scenario suggests that governments might enforce realities onto subjects, or that dangerous identity groups might create monstrous, all-encompassing layers as indoctrination tools & neuro-propaganda towards the engineering of social movements.
Considering how effectively television has been used to influence the masses with only basic access to eyes and ears, it’s not unlikely that deeper access into the transhuman sensorium will yield a greater ability to influence and manipulate.
5. Robots can be hacked, exploited to kill people, or used to spy on military secrets
There are a lot of conspiracy theories about robots taking over our jobs or killing off humanity. Indeed, the famous physicist Professor Stephen Hawking agreed with researchers who claim AI robots could leave humanity “utterly defenceless.” Now, researchers at IOActive, a cybersecurity company, have revealed [PDF] that the programs which “bring robots to life” carry critical vulnerabilities which can be used by threat actors for malicious purposes. The development of artificial intelligence (AI) robots is on the rise. Last year saw the debut of Ross – billed as the world’s first AI lawyer and built on IBM’s Watson – with plans to license it for domains like bankruptcy, restructuring, and creditors’ rights. The US government also wants to put robots in the military and weaponize them with artificial intelligence. Cyber criminals are keeping an eye on the situation as well, and exploiting existing vulnerabilities in the infrastructure of robots could turn the tables for all the wrong reasons. IOActive’s researchers tested models from a number of vendors, including SoftBank Robotics’ NAO and Pepper robots, UBTECH Robotics’ Alpha 1S and Alpha 2 robots, ROBOTIS’ OP2 and THORMANG3 robots, Universal Robots’ UR3, UR5, and UR10 robots, Rethink Robotics’ Baxter and Sawyer robots, and Asratec Corp robots using the affected V-Sido technology. Among their findings, the researchers discovered authentication issues, insecure communications, weak cryptography, privacy flaws, weak default configurations, and vulnerabilities in open-source robot frameworks and libraries.
Damages that can be caused by a hacked robot: The research further revealed that, after exploiting the above-mentioned vulnerabilities, attackers could use a hacked robot to spy on people, homes, and offices, and even cause physical damage. This makes a perfect scenario for government-backed spying groups to keep an eye on military and strategic sites if the target country is using robots in its military or other sensitive installations. In a nutshell, the research covers every aspect of life where robots could be used in the future and cause massive damage, including homes, military and law enforcement, healthcare, industrial infrastructure, and businesses. “Compromised robots could even hurt family members and pets with sudden, unexpected movements since hacked robots can bypass safety protections that limit movements,” says the research. “Hacked robots could start fires in a kitchen by tampering with electricity, or potentially poison family members and pets by mixing toxic substances in with food or drinks. Family members and pets could be in further peril if a hacked robot were able to grab and manipulate sharp objects,” it adds. Another dangerous aspect uncovered in this research is that thieves and burglars could hack Internet-connected home robots and direct them to open doors. Even if robots are not so integrated, they could still interact with voice assistants, such as Alexa or Siri, which integrate with home automation and alarm systems.
“A hacked, inoperable robot could be a lost investment to its owner, as tools are not yet readily available to ‘clean’ malware from a hacked robot,” it adds. “Once a home robot is hacked, it’s no longer the family’s robot; it’s essentially the attacker’s.” Previous cases of damage done by robots: Last year, a 5-foot-tall, 300-pound Knightscope security robot at the Stanford Shopping Center in California knocked down a 16-month-old boy and ran him over. Also last year, a humanoid-looking robot in Russia escaped after its engineers forgot to shut the gates, and blocked traffic. In 2015, a technician at a Germany-based Volkswagen production plant was killed by a robot, though in that case investigators blamed human error rather than the robot itself.
6. Enlightenment and Reason
Reason was the central value of the Enlightenment. Some historians see the beginning of the Enlightenment in the early seventeenth-century “Age of Reason,” associated with Descartes, Spinoza, Leibniz, Hobbes, Locke, and Berkeley. Historian Dorinda Outram defined the central claims of the Enlightenment around its appeal to reason: Enlightenment was a desire for human affairs to be guided by rationality rather than by faith, superstition, or revelation; a belief in the power of human reason to change society and liberate the individual from the restraints of custom or arbitrary authority; all backed up by a world view increasingly validated by science rather than by religion or tradition. (Outram, 1995: 3) When Kant wrote his essay (1784a) “Was ist Aufklärung?” or “What is Enlightenment?” for the Berlinische Monatsschrift, he summed up the slogan of the Enlightenment as “sapere aude” or “dare to know.” Though divided by epistemology and theology, these thinkers attempted to ground philosophy on incontestable propositions such as “cogito ergo sum.” This thoroughgoing undermining of all irrational a prioris led to a number of philosophical dead ends, however, immediately generating a score of post-rationalist movements. In the midst of the Enlightenment, Jean-Jacques Rousseau valorized the primitive and decried the harmful effects of hyper-rationalism on morality (Glendon, 1999). After all, as Hume underlined, the Enlightenment had severed any connection between the IS and the OUGHT. Although Kant and the utilitarians would attempt to re-ground ethics on what appeared to be empirical observations about human nature, they could never answer the next question: why should ethics be grounded on observations about human nature and not something else, like ancient religious dogmas? Eighteenth-century Romanticism was also a reaction to the overreach of reason, asserting the value of aesthetic and emotional experience.
From the eighteenth century through World War Two, movements on both the right and left turned against Enlightenment rationalism. On the Left, the Frankfurt School writers criticized the Enlightenment’s instrumental rationality for its complicity in authoritarianism (Adorno and Horkheimer, 2006; Marcuse, 1964; Saul, 1992; Gray, 1995). Various strains of feminism and anti-imperialism attacked the patriarchal and Eurocentric construction of Enlightenment reason (Harding, 1982). These post-rationalist movements rejected the autonomy and universality of reason because it came into conflict with other values of the Enlightenment, such as respect for the rights of persons and for cultural diversity. Meanwhile, theologians and philosophers of the Right blamed communism on the totalizing logic of the Enlightenment’s assertion of utopian reason. In the 20th century, Enlightenment rationalism also began to question its own first principles. One example is found in Wittgenstein’s turn from logical positivism. The logical positivists attempted to ban from philosophical discourse all terms and concepts without empirical referents. Ludwig Wittgenstein, although an early and influential advocate of this position, eventually changed his mind as he further investigated how language actually worked. Having turned empirical investigation on the process of reasoning itself, and attempting to purify language of all irrationality, Wittgenstein concluded that the goal was chimerical (Wittgenstein, 1953). Language is a series of word games in which meanings are created only in reference to other words and not to empirical facts. The positivist project of building a rational philosophy from uncontestable empirical observations is impossible. Foucault, Derrida, and the postmodernists also represent an implosion of Enlightenment reason. 
Although I believe postmodernist “criticism” to be mostly a dead end, the essential insight is true: all claims for Enlightenment reason are historically situated and biased by power and position. The Enlightenment is just one historical narrative among many, and there is no rational reason to choose the Enlightenment narrative over any other. Reason can only be argued for from metaphysical and ethical a prioris, even if those are only such basic assumptions as “it is good to be able to accomplish one’s intended goals.” Most tangibly, contemporary neuroscience, itself a product of Enlightenment reason, now recognizes that reason severed from emotion is impotent. In Damasio’s (1994) now-classic studies of patients with brain damage that severed the ties between emotion and decision-making, the victims were incapable of making decisions. The desire to stop deliberating and make a decision is not itself rational; it is a product of temperament. Reason was built to serve, but is incapable of generating its own commands.
6A. Transhumanists and Reason
Most transhumanists argue the Enlightenment case for Reason without acknowledging its self-undermining nature. For instance, Max More’s Extropian Principles codified “rational thinking” as one of its seven precepts (More, 1998): Like humanists, transhumanists favor reason, progress, and values centered on our well-being rather than on an external religious authority. (More, 1998) The Transhumanist FAQ defines transhumanism as the consistent application of reason: The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason…We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings… Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. (Humanity+, 2003) One of the central transhumanist blogs is Less Wrong, an outgrowth of the Overcoming Bias blog hosted by Nick Bostrom’s Future of Humanity Institute at Oxford, and dedicated to “the art of refining human rationality.” A frequent contributor there is Eliezer Yudkowsky, an autodidact writer on artificial intelligence and human cognitive biases who is also a co-founder of the Singularity Institute for Artificial Intelligence. Yudkowsky has said that one of his goals is to lead a “mass movement to train people to be black-belt rationalists.” The Less Wrong blog highlights Yudkowsky’s definitions of rationality and their importance as its raison d’être:
What Do We Mean By “Rationality”?
We mean:

1. Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed “truth” or “accuracy”, and we’re happy to call it that.

2. Instrumental rationality: achieving your values. Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning.”
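The “updating on evidence” in the first definition is conventionally formalized as Bayesian conditioning. A minimal sketch, with prior and likelihood numbers invented purely for illustration:

```python
# Minimal Bayesian update: revise confidence in a hypothesis H given
# evidence E. All probabilities below are invented for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start agnostic about some claim H, then observe two pieces of
# evidence that are more likely if H is true than if it is false.
belief = 0.5
belief = bayes_update(belief, 0.8, 0.3)   # first observation
belief = bayes_update(belief, 0.9, 0.4)   # second observation
print(round(belief, 3))                    # ≈ 0.857
```

Each update moves the “map” toward whichever hypothesis better predicted the observed “territory”, which is exactly the correspondence the definition above is after.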
But why should we want a map that corresponds to the territory? Where do the values that rationality helps us achieve come from? What if the valuation of instrumental rationality is in fact an obstacle to achieving the things we value, as the romantics claim, such as beauty, meaning, contentment, and awe? Yudkowsky goes so far as to acknowledge the problem, only to define it as something that is simply not to be discussed: … many of us will regard as controversial—at the very least—any construal of “rationality” that makes it non-normative. For example, if you say, “The rational belief is X, but the true belief is Y” then you are probably using the word “rational” in a way that means something other than what most of us have in mind… Similarly, if you find yourself saying, “The rational thing to do is X, but the right thing to do is Y” then you are almost certainly using one of the words “rational” or “right” in a way that a huge chunk of readers won’t agree with. Fortunately for Yudkowsky, he has been ceded authority by his readers to write off all philosophical debate about the relationship of IS and OUGHT. But this will leave his transhumanist rationality experts defenseless when debating those with different metaphysics, or when they face their own dark nights of the soul. One of the central philosophical debates between bioconservatives and transhumanists (and “bioliberals” more generally) over the last two decades has been over the legitimacy of emotivist arguments such as Leon Kass’ (1997) “wisdom of repugnance” (Roache and Clarke, 2009). In 2003, the bioconservative Yuval Levin wrote in “The Paradox of Conservative Bioethics” of the tragic dilemma faced by conservatives trying to devise rational arguments in defense of irrational taboos: once liberal democracy forces the conservative to abandon appeals to tradition or intuition, democratic debate naturalizes the new.
The very fact that everything must be laid out in the open in the democratic age is destructive of the reverence that gives moral intuition its authority. A deep moral taboo cannot simply become another option among others, which argues its case in the market place. Entering the market and laying out its wares takes away from its venerated stature, and its stature is the key to its authority. By the very fact that it becomes open to dispute—its pros and cons tallied up and counted—the taboo slowly ceases to exist… A conservative bioethics…is forced to proceed by pulling up its own roots, and to begin by violating some of the very principles it seeks to defend. (Levin, 2003) Transhumanists and the Enlightenment face the opposite dilemma: how to advocate for rationality in a way that avoids its potential for self-erosion. Just as the bioconservatives cannot validate their taboos and ethical a prioris in the public square, there is likewise no rational reason why society should reject taboos and superstition in favor of a transhuman future; value judgments in favor of tradition, faith, and taboo, or in favor of progress, reason, and liberty both stem from pre-rational premises. Transhumanists need to acknowledge their own historical situatedness and defend their normative and epistemological first principles as existential choices instead of empirical absolutes somehow derived from reason. One example of a transhumanist acknowledging the pre-rational roots of transhumanist values is anti-aging activist and IEET Fellow Aubrey de Grey’s 2008 essay “Reasons and methods for promoting our duty to extend healthy life indefinitely.” De Grey directly addresses Leon Kass’ emotivist argument and turns it on its head. What, de Grey asks, is more repugnant than sickness, aging, and death? 
Those arguing the anti-aging cause, de Grey concludes, should start from these shared intuitions and prejudices instead of starting from reasoned arguments that presume the “objectivity of morality” and the “unreliability of gut feelings.” When I first heard de Grey’s argument, I demurred, thinking he had given away too much to the emotivists. But that was simply my own fear of letting go of my superior rational ethical viewpoint. When I imagine the project of Reason, I think of building a house in mid-air. I look over at the other houses floating in mid-air, the pre-Enlightenment houses, and they are ramshackle huts of mud daub and random flotsam, tied up with string. To get from one room to another in our neighbors’ houses, you have to crawl to the basement and then up a laundry chute. They sit in darkened rooms with few windows, and none that show that the house is not in fact rooted to the earth. With the pure, lean precision of Reason we have built our houses of Kantianism, utilitarianism, liberal democracy, and other clean architectural marvels, Frank Lloyd Wright structures of thought with lots of windows, and even glass floors. But most of us steadfastly ignore the fact that, just like our neighbors, we are floating in mid-air. Acknowledging that we are all in mid-air and don’t know how we got aloft in the first place is damned scary, and we have repeatedly seen people defect from our Enlightenment houses with glass floors to our neighbors’ houses of faith and dogma where they are not forced to look down. We need to learn the courage to acknowledge that we got this thing in the air through an act of will—that Reason is a good tool but that our values and moral codes are not grounded in Reason—or else we will lose many more people to the forces of irrationality in the future.
A central theme of the Enlightenment was religious tolerance and skepticism about superstition and Biblical literalism. However, most of the Enlightenment philosophers of the 17th century through the 19th century were theists of some sort. In general they were attempting to reconcile belief in God with rational skepticism and naturalism. One common theological stance of the Enlightenment thinkers was Deism. Deists rejected blind faith and organized religion, and advocated the discovery of religious truth through reason and direct empirical observation. Deists believed divine intervention in human affairs stopped with the creation of the world. They rejected miracles, the inerrancy of scripture, and doctrines such as the triune nature of the Christian God (trinitarianism). Deists like Thomas Jefferson, Thomas Paine (1794), and Benjamin Franklin helped establish the separation of church and state in the new United States, arguing that doctrinal differences were irrelevant to good citizenship. Deism declined in the nineteenth century, gradually replaced by atheist materialism. But the engagement with Enlightenment values continued in liberal strains of Christianity such as Unitarianism and Universalism. Many of these attempts to root theology in Enlightenment rationalism fall flat on modern ears and consequently are seen today as transitional to atheism, or even as insincere covers for an underlying atheism that could not yet speak its name. Certainly many orthodox believers also accused the Enlightenment-influenced theologians of being closet atheists. I grew up in the Unitarian-Universalist church, to which I still belong. Its attempt to run spirituality through the rationalist Enlightenment sieve removed God seventy years ago, leaving mostly vague affirmations. As a consequence UU-ism has grown only slowly, and is often a way station for families moving from traditional religions to atheism. 
Sociologically, the decline of liberal churches and the rise of fundamentalism has seemed to prove that liberal religion is incoherent, that one either needs to check one’s brain at the door or become an atheist. I believe, however, that we need to take more seriously the effort of Enlightenment theologians to argue for a naturalist theology. Although their previous efforts to affirm some form of deity through the rational, scientific investigation of nature may have failed, naturalistic theology may finally have found solid Enlightenment footing in modern transhumanist speculations about the transcendent powers of superintelligent beings.
Transhumanists and Religion
Self-identified transhumanists today are mostly secular and atheist. In a survey conducted in 2007 of members of the World Transhumanist Association (Humanity+, 2008), 93% answered ‘yes’ to the statement “Do you expect human progress to result from human accomplishment rather than divine intervention, grace, or redemption?” Ninety percent denied “clear divinely-set limits on what humans should do,” and ninety percent affirmed that their “concept of ‘the meaning of life’” derived from human responsibility and opportunity rather than from divine revelation. When those transhumanists were asked for religious affiliations, two-thirds identified as atheist, agnostic, secular humanist, or non-theist. On the other hand, a quarter self-identified as religious of some sort, including Christian (8%), spiritual (5%), Buddhist (4%), and religious humanist (2%), as well as pagans, Hindus, Jews, Muslims, and members of other faiths. Echoing Goldberg’s 2009 thesis that transhumanism is itself a religious point of view, about 1% of transhumanists listed transhumanism as their religion. So while transhumanism reflects the atheist trajectory of the Enlightenment for most of its adherents, for up to fifteen percent or so some concept of God is compatible with their transhumanism. (For a fuller parsing of the religious views of transhumanists, please see my 2007 essay on the compatibility of religion and transhumanism.)
Intriguingly, 1% of respondents to that survey offered “pantheist” or “scientific pantheist” as either a religious or secular philosophy. Pantheism appears to have become popular because of the belief among some transhumanists in panpsychism, the idea that all matter in the universe partakes of consciousness (Goertzel, 2004; Rucker, 2007). This conjecture emerges out of the ideas that consciousness is an emergent property of matter, and that matter is a form of computation, articulated for instance by Stephen Wolfram. Even if all matter in the universe is not currently suffused with consciousness, the transhumanist belief in the inevitable progress of intelligence and the ability of science to ultimately control all matter generates its own form of teleological theology similar to Pierre Teilhard de Chardin’s notion of humanity’s evolution into an Omega Point (de Chardin, 1955, 1959; Steinhart, 2008). One early example of such transhumanist theological teleology or “cosmotheism” was Frank Tipler’s (1995) argument for a resurrection of the dead at the universe’s end. Tipler assumed the universe would eventually stop expanding and end in a Big Crunch, allowing subjectively eternal supercomputation within the accretion ring of the Crunching black hole. One of the things that could be accomplished at that point would be the “resurrection” of every intelligent creature, or even every living thing, that had ever existed. In the last decade it has become clear that the universe is likely to continue expanding and dissipate in heat death. Nonetheless, smaller versions of these simulated heavens could be created in the matter around the black holes at the centers of galaxies.
A more minimalist version of cosmotheology is found in Nick Bostrom’s (2003) “simulation hypothesis.” Bostrom proposes that if the universe generates vast superintelligences with billions of years to amuse themselves, one of their activities might be the creation of simulated civilizations. Given the vast numbers of potential simulators, their vast computing resources, and the vast numbers of years to entertain themselves, and therefore the vast number of simulations they will likely run, the likelihood is that there are a vastly larger number of simulations of lived realities than actual lived realities. Therefore we are probably living in a simulation.
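The core of Bostrom’s reasoning is a simple observer-counting fraction. The notation below is an illustrative simplification, not the paper’s own:

```latex
% Simplified restatement of the simulation argument's arithmetic.
% N_sim and N_real are illustrative labels:
%   N_sim  = number of simulated observer-histories ever run
%   N_real = number of non-simulated observer-histories
\[
  P(\text{we are simulated}) \;=\;
  \frac{N_{\mathrm{sim}}}{N_{\mathrm{sim}} + N_{\mathrm{real}}}
\]
% If superintelligences run vast numbers of simulations, then
% N_sim >> N_real and this credence approaches 1:
\[
  \frac{N_{\mathrm{sim}}}{N_{\mathrm{sim}} + N_{\mathrm{real}}}
  \longrightarrow 1
  \quad \text{as} \quad N_{\mathrm{sim}} / N_{\mathrm{real}} \to \infty .
\]
```

The argument’s force thus rests entirely on the premise that simulated observer-histories would vastly outnumber real ones.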
Many people have pointed out the similarity between this skeptical view of sense data and earlier theological views. For instance, René Descartes begins his meditations with three reasons to doubt our senses: (a) that we could be dreaming; (b) that we could be living in a deceptive reality created by God; and (c) that we could be living in a deceptive reality created by an evil demon. Similarly, Bishop Berkeley prefigured the quantum observation effect by proposing that reality doesn’t exist unless it is perceived by a mind, and that the reason our reality persists around us when we aren’t looking is that it is being perceived within the mind of God. David Hume grappled with these skeptical challenges to epistemology and concluded there was no way to prove we were actually in reality, so we might as well ignore the question. (See the excellent Wikipedia page on Simulism for more discussion of these parallels.) Bostrom disputes the similarity between his argument and these prior theological and epistemological arguments: it has no direct connection with religious conceptions of a literally omniscient and omnipotent deity, and the simulation hypothesis implies neither the existence of such a deity nor its non-existence. Still, the simulators who created us in this naturalistic theology would be importantly different from the traditional Creator of Christianity. Our Simulator(s) would be naturalistic entities, subject to the laws of nature at their own level of reality; they would not be strictly omniscient or omnipotent, and they might well be finite. On the other hand, they would be able to monitor everything that happens here, and they would be able to intervene in ways that conflict with the simulated default laws of nature. Moreover, they would presumably be superintelligent (in order to be able to create such a simulation in the first place). An afterlife in a different simulation or at a different level of reality after death-in-the-simulation would be a real possibility. It is even conceivable that the simulators might reward or punish their simulated creatures based on how they behave, perhaps according to familiar moral or religious norms (a possibility that gains a little bit of credibility from the possibility that the simulators might be the descendants of earlier humans who recognized these norms)…. So the simulation hypothesis, working from naturalistic assumptions to naturalistic conclusions, ends up as an argument for a kind of naturalistic God that may perform miracles, reward and punish behavior, and grant an afterlife or reincarnation.
The Order of Cosmic Engineers
Another version of transhumanist cosmotheism is found in the “Order of Cosmic Engineers” (OCE). The OCE describes itself as a transhumanist spiritual movement that foresees a future in which intelligence engineers the universe and becomes godlike. They distinguish between belief in a “supernatural” god, and belief in inevitable natural superintelligent, superpowerful gods...(in the) very far future one or more natural entities—i.e. entities existing within our present universe—are highly likely to come into being—plausibly resulting from the agency of our and other species—which will to all intents and purposes be very much akin to “god” conceptions held by theist religions. We refer to conceptions of personal, omnipotent, omniscient and omnipresent super-beings, “deities” or “gods”. (OCE, 2009) These natural gods might in fact already exist, produced by prior civilizations, or might be able to reach back from our future to influence the past. Religious beliefs in gods, the OCE contends, might simply be a primitive apperception of these superbeings. The OCE, following Gardner (2007), Lanza and Berman (2009), also suggest that these superbeings might have the power to shape our universe, or create new universes specifically designed for life. They may then have dissolved themselves or diffused themselves into our universe at the moment of creation. The perfusing of intelligence into the universe will therefore lead to the re-connection with or (re-)creation of these godlike beings. The OCE views as its ultimate, very long-term aspiration—its cosmic-scale mission if you like—the permeating of this universe—by means of cosmic engineering interventions such as so-called ‘computronium’—with benign intelligence. We see the perfusing of our universe with benign intelligence as a step towards the (re-)constitution or (re-)integration of (possibly hive-like) “societies of mind” or “global brains”. 
These in turn would ultimately evolve into—a possibly new and ever so slightly improved version of—these ‘original’ god-like super-beings.
Is Naturalistic Trans-Spirituality Compatible With New Atheism?
The IEET, like the transhumanist movement, tilts towards atheism. IEET Fellow Russell Blackford and IEET Managing Director Mike Treder argue passionately that advocating for atheism is a central responsibility for partisans of Enlightenment values today. Nonetheless we also embody some of these contradictory tendencies. Our Chair Nick Bostrom articulated the simulation hypothesis. IEET Fellow and Humanity+ Chair Ben Goertzel is a self-identified panpsychist. IEET Trustee Martine Rothblatt and IEET Board member Giulio Prisco are stalwarts of the Order of Cosmic Engineers. IEET Board members George Dvorsky, Mike LaTorra, and I are atheist Buddhists, pursuing our “Cyborg Buddha” project of trying to integrate neurotechnologies with a spirituality grounded in naturalism, an effort that we share with New Atheist Sam Harris. Do any of these positions represent a backsliding towards irrationalism, a compromising of the core Enlightenment commitment to scientific naturalism? In principle, no. Naturalist predicates and arguments, coupled with an openness to transhumanist conclusions, are leading to new scientific theologies and spiritualities. Since this tension between the atheist, anti-spiritualist wing and the natural theology wing is already three hundred years old, however, it seems like it will probably not be resolved any time soon.
The Enlightenment rationale for liberalism, most powerfully articulated in Mill’s On Liberty, was that if individuals are given liberty they will generally know how to pursue their interests and potentials better than will anyone else. So, society generally will become richer and more intelligent if individuals are free to choose their own life ends rather than if they are forced towards betterment by the powers that be. In order to ensure that all interests and views of the good are equally weighed in the marketplace of ideas and expressed in collective decision-making, society should guarantee free debate and equal legal and political empowerment. The most radical expressions of these ideals were liberal and social democracy, which are often assumed to be the consensual political ideals of the Enlightenment. In fact, Enlightenment philosophers were intensely conflicted about the virtues of powerful monarchies and technocratic elites versus popular democracy. Some believed an absolute state was the best form of governance. Thomas Hobbes argued that political absolutism was necessary to prevent the war of “all against all.” Voltaire said that he “would rather obey one lion, than 200 rats of [his own] species.” Other Enlightenment thinkers argued against absolutism and the divine right of kings, but held out for the desirability of “enlightened despots” who had political legitimacy because they were pursuing their people’s interests. Free peoples, these thinkers held, often do not choose the ends that are in their best interests, whether as individuals or as democracies. As Spinoza said, “the masses can no more be freed from their superstition than from their fears…they are not guided by reason” (Spinoza, 1670: 56). The benevolent rationale for authoritarianism has always been that rulers and their advisors understand the needs of the people better than the people do themselves. Before the Enlightenment, the alleged source of this superior understanding was the rulers’ wisdom and spiritual guidance. 
After the Enlightenment, the idea that some people were more or less advanced on the path of reason and progress than others lent itself to justifications for enlightened monarchy, colonialism, and scientific dictatorships. Most Enlightenment philosophers placed their hopes for progress in benign governance by modernizing monarchs and reformed aristocrats, certainly not in radicalized peasants. If society needs to be rationally reorganized, it is far more straightforward to make existing elites and monarchs the agents of Reason than to try to convert the masses and establish Reason from the bottom up; once society is rationally reorganized from the top, the masses will find their way to Reason that much more easily—or so the argument goes. A number of monarchs, such as Frederick II of Prussia, Joseph II of Austria, and Peter the Great and Catherine the Great of Russia, were directly influenced by and friendly towards the Enlightenment. These enlightened absolutists believed that the monarchical state could embody and advance the new science and Reason. They promoted public education, social reform, and the modernization of laws, economies, and militaries (Outram, 2005). Frederick II of Prussia promoted religious tolerance and abolished serfdom. Joseph II centralized the Austrian state, restricted the power of the Catholic Church, and abolished serfdom. The American Revolution was a step forward from enlightened despotism in Enlightenment political thought. But the founders of the American republic were also almost all suspicious of “mobocracy,” and the American state is carefully constructed to cripple direct democracy. The separation of judicial and executive power from legislative power, following the ideas of the Baron de Montesquieu, ensured that the wisdom of landed male elites would temper the passions of the mob, as they continue to do today. 
Even within the legislative branch, the Senate was a landowners’ body, originally appointed by state legislatures, to check the potential of radical populism from the House. For two hundred years, Counter-Enlightenment thinkers have argued that the French Revolution’s descent into Terror and the Marxist-Leninist totalitarianism of the 20th century were each a natural consequence of the Enlightenment’s attempt to apply rationality to governance, ignoring the fact that the liberal tradition is as much a product of the Enlightenment. In its own violent way, however, the French revolutionary government represented a mix of both popular democratic and elite authoritarian reforms. The expansion of democratic rights under the Assembly was combined with political executions directed by elites and unpopular top-down reforms. The legacy of enlightened despotism is actually found far less ambiguously in the reigns of modernizing dictators like Napoleon Bonaparte and his many successors, down to figures like Vladimir Putin today. Bonaparte established schools and scholarships to attend them. He promoted meritocracy and thoroughly rationalized French law in a way that institutionalized Enlightenment values of universalism and egalitarianism. He promoted religious tolerance and ended the hostility between Church and state by putting the clergy on the state payroll. The conflicts within the Enlightenment tradition between absolutism and liberalism are not only found on the Left but also on the Right among latter-day Bonapartists, right-wing modernizing dictators. Enlightenment arguments for benevolent modernizing dictatorships also were used to rationalize French and British colonialism and the expansion of both the Soviet Union and Pax Americana. 
Bentham, Condorcet, Diderot, Kant, and Adam Smith were all early critics of imperialism (Muthu, 2003), but even their attacks on Western arrogance and exploitation were muted by their support for ethical universalism, which hoped to see everyone eventually benefit from the Enlightenment. Since decolonization and the rise of Vietnam-era anti-imperialism, arguments for beneficial, enlightening colonialism sound like thin excuses for exploitation, unless you are a fan of the U.S. occupation of Iraq. But respect for “noble savages” and their national self-determination, and the idea that primitive peoples could benefit from a period of tutelage by the enlightened nations, are woven together throughout the history of Enlightenment thought.
Transhumanist Liberalism vs. Transhumanist Technocracy
Transhumanists are overwhelmingly and staunchly civil libertarian, defenders of juridical equality and individual rights. Most also believe democratic government to be superior to any of the extant alternatives. But many are also suspicious of the capacity of ordinary people to make decisions that are truly in their own interests, individually or as polities. Some transhumanists explicitly argue that rather than try to win popular support for transhumanist values, far more can be accomplished by winning over powerful elites. The 2005 and 2007 surveys of the members of the World Transhumanist Association (WTA, 2005; WTA, 2007) asked, “Although we may devise better political systems in the future, do you believe that multi-party democracies with civil liberties for individuals are the best of the existing political orders?” A third of the respondents were unwilling to affirm the superiority of liberal democracy among existing political systems. Transhumanist Max More, for instance, looks toward a post-democratic minarchy: Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom. Surely, as we strive to transcend the biological limitations of human nature, we can also improve upon monkey politics? (More, 2004) Billionaire transhumanist Peter Thiel (2008) hopes that anarchist utopias at sea, in outer space, or in cyberspace can escape the authoritarian clutches of the democracies: ... the great task for libertarians is to find an escape from politics in all its forms — from the totalitarian and fundamentalist catastrophes to the unthinking demos that guides so-called “social democracy.” The critical question then becomes one of means, of how to escape not via politics but beyond it. 
Because there are no truly free places left in our world, I suspect that the mode for escape must involve some sort of new and hitherto untried process that leads us to some undiscovered country; and for this reason I have focused my efforts on new technologies that may create a new space for freedom. (Thiel, 2008) Libertarian transhumanists like Thiel and More are consistent critics of all forms of governance and have never advocated enlightened despotism. However, the belief that mob democracy is hopeless and that the only avenue for progress lies with elites and unbridled technological change does support anti-democratic authoritarian views among some transhumanists. One of the transhumanist forebears it is important to keep in mind when considering transhumanist ambivalence about liberal democracy is H.G. Wells. Wells was a Fabian socialist, an advocate for the evolution of liberal democracies toward democratic socialism. But he also believed that this evolutionary process would be accelerated by global war and catastrophe. In his classic 1933 The Shape of Things to Come, a technocratic world government is established in the wake of a devastating global war. The new “Dictatorship of the Air” rules benevolently for a hundred years, eradicating religion and promoting science, until it is overthrown and the state withers away. For Wells, as for many transhumanists, the urgent catastrophic risks humanity faces trump any preference for liberal democracy. For instance, in considering how best to awaken and prepare society for global catastrophic risks, such as the emergence of machine minds, Eliezer Yudkowsky considers attempts to convince people of the risks, and dismisses them: Majoritarian strategies take substantial time and enormous effort… (it is) vastly easier to obtain a hundred million dollars of funding than to push through a global political change. 
(Yudkowsky, 2008) In particular, it is supposed, a hundred million dollars from Peter Thiel put toward the project of making a benevolent super-AI will do far more to improve the world than any political movement, since the first super-AI will, in Yudkowsky’s view, be the last form of government humans will ever know. AI is either the solution to all of humanity’s problems, or its final solution. Nick Bostrom also has argued the need for a global “singleton” to mitigate “existential risks” (Bostrom, 2001), though he is far more open-minded about the possible nature of the global dictator than is Yudkowsky. Global government of some kind, Bostrom argues, is necessary in order to mitigate threats such as nuclear war and bioterrorism, but also in order to avoid humanity’s unthinking evolution into something we might regret. For instance, international competition might encourage the engineering of workers for some form of hyper-capitalism, while a global government of some kind could impose restrictions on this kind of competition and guide global civilization past these shoals. A singleton does not need to be a monolith. It can contain within itself a highly diverse ecology of independent groups and individuals. A singleton could for example be a democratic world government or a friendly superintelligence. (Bostrom, 2001) In his subsequent “What is a Singleton?” (Bostrom, 2006), Bostrom defines the singleton as: A world order in which there is a single decision-making agency at the highest level. Among its powers would be (1) the ability to prevent any threats (internal or external) to its own existence and supremacy, and (2) the ability to exert effective control over major features of its domain (including taxation and territorial allocation). He again specifies that a singleton could be a democratic world republic, a dictatorship, or a superpowerful intelligent machine or posthuman. 
Such a global agency would be able to suppress wars and arms races, protect our common planetary and solar system resources from wasteful competition, relieve inequality, and establish a more rational economy. Technological innovations such as “improved surveillance, mind-control technologies, communication technologies, and artificial intelligence,” as well as the proliferation of apocalyptic technologies that require global invasive suppression, would all increase the likelihood of the emergence of a singleton. Bostrom leaves open the possibility that the singleton could evolve from liberal democratic self-governance and be accountable to human beings in an equal and transparent way. But the prospect of a radical improvement in the cognitive powers and moral characters of posthumans and machine minds has led transhumanists like Yudkowsky to advocate for humanity to abdicate self-governance to more enlightened successors. Yudkowsky has focused a lot of his writing on the problem of human cognitive biases. Like other believers in a coming artificial intelligence “Singularity,” he believes that human cognitive limitations will be quickly superseded by the super-rationality of a recursively self-improving artificial intelligence unconstrained by biology and evolutionary drives. Human brains, he argues, will never have the same capacity for self-improvement and perfect rationalization since machine minds will have “total read/write access to their own state,” the ability to “absorb new hardware,” “understandable code,” “modular design,” and a “clean internal environment” (Yudkowsky, 2008). 
In fact, argues Yudkowsky, human cognition is so irredeemably constrained by bias, and our motivations so driven by aggression and self-interest, that we should give up on the project of self-governance through rational debate and do our best to hasten the day when we can turn our affairs over to a super-rational artificial intelligence programmed to act in our best interests. In his 2004 essay “Coherent Extrapolated Volition” (CEV), Yudkowsky argues that a super-AI would be able to intuit the desires and needs of all human beings and make the decisions necessary to satisfy them. In this, Yudkowsky and his followers (unconsciously) echo Marxist-Leninist theories of scientific socialism and the perfect reflection of the general will through the Party.
As described by Kaj Sotala in a refutation of fourteen objections to Yudkowsky’s theory of “friendly AI”: In the CEV proposal, an AI will be built ... to extrapolate what the ultimate desires of all the humans in the world would be if those humans knew everything a superintelligent being could potentially know; could think faster and smarter; were more like they wanted to be (more altruistic, more hard-working, whatever your ideal self is); would have lived with other humans for a longer time; had mainly those parts of themselves taken into account that they wanted to be taken into account. The ultimate desire—the volition—of everyone is extrapolated, with the AI then beginning to direct humanity towards a future where everyone’s volitions are fulfilled in the best manner possible… Humanity is not instantly “upgraded” to the ideal state, but instead gradually directed towards it. (Sotala, 2007) The masses labor under “false consciousness,” unaware of their true interests which can only be revealed through submitting to the tutelage of the scientific dictatorship. At the end of Yudkowsky’s original 2004 essay, he asks, “What if someone disagrees with the CEV?” to which he answers: Imagine the silliness of arguing with your own extrapolated volition. It’s not only silly, it’s dangerous and harmful; you’re setting yourself in opposition to the place you would have otherwise gone… (Yudkowsky, 2004) Any objection to rule by this godlike AI is based on anthropocentric projections of the fallibility of human despotism. As Michael Anissimov explains it, enlightened AI despotism will be completely trustworthy. In fact, he suggests, only godlike AI, built from pure code and free of evolved Darwinian behaviors but somehow programmed for human friendliness, can be trusted as a global totalitarian singleton:
The fear of patriarchy objection stems largely from history, wherein all of the relevant actors were members of our unique species, for which power is proven to corrupt. Power corrupts humans for evolutionary reasons—if one is on top of the heap, one had better take advantage of the opportunity to reward one’s allies and punish one’s enemies. This is pure evolutionary logic and need not be consciously calculated. AIs, which can be constructed entirely without selfish motivations, can be immune to these tendencies. Insofar as significant power asymmetries in general bother people, this seems hard to avoid in the long term—technological development will lead to a diversity of possible beings, and with this diversity will inevitably come a diversity in levels of capability and intelligence. (Anissimov, 2007) Dictatorship by friendly AI is by no means the only form of incipient illiberal and anti-democratic theory possible or extant among transhumanists. As the transhumanist movement grows, there will undoubtedly be a growing conflict between transhumanist defenders of democratic self-governance and advocates of enlightened technocracy. Russian transhumanists, for instance, include both radical liberals as well as supporters of Putin’s authoritarianism. Just as Chinese advocates for market liberalization are divided between political liberals and defenders of the wise stewardship of the Chinese Communist Party, we are likely to see Chinese enthusiasts for human enhancement divided over the virtues of state-mandated eugenics. In response, we defenders of liberal democracy need to marshal our arguments for the virtuous circle of reinforcement between human technological enablement and self-governance. In Citizen Cyborg, for instance, I argue that cognitive liberty, bodily autonomy, and reproductive freedom are core Enlightenment and transhumanist values, not to be lightly trumped by corporate power and state projects for betterment. 
I argue that cognitive enhancement, assistive artificial intelligence, and electronic communication all would strengthen the ability of the average citizen to know and pursue their own interests and would make liberal democracy increasingly robust. I also argue against a pessimistic view that transhumanists are a permanent minority, and make the case that political majorities can be won for a technoprogressive platform. A faith in the possibility of progress through liberal democracy is certainly difficult to sustain in the wake of the failure of a Democratic super-majority to pass health care reform in the United States, the collapse of meaningful climate change negotiations, the hand-wringing impotence of international institutions to intervene against genocide and the proliferation of weapons of mass destruction, and the persistence of myriad forms of popular ignorance and superstition. If I could convince myself that turning our fate over to the enlightened despotism of HAL or Khan Noonien Singh was the only way forward I also would be tempted. I am certainly looking forward to new forms of governance that satisfy my Enlightenment values better than do the existing forms of imperfect liberal democracy. For now, however, I think transhumanists need to focus on achieving our better world through liberal democracy.
Article 1. Men are born and remain free and equal in rights. Social distinctions may be founded only upon the general good.
Declaration of the Rights of Man and of the Citizen, approved by the National Assembly of France, August 26, 1789

The Enlightenment argued for moral universalism, the view that ethics and law should apply equally to all men. Enlightenment thinkers were not the first or only philosophers to propose moral universalism. Arguments for an obligation to respect the dignity of all people regardless of status can be found in Eastern and Western religious ethics and Greek philosophy. Even within the Enlightenment, there were several varieties of argument for the legitimacy of universal, equal rights. Locke argued for universal rights on the grounds that in the human state of nature, as created by God before civilization, we were given possession of our bodies. Therefore all humans possess these natural rights equally, and interference with individual rights violates a natural and divine law. This was the logic of Thomas Jefferson’s statement in the Declaration of Independence: We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights… This assertion of moral realism was never consistent, however, with the Enlightenment’s empiricism. Where do we find evidence of God’s imbuing of all humans with rights? How can we tell which rights they were imbued with? How can we adjudicate between religious just-so stories that grant or deny rights to peasants and women? So, other Enlightenment thinkers made social contractarian arguments for moral universalism, arguments more consistent with Enlightenment empiricism. The utilitarians, for instance, argued for moral universalism on the grounds that if we accept that all creatures want less suffering, the goal of morality should be the reduction of all creatures’ suffering, regardless of race, gender, or even species. 
Almost immediately, the declaration of universal rights generated demands to end slavery and the subordination of women, and the universalist meme spread and unfolded through the politics of the next two hundred years from the Haitian slave revolt through to the demands of sexual minorities and the disabled today. The 1948 adoption of the United Nations’ Universal Declaration of Human Rights was a milestone in the institutionalization of Enlightenment universalism. Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world… Whereas the peoples of the United Nations have in the Charter reaffirmed their faith in fundamental human rights, in the dignity and worth of the human person and in the equal rights of men and women and have determined to promote social progress and better standards of life in larger freedom… Now, Therefore the General Assembly proclaims THIS UNIVERSAL DECLARATION OF HUMAN RIGHTS as a common standard of achievement for all peoples and all nations… In response, the Counter-Enlightenment has always attacked moral universalism on two flanks. On the one hand, religious conservatives and moral realists, from the Vatican to neo-Confucianism, have asserted that there were “real” distinctions between the rights and duties of men and women, propertied and propertyless, European and non-European, etc., that the universal rights of Enlightenment ignored. Other conservative thinkers however, like Edmund Burke, argued that if we acknowledge the existence of any rights it is because they are rooted in particular cultures and traditions. Therefore rights cannot be universal, and it makes no sense to defend the right to free speech of the Chinese or the African. 
In fact, the Enlightenment actually threatened the local, embedded rights that people do possess because its universalism ignored the importance of local culture, seeking to overturn national traditions in favor of global cosmopolitanism. After World War Two, postmodernist intellectuals adopted their own critique of moral universalism and defense of local embeddedness. Enlightenment universalism, they claimed, has been used as a rationale for imperialism and the suppression of local cultures and laws. In situations where local cultures violated the rights of women or ethnic minorities, or suppressed free speech, moral universalism was in conflict with Enlightenment values of respect for self-determination and cultural diversity. Moral relativism is thus both an external, Counter-Enlightenment strain of thought, and an internal and consistent product of one line of Enlightenment reasoning. A second problem internal to the Enlightenment tradition of rights was determining what characteristics are necessary for a person to be acknowledged as a possessor of rights. There were debates internal to the Enlightenment tradition over the rights of slaves, women, children, non-citizens, and animals. If rights were a recognition of a universal moral status derived from specific capacities for thought and feeling, then which groups of creatures possessed these faculties and which didn’t? Children, for instance, progressed from a point at which the only rights they could reasonably be argued to possess were the rights to life and to not suffer unnecessarily, to adulthood where they came into full possession of their rights. Denying the right to vote or make contracts to children was therefore consistent with moral universalism. On the same grounds, some argued that slaves, women, and animals lacked these capacities and thus their attendant rights, while others argued that they possessed them.
Transhumanist Universalism vs. Transhumanist Relativism
Transhumanists are likewise caught between ethical universalism and relativism, and conflicted about exactly whom the circle of moral universalism and equal legal citizenship should extend to. Most transhumanists are certainly universalist in their assertion of the rights of all people to control their own bodies and minds, and to take advantage of technological enablement. But most transhumanists reject the idea of some objective universal morality or natural law foundation for human rights. Most are also wary of transnational institutions that might come to suppress the existing hard-won rights enjoyed in Western countries. As a consequence, many transhumanists are unwilling to endorse the enforcement of universal human rights standards by transnational institutions. Most transhumanists also hesitate at the idea that humanity 1.0 should attempt to constrain the moral choices of our descendants. If our descendants evolve morally and intellectually, then our attempt to influence them would be as foolish as our Paleolithic ancestors attempting to ensure we did not deviate from their values. Arguments against relativism often begin by asking whether the relativist refuses to pass judgment even on genocide, and we have to ask the same of relativist transhumanists. What if posthumans decide to enslave unenhanced humans, treating us like we treat children or animals? Isn’t it morally obligatory that we do what we can to ensure future legal equality and racial harmony between humans and posthumans? Some transhumanists argue that it is possible to defend a transhuman version of moral universalism that enforces equal basic rights for both humans and posthumans. Allen Buchanan (2009), for example, argues that moral status is a threshold that won’t move as humans enhance. 
Political rights, however, aren’t directly tied to moral status, and it is possible to imagine a transhuman society that accepts the moral equality of humans and posthumans but accords them different political rights. For instance, in Citizen Cyborg I argue that just as we currently formally acknowledge the different capacities and rights of adults without violating universalism, we could protect the basic equality of the enhanced and unenhanced while carefully acknowledging their differences. To drive cars, fly planes, possess weapons, and hold certain occupations, we oblige people to take specific courses of education, testing, and licensure, and then subject them to special rules and obligations. It is possible to imagine that some cognitive and physical powers would be so dangerous that we would similarly require licensure for their possession. Just as people who own monster trucks and automatic weapons have not established themselves as a dictatorial aristocracy in democratic societies, careful regulation of enhancements could diminish threats to legal and political equality. Other transhumanists believe, however, that posthumans’ vast superiority in power, cognition, and moral progress will make pet-like subordination the best of outcomes for humans. Some transhumanists hope that human coexistence with our posthuman descendants will be a moot issue since posthumans will want to leave Earth altogether. In a 2005 survey of transhumanists, only a plurality (46%) agreed that “humans and post humans will be able to coexist in one society and polity,” while 41% were unsure, and 12% believed they could not coexist. 
Allen Buchanan (2009) rejects George Annas’ (2005) assertion that post-humans will inevitably carry out genocide against humans, but acknowledges posthuman authoritarianism as a “practical worry.” Instead of a carefully regulated acknowledgment of different rights and obligations (like the right to drive) which preserve political equality, Buchanan sees a serious risk of posthumans insisting that their superior mental powers warrant greater political powers. David Hume (1777) also came to a pessimistic conclusion about the coexistence of citizens with vastly different powers in An Enquiry Concerning the Principles of Morals: “Were there a species of creatures, intermingled with men, which, though rational, were possessed of such inferior strength, both of body and mind, that they were incapable of all resistance, and could never, upon the highest provocation, make us feel the effects of their resentment; the necessary consequence, I think, is, that we should be bound, by the laws of humanity, to give gentle usage to these creatures, but should not, properly speaking, lie under any restraint of justice with regard to them, nor could they possess any right or property, exclusive of such arbitrary lords. Our intercourse with them could not be called society, which supposes a degree of equality; but absolute command on the one side, and servile obedience on the other.” In other words, even if the humans 1.0 were capable of rationality, of being asked for their consent for decisions that affect them, when the gap in cognitive ability and political power becomes too great, Hume proposes, legal equality becomes impossible. Even under the optimistic scenario that posthumans feel a sense of nostalgic noblesse oblige towards ur-humans, they might find claims that humans should give consent as pointless as asking the permission of a child or an animal (a scenario Dan Wikler considers in his 2009 essay “Paternalism in the Age of Cognitive Enhancement”).
This might especially be the case if the benefit to be conferred was cognitive enhancement that enabled us to understand the importance of the benefit, and only then to exercise meaningful self-determination.
These are not abstract questions, but practical challenges we face day-to-day from clinical ethics to international law. With the mentally ill and brain damaged, we have to carefully parse when coercion is required in the subject’s best interest. Coercing someone with mental illness to take their meds may return them to self-determination. In foreign policy, we face similar questions about the overthrow of dictatorships and the “imposition” of democracy. Only the most extreme moral absolutist would insist there is never a circumstance when it is morally obligatory to coerce someone toward their own freedom. The transhumanist debate over animal “uplift”—a term coined and given narrative flesh by new IEET fellow David Brin—has indirectly addressed this conundrum. (The discussion was carried out for instance on the technoliberation list in 2006.) In Citizen Cyborg, I argued that great apes had cognitive and emotional capacities sufficiently close to human that they should enjoy basic human rights, the position argued by the Great Ape Project. But apes are cognitively and therefore morally like human children in that they cannot meaningfully be asked for or provide consent to decisions that affect them. As with children, I argue, we have an obligation to provide apes the means to reach cognitive maturity, through pharmacological, genetic, and nanotechnological cognitive enhancement, so that they can exercise full self-determination. In response, critics such as Dale Carrico asserted that the project was a form of eugenics and cultural imperialism, forcing a human model of the Good on other species. Whether humans have a right to insist on universal respect for human rights or not, he claimed moral universalism does not extend across species boundaries. Great apes should not be forced to adopt human cognition. George Dvorsky and I, following Peter Singer, argued that species is morally irrelevant.
Are Transhumanist Values Universal or Parochially Human?
To what extent are transhumanist values inescapably human? In 2007, Nicholas Agar responded to Nick Bostrom’s “In Defense of Posthuman Dignity” by arguing that while there are some human moral universals, such as the moral status of human and posthuman persons, transhumanism is actually espousing values that are rooted in the human experience: “Some of our values are universal. When we identify them as such we say that they are values for everyone. Good examples are core moral values. One’s moral status should not depend on who is making the judgment. You are a morally considerable being irrespective of whether your spouse or a complete stranger is asking the question. Other values are local. They depend on who is judging. The values we place on family and friends are to a large extent local. A parent can expect that you recognize the moral considerability of her child, but she should not expect you to value him just as she does…. much of the value we place on our own humanity is local. I value humanity because I’m human. I wouldn’t trade my humanity for post-humanity even though I recognize that posthumans are objectively superior. Its being a local value means that I do not expect the value that I place on humanity to be accessible to posthumans, just as, pace Bostrom, posthuman values aren’t available to me.” Agar understands that even if some people do value being human, this does not place a moral obligation on those who do not value humanness and wish to pursue enhancement. But Agar feels that the desire for enhancement is itself rooted in local, human values rather than in universal transhuman values.
Transhumanists take pride in achievements that are meaningless except by reference to humanity. I imagine that they take pleasure in writing fine books defending transhumanism rather than feeling annoyance they weren’t able to ask a time-traveling posthuman to give the subject a far superior treatment. Nonetheless, Agar sees other core transhumanist values, such as the value on health and longevity, as universal. He believes that the bioconservative defense of illness and death as central to human experience confuses the local value on humanness for the universal value on health and life. It would be callous to retain pain and suffering if we could eliminate them so that the fortunate among us can overcome and emerge with our characters deepened. We can avoid making a brief in favor of pain and suffering by advocating the elimination of horrible diseases as a universal value. This means recognizing that the dominant effect of metastatic cancer is to thwart human flourishing rather than to deepen the characters of onlookers and occasional survivors. The universal value of preventing and curing disease does not seem to be inconsistent with the local value of humanity. There doesn’t seem to be anything spookily posthuman about someone who makes it through to a ripe old age without having succumbed to cancer.
For a Postmodern Transhuman Moral Universalism
Transhumanists, especially of the libertarian variety, have retreated too far from Enlightenment moral universalism, towards moral relativism. We need to reassert our commitment to moral universalism and the political project of equality for all persons, and to institutions of global governance powerful enough to enforce world law and individual rights. Certainly, as Russell Blackford and Claudio Corradetti (2009) have argued, our moral universalism 2.0 needs to be more sophisticated. We partisans of the Enlightenment cannot defend moral universalism by re-asserting that rights are God-given, natural, or self-evident. We have to acknowledge that rights and moral status are social agreements, shifting daily with the balance of political forces seeking to limit and expand them. Moral universalism needs to be tempered with respect for diversity and, where meaningful, respect for individual consent and collective self-determination. Our moral universalism needs to acknowledge the limits of our current perspective, the possibility that some of our universals may in fact be parochially human, and that our descendants may come up with better ethical and political models. But for today, just as we should not shy from working to stop Iranian torture of prisoners, Chinese net censorship, or Sudanese ethnic cleansing over objections that “that’s how we do it here,” we should be actively promoting a common standard of moral obligation across species boundaries, animal, human, and posthuman.
Along with the value of Reason, the Enlightenment thinkers shared a faith in the inevitability of human Progress. In fact, Enlightenment thinkers portrayed themselves as having invented the idea of human progress, dismissing all pre-Enlightenment views of history as static or cyclical (Bury, 1920). Historians have disputed the novelty of the Enlightenment faith in progress, pointing to precedents in pre-Enlightenment thought (Nisbet, 1979). In fact, the faith in progress appears to be more of a secularization of Christian eschatology than its repudiation (Becker, 1932). Nonetheless there is a clear difference between Enlightenment beliefs in continual, linear political, intellectual, and material improvement, and the dominant Christian historical narrative in which little would change until the End Times and Christ’s return. Kant (1784), for instance, in his Idea of a Universal History from a Cosmopolitical Point of View, argued for the inevitable progress and moral perfection of man on religious grounds: “All capacities implanted in a creature by nature are destined to unfold themselves completely and conformably to their end, in the course of time…The history of the human race, viewed as a whole, may be regarded as the realization of a hidden plan of nature to bring about a political constitution, internally, and, for this purpose, also externally perfect, as the only state in which all the capacities implanted by her in mankind can be fully developed.” (quoted in Nisbet, 1979) Another famous statement of the inevitability of progress was written by our proto-transhumanist Enlightenment hero the Marquis de Condorcet in his 1795 Sketch for a Historical Picture of the Progress of the Human Mind. In this last monograph, Condorcet expresses his conviction that humanity will eventually conquer all oppression, inequality, ignorance, and even death and the need to toil.
Condorcet held that human progress had advanced through nine stages, with the coming tenth stage being one of the complete liberation of human possibility: “Such is the aim of the work that I have undertaken, and its result will be to show by appeal to reason and fact that nature has set no term to the perfection of human faculties; that the perfectibility of man is truly indefinite; and that the progress of this perfectibility, from now onwards independent of any power that might wish to halt it, has no other limit than the duration of the globe upon which nature has cast us. This progress will doubtless vary in speed, but it will never be reversed as long as the earth occupies its present place in the system of the universe, and as long as the general laws of this system produce neither a general cataclysm nor such changes as will deprive the human race of its present faculties and its present resources… It is reasonable to hope that all other diseases may likewise disappear as their distant causes are discovered. Would it be absurd then to suppose that this perfection of the human species might be capable of indefinite progress; that the day will come when death will be due only to extraordinary accidents or to the decay of the vital forces, and that ultimately the average span between birth and decay will have no assignable value?...” Condorcet believed that humanity’s progress could be predicted as certainly as natural phenomena. This Enlightenment faith in the inevitability of political and scientific progress continues down through Comte’s “positivism” and Marxist theories of historical determinism to neoconservative triumphalism about the “end of history” in democratic capitalism.
Even Darwinism’s theory of natural selection was subordinated to the doctrine of inevitable progress, aided in part by Darwin’s own teleological interpretation: “As all the living forms of life are the lineal descendants of those which lived long before the Silurian epoch, we may feel certain that the ordinary succession by generation has never once been broken, and that no cataclysm has desolated the whole world. Hence we may look with some confidence to a secure future of equally inappreciable length. And as natural selection works solely by and for the good of each being, all corporeal and mental endowments will tend to progress towards perfection.” (Darwin, 1859) But this belief in the historical inevitability of progress has also always been in conflict with the rationalist, scientific observation that humanity could regress or disappear altogether. Enlightenment pessimism, or at least realism, has dogged the heels of Enlightenment optimism. Henry Vyberg (1958) showed that there were French Enlightenment thinkers who did not believe in linear historical progress, but proposed historical cycles or even decadence instead. Rousseau is generally seen as having believed in the superiority of the “savage” over the civilized. Vico and Montesquieu believed all civilizations were subject to cycles of progress and decay. In D’Alembert’s Dream, Denis Diderot (1769) muses that humanity could regress into inertia, or into a Borg-anism, as easily as it could progress into a society of free individuals: “Who knows if everything isn’t tending to reduce itself to a large, inert, and immobile sediment? Who knows how long this inertia will last? Who knows what new race could result some day from such a huge heap of sensitive and living points? Why not a single animal? … Watch out for the logical fallacy of the ephemeral…when a transitory being believes in the immortality of things.”
Certainly the theory of natural selection provides no support for “progress”; it implies only that humanity, like all creatures, is on a random walk through a minefield, that human intelligence is only an accident, and that we could easily go extinct as many species have done. As Thomas Henry Huxley noted in 1888 in The Struggle for Existence in Human Society: “It is an error to imagine that evolution signifies a constant tendency to increased perfection. That process undoubtedly involves a constant remodelling of the organism in adaptation to new conditions; but it depends on the nature of those conditions whether the directions of the modifications effected shall be upward or downward.” Faith in the inevitability of progress has waxed and waned with historical events. It can be found in New Age beliefs that the world is headed for a millennial age, and in techno-optimist futurism. But since the rise and fall of fascism and communism, the implosion of New Left and countercultural utopianism, and the mounting evidence of the dangers and unintended consequences of technology, few groups still hold fast to an Enlightenment belief in the inevitability of conjoined scientific and political progress. The transhumanist community, however, is one where many still hold such a faith.
Transhumanist Optimism vs. Future Uncertainty
Transhumanists have inherited the tension between Enlightenment convictions in the inevitability of progress, and the Enlightenment’s scientific, rational realism that human progress or even civilization may fail. In the 1990s, transhumanists were full of exuberant Enlightenment optimism about unending progress. For instance, Max More’s 1998 Extropian Principles defined “Perpetual Progress” as the first precept of their brand of transhumanism: “Seeking more intelligence, wisdom, and effectiveness, an indefinite lifespan, and the removal of political, cultural, biological, and psychological limits to self-actualization and self-realization. Perpetually overcoming constraints on our progress and possibilities. Expanding into the universe and advancing without end.” (More, 1998) For More himself, this principle was more a normative goal than a faith in historical inevitability. In 2002 he said, for instance: “...extremely fast phase change from human to transhuman to posthuman appears as a highly likely scenario. I do not see it as inevitable. It will take vast amounts of hard work, intelligence, determination, and some wisdom and luck to achieve. It’s possible that some humans will destroy the race through means such as biological warfare. Or our culture may rebel against change, seduced by religious and cultural urgings for ‘stability,’ ‘peace’ and against ‘hubris’ and ‘the unknown.’...History since the Enlightenment makes me wary of all arguments to inevitability…” (More and Kurzweil, 2002) Similarly, for Greg Burch in his “Progress, Counter-Progress, and Counter-Counter-Progress” address to the final, 2001 conference of the Extropians, the Enlightenment and transhumanist commitment to progress is a commitment to a political program, fully cognizant that there are many powerful enemies of progress and that victory is not inevitable: “...we are poised to continue the program of the Enlightenment, now with a full set of tools only imagined by its founders. Unfortunately, in these last three centuries, the enemies of progress have had time to prepare their positions for this renewal of progress outside of the purely scientific and technological realms….opposition to the core notion of humane progress should give us cause for deep concern. As my graphical depiction of those who stand opposed to continuing with the program of the Enlightenment demonstrates, we are in a very real sense completely encircled in the cultural, social and political realms.” (Burch, 2001) Nonetheless, for many extropians and transhumanists perpetual progress was an unstoppable train; one either got on board for transcension or consigned oneself to the graveyard. Greg Stock’s 1993 Metaman: The Merging of Humans and Machines into a Global Superorganism, for instance, harked back to Condorcet’s conviction that the spread of global commerce and communication would lead humanity to an inevitable quickening of consciousness. A few transhumanists, such as John Smart (2008), even linked this historical teleology to religious eschatologies such as Teilhard de Chardin’s belief that humanity would converge into a divine Omega Point. Since the 2000 dot-com crash, however, transhumanists have increasingly tempered their expectations about progress. While some transhumanists still press for technological innovation on all fronts and oppose all regulation, others are focusing on reducing the civilization-ending potentials of asteroid strikes, genetic engineering, artificial intelligence and nanotechnology. One influential example of this anti-millennial realism is Nick Bostrom’s 2001 essay “Analyzing Human Extinction Scenarios and Related Hazards,” which sketched out the “bangs,” “crunches,” “shrieks,” and “whimpers” that could end human existence. He specifically includes not just scenarios that wipe out the species, but also scenarios in which we gradually evolve into dead ends, like H.G. Wells’ Eloi and Morlocks in The Time Machine.
In other words, Bostrom addresses not just how we can ensure that there are descendants of humanity, but also how we can ensure that we will be proud to claim them. Subsequently, Bostrom began work on catastrophic risk estimation at the Future of Humanity Institute at Oxford, and edited the 2008 Global Catastrophic Risks volume with the transhumanist astrophysicist and IEET fellow Milan Cirkovic. Catastrophic risk is also a programmatic focus for the Institute for Ethics and Emerging Technologies and for the transhumanist non-profit, the Lifeboat Foundation. Bostrom has urged transhumanists to be more critical of technological progress, since: “...it is far from a conceptual truth that expansion of technological capabilities makes things go better. [And] even if empirically we find that such an association has held in the past (no doubt with many big exceptions), we should not uncritically assume that the association will always continue to hold.” (Bostrom, 2009) The tension between eschatological certainty and pessimistic risk assessment has played out in the debate over the Singularity. Ray Kurzweil (2005) staunchly defends the unstoppability of his accelerating trendlines towards a utopian merger of enhanced humanity and godlike artificial intelligence by pointing to the steady exponential march of technological progress through wars and depressions. He gives little weight to dystopian and apocalyptic predictions of how humanity might fare under superintelligent machines, suggesting instead that we will merge with them into apotheosis.
Technoprogressive Optimism of Will, Pessimism of the Intellect
The IEET has been a site for teasing out this tension between “optimism of the will and pessimism of the intellect,” as Antonio Gramsci framed it. On the one hand, we have championed the possibility of, and evidence of, human progress. By adopting the term “technoprogressivism” as our outlook, we have placed ourselves on the side of Enlightenment political and technological progress. On the other hand, we have promoted technoprogressivism precisely in order to critique uncritical techno-libertarian and futurist ideas about the inevitability of progress. We have consistently emphasized the negative effects that unregulated, unaccountable, and inequitably distributed technological development could have on society. Technoprogressivism is an insistence that technological progress needs to be wedded to, and depends on, political progress, and that neither is inevitable. For instance, in 2005 we published Dale Carrico’s essay “Progress as a Natural Force Versus Progress as the Great Work”: “...there is all the difference in the world between those who profess to believe in progress and those who would work to achieve it. When progress is imagined to be some kind of force that the knowledgeable can discern in history, a natural force in which one can believe with one’s whole heart or to which one can profess one’s full faith, or, better yet, a force in the name of which one can claim to be some kind of priestly mouthpiece, then it tends to be little more than a self-congratulatory fable that the powerful and their orbiting opportunists tell themselves to deny the part luck has played in their attainment of power and then to justify the bad behavior they typically employ subsequently to maintain it… And for those who are swept up in the exhilaration of some particular narrative of natural progress it is likewise difficult to see past the mandate of inevitability it confers, difficult to perceive the winning streak it celebrates as one that can ever come to an end, that the players it extols can ever lose their way, that the forces it documents can ever peter out. While it is easy to find examples of this kind of naturalizing idea of progress in the crass champions of Empire from the Edwardian English to the Project for a New American Century, I offer up as a slightly less obvious example something that strikes closer to home (for me, at any rate): the kind of corporate futurists and science fiction fanboys who sometimes like to glibly handwave about the inevitable consequences of accelerating technological development.” In 2008, “Millennial Tendencies in Responses to Apocalyptic Threats” was published as an essay in Nick Bostrom and Milan Cirkovic’s Global Catastrophic Risks. That essay argued that millennialism is a psycho-cultural dynamic found throughout world history, in many different civilizations, including in contemporary secular technomillennialism.
The essay identified four characteristic cognitive biases that millennialism generates: over-optimistic expectation of the inevitability of utopia, over-pessimistic expectation of the certainty of apocalypse, fatalism about the irrelevance of human effort to effect the outcome, and misplaced messianic beliefs about the magical efficacy of particular individuals or actions to avoid apocalypse and ensure utopia. It proposed that Kurzweilian Singularitarianism is a manifestation of millennial over-optimism and fatalism, while people like Hugo de Garis, certain that AI will eventually cause “mega-deaths,” represent over-pessimism and fatalism. Among some followers of the Singularity Institute for Artificial Intelligence, on the other hand, one can find magical messianic thinking about the importance of certain activities that will supposedly create friendly AI, preventing apocalypse and ensuring utopia. In December 2009 we published Phil Torres’s (writing as Philippe Verdoux) “Transhumanism, Progress and the Future,” which again critiqued transhumanism’s belief in inevitable progress. Verdoux offers three critiques of the transhumanist and Enlightenment faith in progress: futurological, historical, and anthropological. The futurological argument is that our technological capabilities are exponentially increasing our capacity to wipe ourselves out. The historical argument is that transhumanists tend to cherry-pick the signs of progress, ignoring both signs of stagnation and evidence that “progress” creates the very problems it purports to solve, such as cures for cancers that were themselves caused by toxins. The anthropological argument is that pre-moderns were probably as happy as, or happier than, we moderns. Verdoux goes on to argue for transhumanism on moral grounds and as a less dangerous course than any attempt at “relinquishing” technological development, but only after the naive faith in progress has been set aside.
In this, Verdoux is very similar to the 21st-century Left, arguing for egalitarianism and radical democracy on moral grounds but without any of Marxism’s historical inevitabilism or utopianism, and cautious of the tragic history of communism. Unfortunately, the “rational capitulationism” to the transhumanist future that Verdoux offers, like the managerial centrism of contemporary social democracy, is not something that stirs men’s souls. We need to embrace these critical, pessimistic voices and perspectives, but also re-discover our capacity for vision and hope. In 2009, at Nick Bostrom’s urging, the Board of Directors of Humanity+ adopted a new version of the Transhumanist Declaration, which replaced this 1998 language: “Transhumanists think that by being generally open and embracing of new technology we have a better chance of turning it to our advantage than if we try to ban or prohibit it…. In planning for the future, it is mandatory to take into account the prospect of dramatic technological progress. It would be tragic if the potential benefits failed to materialize because of ill-motivated technophobia and unnecessary prohibitions. On the other hand, it would also be tragic if intelligent life went extinct because of some disaster or war involving advanced technologies.” with these lines: “We recognize that humanity faces serious risks, especially from the misuse of new technologies. There are possible realistic scenarios that lead to the loss of most, or even all, of what we hold valuable. Some of these scenarios are drastic, others are subtle. Although all progress is change, not all change is progress. Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented. Reduction of existential risks, and development of means for the preservation of life and health, the alleviation of grave suffering, and the improvement of human foresight and wisdom should be pursued as urgent priorities, and heavily funded.” Voting in favor—while serving as members of the Humanity+ Board of Directors—were myself and IEET managing director Mike Treder, IEET Board members Nick Bostrom, George Dvorsky, and Michael LaTorra, and IEET Fellow Ben Goertzel. One of the motivations behind the creation of the “technoprogressive” brand has been to distinguish Enlightenment optimism about the possibility of human political, technological and moral progress from millennialist techno-utopian inevitabilism. Without optimism that humans can collectively exercise foresight and invention, and peacefully deliberate our way to a better future, we too easily fall into the traps of utopian or apocalyptic fatalism, or fixation on techno-fixes and dei ex machina.
Remaining always mindful of the myriad ways that our indifferent universe threatens our existence and how our growing powers come with unintended consequences is the best way to steer towards progress in our radically uncertain future.