Verging on the realm of science fiction, my question is: is there any theoretically possible way, biological or chemical, by which the entire human race could be killed without affecting the rest of the biosphere at all? I am only curious about what is theoretically possible, no matter how unlikely. If we encountered a sufficiently advanced alien civilization bent on our destruction, would it be possible for them to make sure all humans die without harming any other living organism on our planet?
Is there anything biological or chemical which makes us unique or different from other lifeforms on Earth? Something which could be used against us? Or are we so similar to so many other species that we cannot be wiped out without other lifeforms being harmed, or without our destruction being left uncertain? Is it possible to artificially make humans extinct?
Infectious disease is probably the only means I can think of. Viruses, for example, may have tropisms based on the receptors to which they attach. These receptors may be, and often are, unique to a species, so an infection could target humans very specifically. The differences between humans and other animals determine the weaknesses and resistances we would have to such an infection. As an international species, we could spread the disease quite rapidly. However, our intelligence means we could, given time, understand the threat and combat it.
If an infectious disease killed so rapidly that developing or administering a cure was not feasible, the disease itself would (theoretically) be wiped out by quarantine measures. It is not advantageous for an infection to kill its host too quickly, since the infection's own survival depends on that of its host.
Thus we want an infection that is non-pathogenic at first and then rapidly becomes pathogenic across the whole world at once. It would need to spread to every individual, so it would have to be incredibly infectious, no individual could have any resistance to it, and ideally it would kill everyone at roughly the same time, or at least quickly enough to deny us the time to come up with a cure. The only way I can think of to fulfil all of those conditions is if the infection on its own were non-pathogenic, but a novel secondary universal factor then made it pathogenic. This factor would have to be present at low levels prior to infection and then rapidly escalate to worldwide distribution.
Disclaimer: Obviously this is theoretical and probably not at all possible.
Trick question. By driving humans to extinction, you almost certainly will be driving all human-specific parasites and pathogens to extinction as well.
This answer is building on the answer by @BrandonInvergo and focusing on the "… possible way… the entire human race can be killed without affecting the rest of the biosphere… " part of your question.
No, it is impossible to remove humans without affecting the biosphere (barring some extremely made-up sci-fi scenario) because many species are adapted to us. It doesn't stop with the parasites and pathogens in Brandon's answer. Most obviously, the majority of all domesticated animals and plants would go extinct, and these number ~750 species (Duarte et al 2007). A large number of other species (e.g. plants, insects, birds) are also adapted to man-made environments, and many would likely go extinct if humans disappeared; these effects would cascade into their respective ecosystems. Just look at what is happening to species adapted to traditional low-intensity pasturelands in, for example, Europe.
Human extinction is the hypothetical complete end of the human species. This may result either from natural causes or due to anthropogenic (human) causes, but the risks of extinction through natural disaster, such as an asteroid impact or large-scale volcanism, are generally considered to be comparatively low.  Anthropogenic human extinction is sometimes called omnicide.
Many possible scenarios of anthropogenic extinction have been proposed, such as climate change, global nuclear annihilation, biological warfare and ecological collapse. Some scenarios center on emerging technologies, such as advanced artificial intelligence, biotechnology, or self-replicating nanobots. The probability of anthropogenic human extinction within the next hundred years is the topic of an active debate.
Human Actions and the Sixth Mass Extinction
Over 99 percent of all species that ever lived on Earth have gone extinct. Five mass extinctions are recorded in the fossil record. They were caused by major geologic and climatic events. Evidence shows that a sixth mass extinction is occurring now. Unlike previous mass extinctions, the sixth extinction is due to human actions.
Some scientists consider the sixth extinction to have begun with early hominids during the Pleistocene. They are blamed for over-killing big mammals such as mammoths. Since then, human actions have had an ever greater impact on other species. The present rate of extinction is between 100 and 100,000 species per year. In 100 years, we could lose more than half of Earth's remaining species.
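As a rough plausibility check on the figures above, the quoted loss rates can be compared with the total number of species on Earth. The total used below (~8.7 million) is a commonly cited estimate that is not given in the text, and a constant loss rate is a simplifying assumption:

```python
# Rough plausibility check on the extinction-rate figures quoted above.
# Assumes ~8.7 million species on Earth (a common estimate, not stated
# in the text) and a constant loss rate - both simplifications.

TOTAL_SPECIES = 8_700_000
YEARS = 100

for rate in (100, 100_000):  # quoted low and high bounds, species/year
    lost = rate * YEARS
    print(f"{rate:>7,}/yr -> {lost:>10,} lost in {YEARS} yrs "
          f"({lost / TOTAL_SPECIES:.1%} of total)")
```

Under these assumptions, only the high end of the quoted range would account for losing more than half of all species within a century; the low end would remove well under one percent.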
Causes of Extinction
The single biggest cause of extinction today is habitat loss. Agriculture, forestry, mining, and urbanization have disturbed or destroyed more than half of Earth's land area. In the U.S., for example, more than 99 percent of tall-grass prairies have been lost. Other causes of extinction today include:
- Exotic species introduced by humans into new habitats. They may carry disease, prey on native species, and disrupt food webs. Often they can out-compete native species because they lack local predators. An example is described in the Figure below.
- Over-harvesting of fish, trees, and other organisms. This threatens their survival and the survival of species that depend on them.
- Global climate change, largely due to the burning of fossil fuels. This is raising Earth's air and ocean temperatures. It is also raising sea levels. These changes threaten many species.
- Pollution, which adds chemicals, heat, and noise to the environment beyond its capacity to absorb them. This causes widespread harm to organisms.
- Human overpopulation, which is crowding out other species. It also makes all the other causes of extinction worse.
The brown tree snake is an exotic species that has caused many extinctions on Pacific islands such as Guam.
Effects of Extinction
The results of a study released in the summer of 2011 have shown that the decline in the numbers of large predators like sharks, lions and wolves is disrupting Earth's ecosystem in all kinds of unusual ways. The study, conducted by scientists from 22 different institutions in six countries, confirmed the sixth mass extinction. The study states that this mass extinction differs from previous ones because it is entirely driven by human activity through changes in land use, climate, pollution, hunting, fishing and poaching. The effects of the loss of these large predators can be seen in the oceans and on land.
- Fewer cougars in the western US state of Utah led to an explosion of the deer population. The deer ate more vegetation, which altered the path of local streams and lowered overall biodiversity.
- In Africa, where lions and leopards are being lost to poachers, there is a surge in the number of olive baboons, who are transferring intestinal parasites to humans living nearby.
- In the oceans, industrial whaling led to a change in the diet of killer whales, which now eat more sea lions, seals, and otters and have dramatically lowered the populations of those species.
The study concludes that the loss of big predators has likely driven many of the pandemics, population collapses and ecosystem shifts the Earth has seen in recent centuries.
Around the world, frogs are declining at an alarming rate due to threats like pollution, disease, and climate change. Frogs bridge the gap between water and land habitats, making them the first indicators of ecosystem changes.
Scoop a handful of critters out of the San Francisco Bay and you'll find many organisms from far away shores. Invasive kinds of mussels, fish, and more are choking out native species, challenging experts around the state to change the human behavior that brings them here.
How You Can Help Protect Biodiversity
There are many steps you can take to help protect biodiversity. For example:
- Consume wisely. Reduce your consumption wherever possible. Re-use or recycle rather than throw out and buy new. When you do buy new, choose products that are energy efficient and durable.
- Avoid plastics. Plastics are made from petroleum and produce toxic waste.
- Go organic. Organically grown food is better for your health. It also protects the environment from pesticides and excessive nutrients in fertilizers.
- Save energy. Unplug electronic equipment and turn off lights when not in use. Take mass transit instead of driving.
Why is the salmon population of Northern California so important? Salmon not only provide food for humans but also supply necessary nutrients for their ecosystems. Because of a sharp decline in their numbers, due in part to human interference, the entire salmon fishing season off California and Oregon was canceled in both 2008 and 2009. The species in the most danger of extinction is the California coho salmon.
Artificial Ape Man: How Technology Created Humans
Darwin is one of my heroes, but I believe he was wrong in seeing human evolution as a result of the same processes that account for other evolution in the biological world - especially when it comes to the size of our cranium.
Darwin had to put large cranial size down to sexual selection, arguing that women found brainy men sexy. But biomechanical factors make this untenable. I call this the smart biped paradox: once you are an upright ape, all natural selection pressures should be in favour of retaining a small cranium. That's because walking upright means having a narrower pelvis, capping babies' head size, and a shorter digestive tract, making it harder to support big, energy-hungry brains. Clearly our big brains did evolve, but I think Darwin had the wrong mechanism. I believe it was technology. We were never fully biological entities. We are and always have been artificial apes.
So you are saying that technology came before humans?
The archaeological record shows chipped stone tool technologies earlier than 2.5 million years ago. That's the smoking gun. The oldest fossil specimen of the genus Homo is at most 2.2 million years old. That's a gap of more than 300,000 years - more than the total length of time that Homo sapiens has been on the planet. This suggests that earlier hominins called australopithecines were responsible for the stone tools.
Is it possible that we just don't have a genus Homo fossil, but they really were around?
Some researchers are holding out for an earlier specimen of genus Homo. I'm trying to free us to think that we had stone tools first and that those tools created a significant part of our intelligence. The tools caused the genus Homo to emerge.
How do we know the chipped stones were used as tools?
If you wanted to kill something or to defend yourself, you don't need a chipped stone tool - you can just pick up a rock and throw it. With chipped stone, something else is going on, something called "entailment": using one thing to make another. You're using some object to chip the stone into a particular shape with the intention of using it for something else. There's an operational chain - one tool entails another.
What were these tools used for?
Upright female hominins walking the savannah had a real problem: their babies couldn't cling to them the way a chimp baby could cling to its mother. Carrying an infant would have been the highest drain on energy for a hominin female - higher than lactation. So what did they do? I believe they figured out how to carry their newborns using a loop of animal tissue. Evidence of the slings hasn't survived, but in the same way that we infer lungs and organs from the bones of fossils that survive, it is from the stone tools that we can infer the bits that don't last: things made from sinew, wood, leather and grasses.
How did the slings shape our evolution?
Once you have slings to carry babies, you have broken a glass ceiling - it doesn't matter whether the infant is helpless for a day, a month or a year. You can have ever more helpless young and that, as far as I can see, is how encephalisation took place in the genus Homo. We used technology to turn ourselves into kangaroos. Our children are born more and more underdeveloped because they can continue to develop outside the womb - they become an extra-uterine fetus in the sling. This means their heads can continue to grow after birth, solving the smart biped paradox. In that sense technology comes before the ascent to Homo. Our brain expansion only really took off half a million years after the first stone tools. And they continued to develop within an increasingly technological environment.
You write in the book that this led to a "survival of the weakest". What does this mean?
Technology allows us to accumulate biological deficits: we lost our sharp fingernails because we had cutting tools, we lost our heavy jaw musculature thanks to stone tools. These changes reduced our basic aggression, increased manual dexterity and made males and females more similar. Biological deficits continue today. For example, modern human eyesight is on average worse than that of humans 10,000 years ago.
Unlike other animals, we don't adapt to environments - we adapt environments to us. We just passed a point where more people on the planet live in cities than not. We are extended through our technology. We now know that Neanderthals were symbolic thinkers, probably made art, had exquisite tools and bigger brains. Does that mean they were smarter?
Evidence shows that over the last 30,000 years there has been an overall decrease in brain size and the trend seems to be continuing. That's because we can outsource our intelligence. I don't need to remember as much as a Neanderthal because I have a computer. I don't need such a dangerous and expensive-to-maintain biology any more. I would argue that humans are going to continue to get less biologically intelligent.
If you said to me, you can either have your toes cut off or your whole library destroyed, with no chance of ever accessing those works again, I'd say "take my toes" - because I can more easily compensate for that loss. Of course, you could get into a grisly argument over how much of my biology I'd give up before I'd say, "OK, take the Goethe!"
Is human technology really any different from, say, a bird's nest, a spider's web or a beaver's dam?
Some biologists argue that human culture and technology is simply an extension of biological behaviours and in that sense humans are like hermit crabs or spiders. That's an idea known as "niche adaptation". I see human technology as different because of the notion of entailment. A number of philosophers and social anthropologists have argued that the realm of artifice has its own logic - an idea that traces back to Kant's idea of the autonomy of the aesthetic realm. Philosophy, art history and paleoanthropology have to all come together for us to understand who we are.
The point is, the realm of artificial things - that is, technology - has a different generative pattern than the Darwinian pattern of descent with modification. People like to argue that you can apply Darwinian selection to, say, industrial design. That led Richard Dawkins to propose and Susan Blackmore to develop the "meme" idea - cultural analogues of genes that are not biological but they are still replicators and follow the basic logic of biological evolution.
I would argue that memes simply don't make sense. And the reason is that when you look at an artificial object like a chair, for instance, there is no central rule that defines it. There is no way to draw a definite philosophical boundary and say, here are the characteristics that are both necessary and sufficient to define a chair. The chair's meaning is linguistic and symbolic - a chair is a chair because we intend for it to be a chair and we use it in a particular way. Artificial objects are defined in terms of intention and entailment - and that makes artificial things very different from biological things.
People like Ray Kurzweil talk about an impending singularity, when technology will advance at such a rapid pace that it will become intelligent and the world will become qualitatively different. Do you agree?
I am sympathetic to Kurzweil's idea because he is saying that intelligence is becoming technological and I'm saying, that's how it's been from the start. That's what it is to be human. And in that sense, there's nothing scary in his vision of artificial intelligence. I don't see any sign of intentionality in machine intelligence now. I'm not saying it will never happen, but I think it's a lot further away than Kurzweil says.
Will computers eventually be able to develop their own computers that are even smarter than them, creating a sudden acceleration that leaves the biological behind and leaves us as a kind of pond scum while the robots take over? That scenario implies a sharp division between humans and our technology, and I don't think such a division exists. Humans are artificial apes - we are biology plus technology. We are the first creatures to exist in that nexus, not purely Darwinian entities. Kurzweil says that the technological realm cannot be reduced to the biological, so there we agree.
At the end of the book, you note that there is no "back to nature" solution to climate change. Does that mean our species was doomed from the start?
The point is, we were never fully biological entities, so there is no "nature" to go back to, for us. Wait, you might ask, what about people who "live in nature", people like the Aborigines in Tasmania? In fact, the Tasmanians used technology to adapt and survive and they might have done that for maybe another 40,000 years. The issue is that their type of technology - non-entailed - is not the way humans will survive in the final scenario. Ultimately we need major progress - because even without climate change, the sun is eventually going to blow up.
Now, you might think that's a ridiculously long time away, but that's the kind of ridiculous timescale palaeoanthropologists think about. I look back 4 million years and see our emergence and our evolution and then I look forward 4 million years because those are the timescales I'm used to. And in the long run, humans will go extinct if we can't get off this planet. The only way out, ultimately, is up. The Tasmanians didn't have the kind of technology that would lead them there, but we do.
Timothy Taylor is an archaeologist and anthropologist at the University of Bradford, UK. His book The Artificial Ape: How technology changed the course of human evolution is published by Palgrave Macmillan this month.
Humans will be extinct in 100 years, says eminent scientist Professor Frank Fenner
(PhysOrg.com) -- Eminent Australian scientist Professor Frank Fenner, who helped to wipe out smallpox, predicts humans will probably be extinct within 100 years, because of overpopulation, environmental destruction and climate change.
Fenner, who is emeritus professor of microbiology at the Australian National University (ANU) in Canberra, said Homo sapiens will not be able to survive the population explosion and “unbridled consumption,” and will become extinct, perhaps within a century, along with many other species. United Nations official figures from last year put the human population at 6.8 billion, predicted to pass seven billion next year.
Fenner told The Australian he tries not to express his pessimism, because people are trying to do something, though they keep putting it off. He said he believes the situation is irreversible and that it is too late, because the effects we have had on Earth since industrialization (a period now informally known to scientists as the Anthropocene) rival those of ice ages or comet impacts.
Fenner said that climate change is only at its beginning, but is likely to be the cause of our extinction. “We’ll undergo the same fate as the people on Easter Island,” he said. More people means fewer resources, and Fenner predicts “there will be a lot more wars over food.”
Easter Island is famous for its massive stone statues. Polynesian people settled there, on what was then a pristine tropical island, around the middle of the first millennium AD. The population grew slowly at first and then exploded. As it grew, the forests were wiped out and the tree-dwelling animals went extinct, with devastating consequences. After about 1600 the civilization began to collapse, and it had virtually disappeared by the mid-19th century. Evolutionary biologist Jared Diamond has said the parallels between what happened on Easter Island and what is occurring today on the planet as a whole are “chillingly obvious.”
While many scientists are also pessimistic, others are more optimistic. Among the latter is a colleague of Professor Fenner, retired professor Stephen Boyden, who said he still hopes awareness of the problems will rise and the required revolutionary changes will be made to achieve ecological sustainability. “While there's a glimmer of hope, it's worth working to solve the problem. We have the scientific knowledge to do it but we don't have the political will,” Boyden said.
Fenner, 95, is the author or co-author of 22 books and 290 scientific papers and book chapters. His announcement in 1980 to the World Health Assembly that smallpox had been eradicated is still seen as one of the World Health Organisation’s greatest achievements. He has also been heavily involved in controlling Australia’s feral rabbit population with the myxomatosis virus.
Professor Fenner has had a lifetime interest in the environment, and from 1973 to 1979 was Director of the Centre for Resource and Environmental Studies at ANU. He is currently a visiting fellow at the John Curtin School of Medical Research at the university, and is a patron of Sustainable Population Australia. He has won numerous awards including the ANZAC Peace Prize, the WHO Medal, and the Albert Einstein World Award of Science. He was awarded an MBE for his work on control of malaria in New Guinea during the Second World War, in which Fenner served in the Royal Australian Army Medical Corps.
Professor Fenner will open the Healthy Climate, Planet and People symposium at the Australian Academy of Science next week.
Applying this mindset to biology today
Bioengineering and biology-based solutions can therefore be much more powerful if they include all the solutions biology has generated to solve problems, and not just the thin slice that exists today. Evolution may indeed exhibit certain signs of temporal directionality — because biological histories can be, and often are, contingent — but environmental conditions are what they are, and nothing more. There is no fruitful, absolute basis of comparative fitness or inherent value to be extracted from variations in, say, high or low sulfur content, increased or decreased solar insolation, or freezing or boiling temperatures.
The origin of life may be viewed as part of a continuum that links complex biology to complex geochemistry. Though the emergence of the first self-reproducing cell marked a singularity — a milestone of biological possibility that fixed the architecture of all cells to follow — there is little reason to surmise that the phase of chemical evolution that preceded biological evolution was significantly different or less complex. Recapitulating the origins of life may lead us to discover self-organizing, chemical solutions to problems that are no longer recorded in (or possible to be discovered through) living descendants.
Four billion years of struggle to survive means four billion years of living experience, of biomolecular tinkering, of exploring novelty and possibility that defy current conventional logic. It would be impossible to accumulate this magnitude of information through laboratory experimentation, and should therefore be viewed as a bioinformatic repository without comparison — a Library of Alexandria, not fully lost in the sands of time.
Treating this rich repository as such — combined with modern bioinformatics and molecular biology — not only affords us the opportunity to explore the successful “solutions” employed by both ancient and modern forms, but to leverage those solutions for modern-day problems. This goes far beyond notions of biomimicry, where engineering materials and solutions are modeled on biological ones and something visible today is copied in another form. It is about realizing that ancient biological solutions that have long since been forgotten can be entirely new, and useful, to us in the present.
The CRISPR gene editing system, for instance — arguably the future of genetic engineering (and which went from discovery to practice to Nobel Prize recognition in a relatively short timespan) — is based on a bacteriophage that isn’t even really alive at all. It is hard to imagine a more distant biological entity from us, and yet this tool (and others like it) may shape the contours of human societal evolution for the next century. In this way, reaching into the past can become a means of connecting to an unfathomable range of functional molecular possibilities. It doesn’t particularly matter if the solutions involve our direct ancestors or our far-flung distant kin.
And the array of solutions can range from the mundane to the extraordinary. We may sample more efficient carbon-harvesting techniques for mitigating climate change, create artificial life forms that readily synthesize more ecologically compatible fertilizers, or uncover how molecular language processing in translation can be modified to synthesize entirely new classes of reactive artificial enzymes. Careful observers can begin to see that the next evolutionary phase of scientific exploration of ancient life on Earth may be far more interactive and beneficial than has been imagined: an exploration of new techniques that can bring past states to life to solve our current, and future, most pressing problems.
Betül Kaçar Betül Kaçar is a professor at the University of Arizona. She directs a NASA research consortium exploring the origins, evolution, and distribution of life in the Universe.
The study of existential risks is still a tiny field, with at most a few dozen people at three centers. Not everyone is convinced it's a serious academic discipline. Most civilization-ending scenarios—which include humanmade pathogens, armies of nanobots, or even the idea that our world is a simulation that might be switched off—are wildly unlikely, says Joyce Tait, who studies regulatory issues in the life sciences at the Innogen Institute in Edinburgh. The only true existential threat, she says, is a familiar one: a global nuclear war. Otherwise, "There is nothing on the horizon."
Harvard University psychologist Steven Pinker calls existential risks a "useless category" and warns that "Frankensteinian fantasies" could distract from real, solvable threats such as climate change and nuclear war. "Sowing fear about hypothetical disasters, far from safeguarding the future of humanity, can endanger it," he writes in his upcoming book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.
But advocates predict the field will only get more important as scientific and technological progress accelerates. As Bostrom pointed out in one paper, much more research has been done on dung beetles or Star Trek than on the risks of human extinction. "There is a very good case for saying that science has basically ignored" the issue, Price says.
Humanity has always faced the possibility of an untimely end. Another asteroid the size of the one that ended the dinosaurs' reign could hit Earth; a volcanic cataclysm could darken the skies for years and starve us all.
But existential risks arising from scientific advances were literally fiction until 16 July 1945, when the first atomic bomb was detonated. Based on some back-of-the-envelope calculations, physicist Edward Teller had concluded that the explosion might set off a global chain reaction, "igniting" the atmosphere. "Although we now know that such an outcome was physically impossible, it qualifies as an existential risk that was present at the time," Bostrom writes. Within 2 decades a real existential risk emerged, from growing stockpiles of the new weapons. Physicists had finally assembled Frankenstein's bride.
Other scientific disciplines may soon pose similar threats. "In this century we will introduce entirely new kinds of phenomena, give ourselves new kinds of powers to reshape the world," Bostrom says. Biotechnology is cheaper and easier to handle than nuclear technology has ever been. Nanotechnology is making rapid strides. And at a 2011 meeting in Copenhagen, Estonian computer programmer and Skype co-developer Jaan Tallinn told Price about his deep fears about AI during a shared taxi ride. "I'd never met anyone at that point who took that as seriously as Jaan," says Price, who was about to start working at the University of Cambridge.
Price introduced Tallinn to astronomer Martin Rees, a former president of the Royal Society, who had long warned that as science progresses, it will increasingly place the power to destroy civilization in the hands of individuals. The trio decided to launch CSER, the second such center after Bostrom's Future of Humanity Institute in Oxford, which he launched in 2005. CSER's name was "a deliberate attempt to push the idea of existential risk more towards the mainstream," Price says. "We were aware that people think of these issues as a little bit flaky."
CSER has recruited some big-name supporters: The scientific advisory board includes physicist Stephen Hawking, Harvard biologist George Church, global health leader Peter Piot, and tech entrepreneur Elon Musk. In a sign of just how small the field still is, Tallinn also co-founded FLI in 2014, and Church, Musk, Hawking, Bostrom, and Rees all serve on its scientific advisory board. (Actor Morgan Freeman, who has literally played God, is also an FLI adviser.)
Most of CSER's money comes from foundations and individuals, including Tallinn, who donated about $8 million to existential risk researchers in 2017. CSER's academic output has been "ephemeral" so far, Tallinn concedes. But the center was set up as "a sort of training ground for existential risk research," he says, with academics from elsewhere coming to visit and then "infecting" their own institutions with ideas.
The dozen people working at CSER itself—little more than a large room in an out-of-the-way building near the university's occupational health service—organize talks, convene scientists to discuss future developments, and publish on topics from regulation of synthetic biology to ecological tipping points. A lot of their time is spent pondering end-of-the-world scenarios and potential safeguards.
The release of a dangerous pathogen might cause a “crunch” in the human population.
Church says a "crunch," in which a large part of the world population dies, is more likely than a complete wipe-out. "You don't have to turn the entire planet into atoms," he says. Disrupting electrical grids and other services on a huge scale or releasing a deadly pathogen could create chaos, topple governments, and send humanity into a downward spiral. "You end up with a medieval level of culture," Church says. "To me that is the end of humanity."
Existential risks stemming from the life sciences are perhaps easiest to imagine. Pathogens have proved capable of killing off entire species, such as the frogs that have fallen victim to the amphibian fungus Batrachochytrium dendrobatidis. And four influenza pandemics have swept the world in the past century, including one that killed up to 50 million people in 1918 and 1919. Researchers are already engineering pathogens that in principle could be even more dangerous. Worries about studies that made the H5N1 bird flu strain more easily transmissible between mammals led the United States to halt such research until late last year. Terrorists or rogue states could use labmade agents as a weapon, or an engineered plague could be released accidentally.
Rees has publicly wagered that by 2020, "bioterror or bioerror will lead to 1 million casualties in a single event." Harvard microbiologist Marc Lipsitch has calculated that the likelihood of a labmade flu virus leading to an accidental pandemic is between one in 1000 and one in 10,000 per year of research in one laboratory. Ron Fouchier of Erasmus MC in Rotterdam, the Netherlands, one of the researchers involved in the H5N1 studies, has dismissed that estimate, saying the real risk is more like one in 33 billion per laboratory-year.
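To see what these per-laboratory figures imply in aggregate, the arithmetic can be made explicit. The sketch below treats each lab-year as an independent trial; the number of laboratories and the time horizon are hypothetical assumptions, not figures from the article:

```python
# Accumulated risk across many lab-years, assuming independent trials.
# The per-lab-year probabilities are the ones quoted above; the lab
# count (15) and horizon (10 years) are hypothetical.

def prob_at_least_one_accident(p_per_lab_year, n_labs, years):
    """P(at least one accidental pandemic) over n_labs * years lab-years."""
    return 1 - (1 - p_per_lab_year) ** (n_labs * years)

for p in (1e-4, 1e-3):  # Lipsitch's range: 1 in 10,000 to 1 in 1,000
    risk = prob_at_least_one_accident(p, n_labs=15, years=10)
    print(f"per-lab-year risk {p:.0e}: {risk:.1%} over 150 lab-years")

# Fouchier's far lower estimate, for comparison.
print(f"{prob_at_least_one_accident(1 / 33e9, n_labs=15, years=10):.1e}")
```

Even at the low end of Lipsitch's range, the risk compounds to a few percent over a decade of work in a modest number of labs, which is why the two estimates lead to such different policy conclusions.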
One measure against "bioerror" might be to make researchers who carry out risky experiments buy insurance that would require an independent assessment of the risk and would force researchers to face up to it, Lipsitch says. Still, the most important countermeasure is to strengthen the world's capacity to contain an outbreak early on, he adds, for instance with vaccines. "For biological risks, short of a really massive, coordinated, parallel attack around the world, the only way we are going to get to a really catastrophic scenario is by failing to control a smaller scenario," he says.
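Lipsitch's point that catastrophe requires "failing to control a smaller scenario" can be illustrated with a toy Galton-Watson branching process, in which each case infects a Poisson-distributed number of new cases. Containment that keeps the effective reproduction number below 1 guarantees every outbreak fizzles; all parameters below are illustrative only:

```python
import numpy as np

def extinction_fraction(r0, trials=2000, cap=5000, seed=0):
    """Fraction of simulated outbreaks that die out before reaching `cap`
    total cases; each case infects Poisson(r0) new cases."""
    rng = np.random.default_rng(seed)
    died = 0
    for _ in range(trials):
        active, total = 1, 1
        while active and total < cap:
            active = int(rng.poisson(r0, size=active).sum())
            total += active
        died += total < cap
    return died / trials

# Containment works by pushing the effective reproduction number below 1.
print(f"R=0.8: {extinction_fraction(0.8):.0%} of outbreaks die out")
print(f"R=1.5: {extinction_fraction(1.5):.0%} die out; the rest take off")
```

In the supercritical case a substantial fraction of outbreaks still dies out by chance, which is the window early containment exploits.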
Viruses are unlikely to kill every last human, Bostrom says. For him and others, it is AI that poses truly existential threats. Most scenarios center on machines outsmarting humans, a feat called "superintelligence." If such AI were ever achieved and it acquired a will of its own, it might turn malevolent and actively seek to destroy humans, like HAL, the computer that goes rogue aboard a spaceship in Stanley Kubrick's film 2001: A Space Odyssey.
Most AI experts worry less about machines rising up to overthrow their creators, however, than about them making a fatal mistake. To Tallinn, the most plausible way in which AI could end humanity is if it simply pursued its goals and, along the way, heedlessly created an environment fatal to humans. "Imagine a situation where the temperature rises by 100° or is lowered by 100°. We'd go extinct in a matter of minutes," Tallinn says. Tegmark agrees: "The real problem with AI is not malice, it's incompetence," he says.
A current-day analogy is the 2015 tragedy in which a suicidal Germanwings pilot told his plane's computer to descend to an altitude of 100 meters while flying over the French Alps. The machine complied, killing all 150 on board, even though it had GPS and a topographic map. "It did not have a clue about even the simplest human goal," Tegmark says. To avoid such calamities, scientists are trying to figure out how to teach AI human values and make sure they stick, a problem called "value alignment." "There might be fewer than 20 people who work full time on technical AI safety research," Bostrom says. "A few more talented people might substantially increase the rate of progress."
Critics say these efforts are unlikely to be useful, because future threats are inherently unpredictable. Predictions were a problem in every "foresight exercise" Tait has taken part in, she says. "We're just not good at it." Even if you foresee a risk, economic, political, and societal circumstances will all affect how it plays out. "Unless you know not only what is going to happen, but how it is going to happen, the information is not much use in terms of doing something about it," Tait says.
Pinker thinks the scenarios reveal more about human obsessions than real risks. We are drawn to prospects "that are highly improbable while having big impacts on our fitness, such as illicit sex, violent death, and Walter-Mittyish feats of glory," he writes. "Apocalyptic storylines are undoubtedly gripping—they are a supernormal stimulus for our morbid obsessions." Sure, he says, one can imagine a malevolent, powerful AI that people can no longer control. "The way to deal with this threat is straightforward: Don't build one."
Tallinn argues it's better to be safe than sorry. A 2017 survey showed that 34% of AI experts believed the risks associated with their work are an important problem; 5% said they are "one of the most important problems." "Imagine you're on a plane, and 40% of experts think that there is a bomb on this plane," Tallinn says. "You're not going to wait for the remaining experts to be convinced."
Price says that critics who accuse him and his colleagues of indulging in science fiction are not entirely wrong: Producing doomsday scenarios is not that different from what Shelley did. "The first step is to imagine that range of possibilities, and at that point, the kind of imagination that is used in science fiction and other forms of literature and film is likely to be extremely important," he says.
Scientists have an obligation to be involved, says Tegmark, because the risks are unlike any the world has faced before. Every time new technologies emerged in the past, he points out, humanity waited until their risks were apparent before learning to curtail them. Fire killed people and destroyed cities, so humans invented fire extinguishers and flame retardants. With automobiles came traffic deaths—and then seat belts and airbags. "Humanity's strategy is to learn from mistakes," Tegmark says. "When the end of the world is at stake, that is a terrible strategy."
Artificial Intelligence Applied to the Genome Identifies an Unknown Human Ancestor
Jaume Bertranpetit, researcher at the Institute of Evolutionary Biology, and Oscar Lao, researcher at the Centre for Genomic Regulation, co-led the study CREDIT: Pilar Rodriguez
By combining deep learning algorithms and statistical methods, investigators from the Institute of Evolutionary Biology (IBE), the Centro Nacional de Análisis Genómico (CNAG-CRG) of the Centre for Genomic Regulation (CRG), and the Institute of Genomics at the University of Tartu have identified, in the genomes of Asian individuals, the footprint of an unknown hominid that interbred with their ancestors tens of thousands of years ago.
Computational analysis of modern human DNA suggests that the extinct species was a hybrid of Neanderthals and Denisovans and interbred with Out of Africa modern humans in Asia. This finding would suggest that the hybrid found this summer in the caves of Denisova (the offspring of a Neanderthal mother and a Denisovan father) was not an isolated case, but rather part of a more general introgression process.
The study, published in Nature Communications, uses deep learning for the first time ever to account for human evolution, paving the way for the application of this technology in other questions in biology, genomics, and evolution.
Humans had descendants with a species that is unknown to us
One of the ways of distinguishing between two species is that, while they may crossbreed, they do not generally produce fertile descendants. However, this concept is much more complex when extinct species are involved. In fact, the story told by current human DNA blurs these limits, preserving fragments of hominids from other species, such as the Neanderthals and the Denisovans, who coexisted with modern humans more than 40,000 years ago in Eurasia.
Now, investigators of the Institute of Evolutionary Biology (IBE), the Centro Nacional de Análisis Genómico (CNAG-CRG) of the Centre for Genomic Regulation (CRG), and the University of Tartu have used deep learning algorithms to identify a new, hitherto-unknown ancestor of humans that would have interbred with modern humans tens of thousands of years ago. "About 80,000 years ago, the so-called Out of Africa migration occurred, when part of the human population, which already consisted of modern humans, abandoned the African continent and migrated to other continents, giving rise to all the current populations," explained Jaume Bertranpetit, principal investigator at the IBE and head of Department at the UPF. "We know that from that time onwards, modern humans interbred with Neanderthals in all the continents, except Africa, and with the Denisovans in Oceania and probably in South-East Asia, although the evidence of cross-breeding with a third extinct species had not been confirmed with any certainty."
Deep learning: deciphering the keys to human evolution in ancient DNA
Hitherto, the existence of a third ancestor was only a theory that would explain the origin of some fragments of the current human genome (part of the team involved in this study had already posited the existence of the extinct hominid in a previous study). However, deep learning has made it possible to make the transition from DNA to the demographics of ancestral populations.
The problem the investigators had to contend with is that the demographic models they analyzed are much more complex than anything considered to date, and there were no statistical tools available to analyze them. Deep learning "is an algorithm that imitates the way in which the nervous system of mammals works, with different artificial neurons that specialize and learn to detect, in data, patterns that are important for performing a given task," stated Òscar Lao, principal investigator at the CNAG-CRG and an expert in this type of simulation. "We have used this property to get the algorithm to learn to predict human demographics using genomes obtained through hundreds of thousands of simulations. Whenever we run a simulation we are traveling along a possible path in the history of humankind. Of all simulations, deep learning allows us to observe what makes the ancestral puzzle fit together."
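The simulate-then-learn idea Lao describes can be sketched in miniature. The toy "simulator" below and its admixture parameter are invented for illustration, and an ordinary least-squares fit stands in for the study's deep network; the workflow (simulate many genomes under known parameters, then learn to invert the mapping) is the same in spirit:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_summary(theta, n_snps=500):
    """Invented toy simulator: summary statistics of a genome whose archaic
    admixture fraction is `theta`. Real studies use far richer simulators."""
    archaic = rng.beta(0.5, 0.5, n_snps)   # allele frequencies, archaic source
    modern = rng.beta(2.0, 2.0, n_snps)    # allele frequencies, modern source
    obs = rng.binomial(50, theta * archaic + (1 - theta) * modern) / 50
    return np.array([obs.var(), np.mean(obs < 0.05), np.mean(obs > 0.95), obs.mean()])

# The paper runs hundreds of thousands of simulations; a few thousand here.
thetas = rng.uniform(0.0, 0.2, 3000)
X = np.stack([np.append(simulate_summary(t), 1.0) for t in thetas])  # + bias term

# A linear least-squares fit stands in for the deep network in the study.
w, *_ = np.linalg.lstsq(X, thetas, rcond=None)

# Infer the hidden parameter of "observed" genomes simulated at theta = 0.1.
tests = np.stack([np.append(simulate_summary(0.1), 1.0) for _ in range(25)])
est = float((tests @ w).mean())
print(f"true admixture 0.100, inferred {est:.3f}")
```

Replacing the least-squares step with a deep network lets the same recipe handle demographic models far too complex for hand-derived statistics, which is the advance the study exploits.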
An extinct hominid could explain the history of humankind
The deep learning analysis has revealed that the extinct hominid is probably a descendant of the Neanderthal and Denisovan populations. The discovery of a fossil with these characteristics this summer would seem to endorse the study finding, consolidating the hypothesis of this third species or population that coexisted with modern human beings and mated with them. "Our theory coincides with the hybrid specimen discovered recently in Denisova, although as yet we cannot rule out other possibilities," said Mayukh Mondal, an investigator of the University of Tartu and former investigator at the IBE.
An artificial uterus, sometimes referred to as an "exowomb", would have to provide nutrients and oxygen to nurture a fetus, as well as dispose of waste material. The scope of an artificial uterus (or "artificial uterus system" to emphasize a broader scope) may also include the interface serving the function otherwise provided by the placenta, an amniotic tank functioning as the amniotic sac, as well as an umbilical cord.
Nutrition, oxygen supply and waste disposal
A woman may still supply nutrients and dispose of waste products if the artificial uterus is connected to her. She may also provide immune protection against diseases by passing IgG antibodies to the embryo or fetus.
Artificial supply and disposal have the potential advantage of allowing the fetus to develop in an environment that is not influenced by the presence of disease, environmental pollutants, alcohol, or drugs which a human may have in the circulatory system.  There is no risk of an immune reaction towards the embryo or fetus that could otherwise arise from insufficient gestational immune tolerance.  Some individual functions of an artificial supplier and disposer include:
- Waste disposal may be performed through dialysis. 
- For oxygenation of the embryo or fetus, and removal of carbon dioxide, extracorporeal membrane oxygenation (ECMO) is a functioning technique, having successfully kept goat fetuses alive for up to 237 hours in amniotic tanks.  ECMO is currently a technique used in selected neonatal intensive care units to treat term infants with selected medical problems that result in the infant's inability to survive through gas exchange using the lungs.  However, the cerebral vasculature and germinal matrix are poorly developed in fetuses, and subsequently, there is an unacceptably high risk for intraventricular hemorrhage (IVH) if administering ECMO at a gestational age less than 32 weeks. Liquid ventilation has been suggested as an alternative method of oxygenation, or at least providing an intermediate stage between the womb and breathing in open air. 
- For artificial nutrition, current techniques are problematic. Total parenteral nutrition, as studied on infants with severe short bowel syndrome, has a 5-year survival of approximately 20%. 
- Issues related to hormonal stability also remain to be addressed. 
Theoretically, animal suppliers and disposers may be used, but when involving an animal's uterus the technique may rather be in the scope of interspecific pregnancy.
Uterine wall
In a normal uterus, the myometrium of the uterine wall functions to expel the fetus at the end of a pregnancy, and the endometrium plays a role in forming the placenta. An artificial uterus may include components of equivalent function. Methods have been considered to connect an artificial placenta and other "inner" components directly to an external circulation. 
Interface (artificial placenta)
An interface between the supplier and the embryo or fetus may be entirely artificial, e.g. by using one or more semipermeable membranes such as is used in extracorporeal membrane oxygenation (ECMO). 
There is also potential to grow a placenta using human endometrial cells. In 2002, it was announced that tissue samples of cultured endometrial cells removed from a human donor had been successfully grown. The tissue sample was then engineered to form the shape of a natural uterus, and human embryos were implanted into the tissue. The embryos correctly implanted into the artificial uterus' lining and started to grow. However, the experiments were halted after six days to stay within the permitted legal limits of in vitro fertilisation (IVF) legislation in the United States.
A human placenta may theoretically be transplanted inside an artificial uterus, but the passage of nutrients across this artificial uterus remains an unsolved issue. 
Amniotic tank (artificial amniotic sac)
The main function of an amniotic tank would be to fill the function of the amniotic sac in physically protecting the embryo or fetus, optimally allowing it to move freely. It should also be able to maintain an optimal temperature. Lactated Ringer's solution can be used as a substitute for amniotic fluid. 
Umbilical cord
Theoretically, in the case of premature removal of the fetus from the natural uterus, the natural umbilical cord could be used, kept open by medical inhibition of physiological occlusion and by anti-coagulation, as well as by stenting or creating a bypass to sustain blood flow between the mother and fetus.
Emanuel M. Greenberg
Emanuel M. Greenberg wrote various papers on the topic of the artificial womb and its potential use in the future.
On 22 July 1954 Emanuel M. Greenberg filed a patent on the design for an artificial womb.  The patent included two images of the design for an artificial womb. The design itself included a tank to place the fetus filled with amniotic fluid, a machine connecting to the umbilical cord, blood pumps, an artificial kidney, and a water heater. He was granted the patent on 15 November 1955. 
On 11 May 1960, Greenberg wrote to the editors of the American Journal of Obstetrics and Gynecology. Greenberg claimed that the journal had published the article "Attempts to Make an 'Artificial Uterus'" without including any citations on the topic of the artificial uterus. According to Greenberg, this suggested that the idea of the artificial uterus was a new one, although he himself had published several papers on the topic.
Juntendo University in Tokyo
In 1996, Juntendo University in Tokyo developed the extra-uterine fetal incubation (EUFI).  The project was led by Yoshinori Kuwabara, who was interested in the development of immature newborns. The system was developed using fourteen goat fetuses that were then placed into artificial amniotic fluid under the same conditions of a mother goat.   Kuwabara and his team succeeded in keeping the goat fetuses in the system for three weeks.   The system, however, ran into several problems and was not ready for human testing.  Kuwabara remained hopeful that the system would be improved and would later be used on human fetuses.  
Children's Hospital of Philadelphia
In 2017, researchers at the Children's Hospital of Philadelphia further developed the extra-uterine system. The study used fetal lambs, which were placed in a plastic bag filled with artificial amniotic fluid. The system consists of three main components: a pumpless arteriovenous circuit, a closed sterile fluid environment, and umbilical vascular access. In the pumpless arteriovenous circuit, blood flow is driven exclusively by the fetal heart, combined with a very low resistance oxygenator to most closely mimic normal fetal/placental circulation. The closed fluid environment is important to ensure sterility. The scientists developed a technique for umbilical cord vessel cannulation that maintains a length of native umbilical cord (5–10 cm) between the cannula tips and the abdominal wall, to minimize decannulation events and the risk of mechanical obstruction. The umbilical cords of the lambs are attached to a machine outside of the bag designed to act like a placenta, providing oxygen and nutrients and removing waste. The researchers kept the machine "in a dark, warm room where researchers can play the sounds of the mother's heart for the lamb fetus." The system succeeded in helping premature lamb fetuses develop normally for a month. In total, the scientists ran 8 lambs with stable circuit flow equivalent to the normal flow to the placenta: 5 fetuses from 105 to 108 days of gestation for 25–28 days, and 3 fetuses from 115 to 120 days of gestation for 20–28 days. The longest runs were terminated at 28 days due to animal protocol limitations rather than any instability, suggesting that support of these early gestational animals could be maintained beyond 4 weeks. Alan Flake, a fetal surgeon at the Children's Hospital of Philadelphia, hopes to move testing to premature human fetuses, but this could take anywhere from three to five years to become a reality.
Flake, who led the study, calls the possibility of their technology recreating a full pregnancy a "pipe dream at this point" and does not personally intend to create the technology to do so. 
Eindhoven University of Technology (NL)
Since 2016, researchers at TU/e and partner institutions have aimed to develop an artificial womb that adequately substitutes for the protective environment of the maternal womb in cases of premature birth, preventing health complications. The artificial womb and placenta will provide a natural environment for the baby with the goal of easing the transition to newborn life. The perinatal life support (PLS) system will be developed using breakthrough technology: a manikin will mimic the infant during testing and training, and advanced monitoring and computational modeling will provide clinical guidance.
The consortium of three European universities working on the project consists of Aachen, Milan, and Eindhoven. In 2019 the consortium was granted a subsidy of 3 million euros, and a second grant of 10 million is in progress. Together, the PLS partners provide joint medical, engineering, and mathematical expertise to develop and validate the Perinatal Life Support system using breakthrough simulation technologies. The interdisciplinary consortium will push the development of these technologies forward and combine them to establish the first ex vivo fetal maturation system for clinical use. The project, coordinated by the Eindhoven University of Technology, brings together world-leading experts in obstetrics, neonatology, industrial design, mathematical modelling, ex vivo organ support, and non-invasive fetal monitoring. The consortium is led by Professor Frans van de Vosse and Professor Guid Oei. In 2020, the spin-off Juno Perinatal Healthcare was set up by engineers Jasmijn Kok and Lyla Kok to commercialize the research.
The development of artificial uteri and ectogenesis raises bioethical and legal considerations, and also has important implications for reproductive rights and the abortion debate.
Artificial uteri may expand the range of fetal viability, raising questions about the role that fetal viability plays within abortion law. Within severance theory, for example, abortion rights only include the right to remove the fetus, and do not always extend to the termination of the fetus. If transferring the fetus from a woman's womb to an artificial uterus is possible, the choice to terminate a pregnancy in this way could provide an alternative to aborting the fetus.  
There are also theoretical concerns that children who develop in an artificial uterus may lack "some essential bond with their mothers that other children have". 
Gender equality and LGBT
In the 1970 book The Dialectic of Sex, feminist Shulamith Firestone wrote that differences in biological reproductive roles are a source of gender inequality. Firestone singled out pregnancy and childbirth, making the argument that an artificial womb would free "women from the tyranny of their reproductive biology."  
Arathi Prasad argues in her Guardian column, "How artificial wombs will change our ideas of gender, family and equality," that "It will [...] give men an essential tool to have a child entirely without a woman, should they choose. It will ask us to question concepts of gender and parenthood." She furthermore argues for the benefits for same-sex couples: "It might also mean that the divide between mother and father can be dispensed with: a womb outside a woman's body would serve women, trans women and male same-sex couples equally without prejudice."
The Five Biggest Threats To Human Existence
In the daily hubbub of current "crises" facing humanity, we forget about the many generations we hope are yet to come. Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word "hope" because we face risks, called existential risks, that threaten to wipe out humanity. These risks are not just big disasters, but disasters that could end history.
Not everyone has ignored the long future though. Mystics like Nostradamus have regularly tried to calculate the end of the world. HG Wells tried to develop a science of forecasting and famously depicted the far future of humanity in his book The Time Machine. Other writers built other long-term futures to warn, amuse or speculate.
But had these pioneers or futurologists not thought about humanity's future, it would not have changed the outcome. There wasn't much that people in their position could have done to save us from an existential crisis, or even to cause one.
We are in a more privileged position today. Human activity has been steadily shaping the future of our planet. And even though we are far from controlling natural disasters, we are developing technologies that may help mitigate, or at least, deal with them.
Yet, these risks remain understudied. There is a sense of powerlessness and fatalism about them. People have been talking about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing anything about problems that have not occurred yet (partially because of the availability heuristic – the tendency to overestimate the probability of events we know examples of, and to underestimate events we cannot readily recall).
If humanity becomes extinct, at the very least the loss is equivalent to the loss of all living individuals and the frustration of their goals. But the loss would probably be far greater than that. Human extinction means the loss of meaning generated by past generations, the lives of all future generations (and there could be an astronomical number of future lives) and all the value they might have been able to create. If consciousness or intelligence are lost, it might mean that value itself becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from becoming reality. And we must not fail even once in this pursuit.
With that in mind, I have selected what I consider the five biggest threats to humanity’s existence. But there are caveats that must be kept in mind, for this list is not final.
Over the past century we have discovered or created new existential risks – supervolcanoes were discovered in the early 1970s, and before the Manhattan project nuclear war was impossible – so we should expect others to appear. Also, some risks that look serious today might disappear as we learn more. The probabilities also change over time – sometimes because we are concerned about the risks and fix them.
Finally, just because something is possible and potentially hazardous, doesn’t mean it is worth worrying about. There are some risks we cannot do anything at all about, such as gamma ray bursts that result from the explosions of galaxies. But if we learn we can do something, the priorities change. For instance, with sanitation, vaccines and antibiotics, pestilence went from an act of God to bad public health.
1. Nuclear war
While only two nuclear weapons have been used in war so far – at Hiroshima and Nagasaki in World War II – and nuclear stockpiles are down from the peak they reached in the Cold War, it is a mistake to think that nuclear war is impossible. In fact, it might not be improbable.
The Cuban Missile crisis was very close to turning nuclear. If we assume one such event every 69 years and a one in three chance that it might go all the way to nuclear war, the chance of such a catastrophe is about one in 200 per year.
Worse still, the Cuban Missile crisis was only the most well-known case. The history of Soviet-US nuclear deterrence is full of close calls and dangerous mistakes. The actual probability has changed depending on international tensions, but it seems implausible that the chances would be much lower than one in 1000 per year.
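The back-of-the-envelope arithmetic behind these estimates is simple enough to make explicit. The one crisis per 69 years and the one-in-three escalation chance are the assumptions stated above, not measured quantities:

```python
# The article's back-of-the-envelope nuclear risk arithmetic, made explicit.
crisis_rate = 1 / 69      # Cuban-missile-style close calls per year (assumed)
p_escalation = 1 / 3      # assumed chance such a crisis goes fully nuclear
p_war_per_year = crisis_rate * p_escalation
print(f"annual probability: 1 in {1 / p_war_per_year:.0f}")  # 1 in 207

# Small annual risks compound over time:
p_century = 1 - (1 - p_war_per_year) ** 100
print(f"chance of at least one nuclear war in 100 years: {p_century:.0%}")
```

Compounding is what makes a "one in 200 per year" figure alarming: sustained over a century, it implies a roughly two-in-five chance of catastrophe.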
A full-scale nuclear war between major powers would kill hundreds of millions of people directly or through the near aftermath – an unimaginable disaster. But that is not enough to make it an existential risk.
Similarly the hazards of fallout are often exaggerated – potentially deadly locally, but globally a relatively limited problem. Cobalt bombs were proposed as a hypothetical doomsday weapon that would kill everybody with fallout, but are in practice hard and expensive to build. And they are physically just barely possible.
The real threat is nuclear winter – that is, soot lofted into the stratosphere causing a multi-year cooling and drying of the world. Modern climate simulations show that it could preclude agriculture across much of the world for years. If this scenario occurs billions would starve, leaving only scattered survivors that might be picked off by other threats such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot the outcomes may be very different, and we currently have no good ways of estimating this.
2. Bioengineered pandemic
Natural pandemics have killed more people than wars. However, natural pandemics are unlikely to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not favor parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease as it spread in Europe.
Unfortunately we can now make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far more lethal and able to infect vaccinated individuals. Recent work on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted.
Right now the risk of somebody deliberately releasing something devastating is low. But as biotechnology gets better and cheaper, more groups will be able to make diseases worse.
Most work on bioweapons has been done by governments looking for something controllable, because wiping out humanity is not militarily useful. But there are always some people who might want to do things because they can. Others have higher purposes. For instance, the Aum Shinrikyo cult tried to hasten the apocalypse using bioweapons, besides its more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on.
The number of fatalities from bioweapon attacks and epidemic outbreaks looks like it follows a power-law distribution – most attacks have few victims, but a few kill many. Given current numbers, the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar program). And as technology gets more powerful, nastier pathogens will become easier to design.
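The character of a power-law casualty distribution can be shown with simulated data. The Pareto tail exponent and minimum event size below are arbitrary illustrative choices, not fits to real bioweapon or epidemic data:

```python
import numpy as np

# Simulated casualty counts drawn from a Pareto (power-law) distribution:
# most events are small, but totals are dominated by a handful of huge ones.
rng = np.random.default_rng(1)
alpha = 1.5                                          # assumed tail exponent
fatalities = (rng.pareto(alpha, 100_000) + 1) * 10   # minimum event: 10 deaths

print(f"median event: {np.median(fatalities):,.0f} deaths")
print(f"mean event:   {fatalities.mean():,.0f} deaths")
worst_share = np.sort(fatalities)[-1000:].sum() / fatalities.sum()
print(f"worst 1% of events cause {worst_share:.0%} of all deaths")
```

The gap between the median and the mean, and the outsized share of the worst 1% of events, is exactly why "given current numbers" tells us little about tail risk.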
3. Superintelligence

Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.
The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly achieve disastrous ends. There is no reason to think that intelligence itself will make something behave nice and morally. In fact, it is possible to prove that certain types of superintelligent systems would not obey moral rules even if they were true.
Even more worrying is that in trying to explain things to an artificial intelligence we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do that we might not understand all the implications of what we wish for.
Software-based intelligence may very quickly go from below human to frighteningly powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance.
It has been proposed that an “intelligence explosion” is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly set.
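The idea of an intelligence explosion can be caricatured with a toy recursion in which each improvement cycle depends on current capability. This is purely illustrative and models no real AI system; it only shows how the shape of the feedback determines whether growth plateaus, compounds, or diverges:

```python
# A deliberately crude toy of recursive self-improvement: each cycle,
# capability c grows by an amount that itself depends on c.

def trajectory(c0, feedback, steps=20):
    cs = [c0]
    for _ in range(steps):
        cs.append(cs[-1] + feedback(cs[-1]))
    return cs

linear    = trajectory(1.0, lambda c: 0.1)          # steady external progress
compound  = trajectory(1.0, lambda c: 0.1 * c)      # exponential growth
explosive = trajectory(1.0, lambda c: 0.1 * c * c)  # superlinear feedback

print(f"after 20 cycles: linear {linear[-1]:.1f}, "
      f"compound {compound[-1]:.1f}, explosive {explosive[-1]:.1e}")
```

With superlinear feedback the trajectory runs away within a couple of dozen cycles, while the other two regimes stay tame: the entire debate is over which regime, if any, self-improving software would occupy.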
The unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a whole is improving itself at the fastest possible rate. But there are good reasons to think that some technologies may speed things up far faster than current societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one most likely to either be massive or just a mirage.
This is a surprisingly under-researched area. Even in the 1950s and 60s, when people were extremely confident that superintelligence could be achieved “within a generation”, they did not look much into safety issues. Maybe they did not take their predictions seriously, but more likely they just saw it as a remote future problem.
4. Nanotechnology
Nanotechnology is the control over matter with atomic or molecular precision. That is in itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the potential for abuses that are hard to defend against.
The big problem is not the infamous “grey goo” of self-replicating nanomachines eating everything. That would require clever design for this very purpose. It is tough to make a machine replicate: biology is much better at it by default. Maybe some maniac would eventually succeed, but there is plenty of lower-hanging fruit on the destructive technology tree.
The most obvious risk is that atomically precise manufacturing looks ideal for rapid, cheap production of things like weapons. In a world where any government could “print” large amounts of autonomous or semi-autonomous weapons (including the facilities to make even more), arms races could become very fast – and hence unstable, since launching a first strike before the enemy gains too large an advantage might be tempting.
Weapons can also be small, precision things: a “smart poison” that acts like a nerve gas but seeks out victims, or ubiquitous “gnatbot” surveillance systems for keeping populations obedient, seem entirely possible. There might also be ways of putting nuclear proliferation and climate engineering into the hands of anybody who wants them.
We cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be deeply disruptive precisely because it can give us whatever we wish for.
5. Unknown unknowns
The most unsettling possibility is that there is something out there that is very deadly, and we have no clue about it.
The silence in the sky might be evidence for this. Is the absence of aliens due to life or intelligence being extremely rare, or because intelligent life tends to get wiped out? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that didn’t help.
Whatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We do not know about any such threats (none of the others on this list work like this), but they might exist.
Note that just because something is unknown it doesn’t mean we cannot reason about it. In a remarkable paper, Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per year, based on the relative age of the Earth.
You might wonder why climate change and meteor impacts have been left off this list. Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (though it could compound other threats if our defences against it break down). Meteors could certainly wipe us out, but we would have to be very unlucky: the average mammalian species survives for about a million years, so the background natural extinction rate is roughly one in a million per year. That is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our continued existence.
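The back-of-the-envelope arithmetic behind that background rate can be sketched in a few lines of Python. This is only an illustration under a simplifying assumption the article does not spell out: that the extinction hazard is constant from year to year.

```python
# Sketch of the background extinction arithmetic above, assuming a
# constant annual hazard rate (a simplifying assumption for
# illustration, not a model from the article itself).
annual_rate = 1.0 / 1_000_000  # ~one-in-a-million chance per year

# Under a constant hazard, the expected species lifetime is 1 / rate:
expected_lifetime_years = 1 / annual_rate  # 1,000,000 years

# Probability of a species surviving a one-million-year horizon:
survival_1m_years = (1 - annual_rate) ** 1_000_000

print(expected_lifetime_years)        # 1000000.0
print(round(survival_1m_years, 3))    # ≈ 0.368, i.e. about 1/e
```

In other words, a one-in-a-million annual risk is exactly the hazard level at which the expected lifetime comes out to the observed million years, which is how such a survival time translates into a per-year rate.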
The availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years we need to correct that.
Anders Sandberg works for the Future of Humanity Institute at the University of Oxford.
This article was originally published on The Conversation. Read the original article.