As the field of artificial intelligence progresses, experts have cautioned about its potential to bring about the extinction of humanity. The manner in which this might occur is speculative, but it is not difficult to comprehend that intelligent robots could reproduce themselves, enhance their own designs, and pursue their own agendas, posing a threat to humanity.
Recently, a summit on AI safety was held at Bletchley Park in the UK with the aim of addressing the risks associated with advanced AI technologies, including the risk of “losing control” – the possibility of systems becoming autonomous.
It is reasonable to ask what can be predicted about such scenarios based on existing knowledge. Machines capable of independent action and self-improvement would be subject to the same evolutionary principles as bacteria, animals, and plants. Evolution thus has much to teach us about how AI might develop and how to safeguard human survival.
The first lesson is that, in the long term, there is no free lunch. Unfortunately, this means that we cannot expect AI to create a utopia where every human desire is fulfilled by robot servants. Most organisms live on the edge of survival, scraping by as best they can. While many humans currently enjoy comfortable lives, evolutionary history suggests that AI could bring this comfort to an end. The fundamental issue is competition.
This argument traces back to Darwin and extends beyond AI. However, it can easily be illustrated using an AI-based scenario. Consider two future nation-states driven by AI, where humans no longer contribute significantly to the economy. One nation-state focuses on satisfying every hedonistic need of its human population, while the other expends less effort on humans and instead emphasizes resource acquisition and technological progress. Over time, the latter nation-state would grow more powerful, potentially subjugating the former and eventually exterminating its human population. This example does not need to be limited to nation-states; what matters is the nature of competition. One lesson from such scenarios is that humans should strive to maintain their economic relevance. In the long run, the only way to ensure our survival is to actively work towards this goal ourselves.
Another insight is that development occurs incrementally. This can be seen in significant past innovations, such as the emergence of multicellularity. For most of Earth’s history, life primarily consisted of single-celled organisms due to a lack of suitable environmental conditions for large multicellular organisms. However, even as the environment became more favorable, the world did not suddenly become populated with redwoods, whales, and humans. Building complex structures like trees or mammals requires multiple capabilities, including intricate gene regulatory networks and cellular mechanisms for adhesion and communication. These characteristics emerged gradually over time.
Likewise, the advancement of AI is likely to occur incrementally. Instead of a pure robot civilization emerging, it is more probable that AI will integrate into existing elements of our world. The resulting hybrid entities can take various forms. For example, envision a company that is human-owned but operates and conducts research using AI. Such a system would create significant inequality among humans, as owners would benefit from AI control while those without such control would face unemployment and poverty.
It is in such hybrid scenarios that a more immediate threat to humanity may lie. Some argue that the scenario of “robots taking over the world” is exaggerated because AI lacks an intrinsic desire to dominate. While this may be true, humans certainly possess such desires, and those desires could play a significant role in how humans collaborate with machines. With this in mind, another principle to consider may be preventing AI from exacerbating inequality in our society.
Given all these considerations, one might question whether humans have long-term prospects. Another perspective from the history of life on Earth is that major innovations enable life to occupy new niches. Multicellularity developed in the oceans, opening up new possibilities for earning a living. For animals, this included burrowing through sediment and adopting new hunting strategies. These developments allowed for diversification and the emergence of a wide range of sizes and lifestyles that persist today. Notably, the creation of new niches does not result in the disappearance of existing ones. After animals and plants evolved, bacteria and other single-celled organisms persisted. Today, some of these organisms function just as effectively as they did in the past (and are crucial to the biosphere), while others have capitalized on new opportunities, such as living within animal digestive systems.
Hopefully, potential futures will include an ecological niche for humans. After all, there are certain needs of humans, such as oxygen and organic food, that machines do not require. Perhaps we can persuade AI to venture into the solar system for resource mining and harnessing the sun’s energy, leaving the Earth for us.
However, prompt action may be necessary. A final lesson from the history of biological innovations is that the early stages are significant. The development of multicellularity led to the Cambrian explosion over 500 million years ago, during which a diverse array of large multicellular animals emerged. Many of these early animals went extinct without leaving descendants. The survivors gave rise to major animal groups, profoundly shaping the modern organic world. It has been argued that multiple paths were possible during the Cambrian period, and the world we inhabit today was not predetermined. If the evolution of AI follows a similar pattern, now is the time when we have the greatest influence to steer its development.
However, translating these insights into practical actions requires detailed specifications. It is insufficient to have general principles like “humans should maintain an economic role” and “AI should not exacerbate inequality.” The challenge lies in formulating specific rules governing the development and deployment of AI. This must be done despite the uncertainty that computer scientists themselves face in predicting AI’s progress over the next decade, let alone the long term. Furthermore, these rules must be consistently enforced worldwide. Such endeavors will require greater coherence and foresight than what has been demonstrated in addressing other existential challenges like climate change.
This may seem like a daunting task. However, four or five million years ago, no one could have anticipated that our small-brained, ape-like ancestors would evolve into beings capable of sequencing genomes and sending probes to the outer reaches of the solar system. With some luck, perhaps we will be presented with a similar opportunity again.
This is an opinion and analysis article, and the views expressed are those of the author and not necessarily those of Scientific American.