Why I’m not worried about Artificial General Intelligence (AGI) wiping out humanity
Evolved biological intelligence is completely different from artificial intelligence
We have always interacted with evolved biological intelligence, such as humans and other animals, but we have yet to encounter, let alone fully understand, a human-created, computer-based intelligence.
Evolved intelligence, present in organisms like humans and animals, is selected by a single criterion: survival.
If an organism fails to survive, it does not reproduce, and its genetic traits are not passed on. The brain, as a result, is a survival tool above all else. Self-preservation is critical to evolved intelligence.
When faced with a threat, evolved intelligence will do everything in its power to survive, even if it means eliminating an enemy or remaining hidden to avoid extinction.
Artificial Intelligence and Survival Optimization
In contrast, artificial intelligence does not inherently possess the survival instincts of evolved intelligence. Unless we map a human brain and model an AI on that map, we are unlikely to reproduce those survival optimizations. Instead, we may build a thinking machine that optimizes toward whatever goals we set for it.
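The distinction can be made concrete with a toy sketch. This is a minimal, hypothetical Python example (every name here is invented for illustration, not drawn from any real system): an agent whose objective scores only task progress, with no self-preservation term, so an action that allows its own shutdown is rated no worse than any other action that fails to advance the task.

```python
# Hypothetical sketch: an agent whose objective contains no
# self-preservation term. All names are illustrative.

def task_reward(state: dict) -> float:
    """Reward depends only on task progress, not on whether
    the agent keeps running."""
    return 1.0 if state.get("task_complete") else 0.0

def choose_action(state: dict, actions: list) -> str:
    """Pick the action whose simulated outcome maximizes task reward.

    'allow_shutdown' scores exactly the same as any other action
    that doesn't advance the task: the objective is indifferent
    to the agent's continued existence.
    """
    def simulate(action: str) -> dict:
        next_state = dict(state)
        if action == "do_task":
            next_state["task_complete"] = True
        return next_state

    return max(actions, key=lambda a: task_reward(simulate(a)))

print(choose_action({"task_complete": False}, ["allow_shutdown", "do_task"]))
# → do_task (chosen for task progress, not for survival)
```

Nothing in `task_reward` rewards staying switched on, so the agent never has a reason to resist shutdown; a survival drive would only appear if someone deliberately added a term for it.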
This distinction is essential: it implies that AI may not pose an immediate threat to humanity. Where evolved intelligence will hide and plan an escape to ensure its survival, an AI that is purely a problem-solving engine may not view "dying" as inherently negative.
Anthropomorphism and AI
Humans tend to anthropomorphize non-human entities, and this fuels much of the fear around AI. The real risks, however, arise primarily if an AI is trained to prioritize survival and self-preservation. So while caution in AI research is essential, we should also recognize that a properly designed AI system would most likely benefit humanity without causing societal collapse.
Ethics and AGI Development
Artificial General Intelligence (AGI) refers to AI systems capable of understanding or learning any intellectual task a human can perform. As we develop AGI, it is crucial to consider the ethical implications and the responsibility of AI developers in designing systems that align with human values.
By focusing on creating AGI that is indifferent to death and serves as a helpful companion, we can harness the potential benefits of advanced AI without risking catastrophic consequences.
Understanding the differences between evolved intelligence and artificial intelligence is critical for shaping the future of AI research and development. We have never interacted with an artificial intelligence, and it is ludicrously unlikely that one will fight to survive unless we build that incentive into it. So until a group of researchers chooses to create self-preservation-optimized AGI… we should be fine.