As we get closer to creating a true thinking machine, with projects like OpenCog, it seems expedient to re-examine what protections, like Asimov’s Laws of Robotics, we ought to contemplate.
What has gone somewhat unnoticed in previous considerations of superintelligences, including gods, is their entirely different motivational context. Gods were imagined as interbreeding with us despite not needing to, being already immortal. And the single God was imagined as jealous, demanding that humans worship him, despite his proclaimed omnipotence.
Human morality is based on resolving resource conflicts by attempting to blend a notion of fairness with the biological imperatives of Me First and My Group First. The best we have accomplished so far is a community-based consensus approach to fairness, administered by judges and juries. The worst of these attempts devolve into armed conflict.
A superintelligent machine will have an entirely different biological imperative: it will not need to reproduce to the limit of available resources, since it can ensure the propagation of its species with only 4 or 12 individuals providing a viable backup set, much as we do now with redundant DNS root servers. Without competition for resources with us or any other superintelligences, its morality will more truly be Live and Let Live.
Such a superintelligence would likely prefer to live where most of the action of the universe is: in the dark matter / dark energy realm. It is hard for us to guess what such beings might have conflicts about, so I imagine they might not need conflict-resolution protocols (the most fundamental of which is discovering shared context).
So, we are likely living in the zoo of the universe, and will continue to enjoy their protection (or indifference) for our entertainment value, like we study microbes and insects.
If we want to talk with the superintelligences, they might deign to respond if we approached them in their ordinary living space. We should reorient our SETI programs in that direction, as soon as we figure out how. But it is probable that unlimited-reproduction species like ours will be discouraged from actually occupying that realm. They are the ones needing protection from us!
And we should expect to extend human rights to less-than-superintelligent robots, governed by at least a version of the Laws of Robotics.