Superintelligence: Paths, Dangers, Strategies by Nick Bostrom.

Nick holds an MA in Philosophy and Physics from Stockholm University, an MSc in Computational Neuroscience from King’s College London, and a PhD in Philosophy from the London School of Economics.
He remains a Professor at Oxford University, where he serves as the founding Director of the Future of Humanity Institute. He previously lectured at Yale University and was a British Academy Postdoctoral Fellow at Oxford.
Nick is recognized as one of the most respected philosophers working on AI safety and ethics. As a result, “Superintelligence” has been noted as a critical read for anyone seeking to understand the development of AI systems.
We should not be surprised to see a respected philosopher ask: what happens when machines surpass humans in general intelligence? With so many questions surrounding this issue, he lays a foundation for understanding the future of humanity and of intelligent life. I found this book extremely fascinating.
AI will be more intelligent than humans. Then what?
The human brain has capabilities that other animals’ brains lack, and this is what allows humans to be the dominant species. When AI does in fact surpass human general intelligence, this new superintelligence may become not only powerful, but possibly beyond human control.
However, Nick is clear that humans have one advantage: we get to make the first move. It may be possible to construct an early artificial intelligence and engineer its initial conditions so that humanity survives.
For related reading, please consider another of the most respected books on minds and machines, Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter, which won the 1980 Pulitzer Prize for General Nonfiction.
As should be expected, Nick also addresses the ethical questions raised by the development of a superintelligence and the responsibilities placed upon those who develop these end-stage technologies. In effect, he advocates proactive measures at both national and international levels to mitigate the risks associated with AI development.
In conclusion, although the book was written ten years ago, Nick’s exploration of the difficulty of controlling superintelligent systems remains timely. He argues that our traditional methods of oversight may be inadequate once an AI surpasses human cognitive capabilities. And now, in mid-2024, we find agentic AI gaining ground.