Technology’s Hippocratic Oath
The Hippocratic Oath has served as the bedrock of physicians’ values since at least 400 BCE. At its center is the idea that doctors should “do no harm.” Physicians vary in how strictly they take the oath, but almost all regard violating it as a grave matter.
Medical ethics were severely challenged in the 20th century, and the oath now ends with a powerful idea that positions the work of medicine in doctors’ lives and communities: “I will remember that I remain a member of society, with special obligations to all my fellow human beings, those sound of mind and body as well as the infirm.”
In the twentieth year of the 21st century, it is long past time for those of us who develop technology to take a similar oath to guide our work. It is especially important for technologists to consider the ethics of our work because the tools and applications we develop can enable civil and human rights abuses at a scale previously unimaginable.
Engineers, scientists, and developers must reverse the order of thinking that currently drives our efforts. Rather than starting with the question “can I do this?” we need to begin any work with “should I do this?” Technologists should always ask, in classic pre-mortem fashion, “how could this go wrong?”
While technologists should continue to work with urgency, we need to undergird this work with values that make it less likely our results will harm humanity. We are already living with the consequences of development unmoored from sound guiding principles, from the wide-scale surveillance of citizens in authoritarian regimes (and even in London) to the delivery of infinitely large audiences to purveyors of violent hate speech.
In Detroit, inaccurate facial recognition technology led to the wrongful arrest of Robert Williams. Anyone involved in technology research and development should view this with an alarm that necessarily leads to introspection: a flawed application was deployed in public even though research had already shown it would generate systematically biased results.
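That kind of systematic bias is measurable before deployment. The sketch below is a minimal, hypothetical audit; the data, group labels, and match outcomes are invented for illustration, not drawn from any real system. It compares false match rates across demographic groups, the kind of disparity researchers have documented in commercial face-matching systems.

```python
# Hypothetical audit: compare false match rates of a face-matching
# system across demographic groups. All data here is illustrative.
from collections import defaultdict

# Each record: (group, model_said_match, actually_same_person)
results = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", True, True),  ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True),  ("group_b", False, False),
]

false_matches = defaultdict(int)
non_matching_pairs = defaultdict(int)

for group, predicted_match, same_person in results:
    if not same_person:  # only non-matching pairs can produce false matches
        non_matching_pairs[group] += 1
        if predicted_match:
            false_matches[group] += 1

for group in sorted(non_matching_pairs):
    rate = false_matches[group] / non_matching_pairs[group]
    print(f"{group}: false match rate = {rate:.0%}")
```

A gap between groups in output like this is exactly the signal that should stop a public rollout, regardless of the system’s aggregate accuracy.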
It’s true that several major U.S. technology companies, including IBM, Amazon, and Microsoft, announced that they would halt or restrict their facial recognition offerings. But when the real-world results of a technology can be so damaging, research and development must be grounded in a field-wide ethos of consideration for societal implications, rather than in the expectation that a few white hats will call off their own research in repudiation of the rest of the industry.
Examples of technology-based ethical miscalculations like this raise large and critical questions for technologists, but such self-examination has precedent. In the aftermath of the Second World War, scientists who had worked on atomic weaponry initiated just such a line of inquiry. Some, like J. Robert Oppenheimer, came to regret the consequences of their invention, which could be misused to the detriment of people everywhere.
For engineers, developers, and scientists who see their goal as making the world a better place, the question of an invention’s second- and third-order effects must be the first and last one asked before any Pandora’s box is opened.
Many different oaths have been proposed over the years, from the Archimedean Oath to scientists’ oaths drafted by thinkers like Karl Popper and Joseph Rotblat. A technologists’ oath should rest on a few key principles. The idea that technology is a neutral tool should be interrogated routinely, because algorithms readily reinforce their creators’ biases. Scalable technology should be assessed for all the harmful ways it can be used. Similarly, missions like “making the world more open and connected,” which have produced a world more open to violent hate speech and more connected to those who profit from disinformation, need to be thought through to their logical ends, because users will take them there. Central to the oath should be respect for human and civil rights.
Whatever its specific content, adoption of an oath would begin the process of norm formation within the technology and engineering communities. Such norms set boundaries around what a community should and shouldn’t work on, as in the opprobrium that met He Jiankui, the scientist who undertook human gene editing.
The highest bar for unintended consequences should apply to technology mature enough to be released publicly, as in the deliberation behind OpenAI’s decision not to release the full model of its text generator GPT-2. Researchers who work on systems that can be abused at scale or that produce unequal effects, such as predictive analytics for loan worthiness, should understand and weigh the impact of their work. The most inchoate ideas would receive less scrutiny, but some level of values-based decision-making should be applied by anyone with the knowledge and skill to cause harm on a massive scale.
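As one concrete illustration of weighing unequal effects, the sketch below applies the common “four-fifths rule” heuristic to a hypothetical loan-approval model: if one group’s approval rate falls below 80 percent of the highest group’s, the result is flagged for review. The groups and rates are invented for illustration, and a real fairness assessment would go far beyond this single check.

```python
# Hypothetical disparate-impact check for a loan-approval model,
# using the "four-fifths rule" heuristic. All numbers are illustrative.
approval_rates = {
    "group_a": 0.62,  # fraction of group_a applicants approved
    "group_b": 0.41,  # fraction of group_b applicants approved
}

FOUR_FIFTHS = 0.8
highest = max(approval_rates.values())

for group, rate in approval_rates.items():
    ratio = rate / highest
    status = "OK" if ratio >= FOUR_FIFTHS else "POTENTIAL DISPARATE IMPACT"
    print(f"{group}: approval {rate:.0%}, ratio to highest {ratio:.2f} -> {status}")
```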
Skeptics of such an approach question whether an oath would chill or muzzle the most innovative ideas, especially given the worldwide competition around advanced AI for defense purposes. But the formation of these normative values must be a global effort, one that could eventually prove as successful as the codified ban on the development of biological weapons.
In the end, the oath is intended for those developing technologies that, without the benefit of hindsight, appear positive or benign when in fact they may not be. Pausing to consider the negative ramifications of our research is the first of many steps toward globally responsible technological development.