Driving Genocide: AI, Autonomous Cars, and Human Rights

Concerns that artificial intelligence (AI) could create a new class of weapons that kill without human guidance have led governments, industry titans, frontline tech workers, and human rights activists alike to demand that such weapons be brought to heel or banned altogether. But as important as it is to control emerging technologies that could take the human out of the loop of life-and-death decisions, attention should also be given to the civilian industries being transformed by AI, which touch everyday life in areas such as health care and transportation, as well as sensitive government functions such as policing and justice. Left unchecked, these emerging capabilities and the data they generate could make the humans in the loop of government persecution more efficient and turn the companies offering these products and services into partners in oppression.

Consider Myanmar and transportation. The nation’s economic isolation eased after political changes saw opposition leader Aung San Suu Kyi go from prisoner to state counsellor. Long before the United Nations called for military commanders in Myanmar to be prosecuted for genocide in 2018, warnings of the governing regime’s brutality against the Rohingya, an ethnic Muslim minority, had surfaced as early as 2012. Unfortunately, these warnings were ignored by the United Nations and foreign governments focused on national political reforms, as well as by international investors eager to tuck into the country’s largely untapped, fifty-million-strong market. If today’s automakers sold the Burmese government a fleet of vehicles during such a honeymoon period and those vehicles were later used in crimes against the Rohingya, the companies would likely escape direct blame, since their ties to their goods end at the point of sale.

The relationship between product and producer will change, however, as Internet-connected, AI-powered products and services become widespread in the years to come. In the case of autonomous vehicles, instead of stepping out of the loop once sales are booked, companies will be permanently on the loop, supervising and affecting vehicle performance and use. They will also be amassing, owning, and monetizing vast quantities of data about networks of vehicles and their passengers. In fact, this new way of doing business is leading some companies to consider eliminating traditional sales in favor of retaining ownership and offering subscription-based transportation services. Companies offering such services in countries with checkered human rights records will no longer be able to claim innocence when their vehicles are used in atrocities. Firms conducting business as usual during times of repression will find themselves providing a suite of services, from basic logistics to advanced surveillance and targeting capabilities, that will turbocharge state-sponsored thuggery. Avoiding this future requires action not only by governments, multilateral bodies, and the international human rights community; it also demands that companies deploying AI measure, manage, and mitigate the risk that new AI-powered products, services, and business models will undermine human rights. Here are a few steps these groups could take to address the issue:

First, there should be zero tolerance for corporate ignorance about human rights risk. Companies should adopt “Know Your Customer” methods, commonplace in banking, to carry out human rights due diligence on the markets they serve. Tapping respected human rights league tables would be a good start, but given the lifelong, hand-in-glove relationship between companies and their AI-powered goods, risk profiles should reflect changing conditions on the ground. Put simply, companies must have the same capacity for gathering and consuming intelligence on human rights conditions as they have for information about competitors, the state of the economy, and investor sentiment.
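As a purely illustrative sketch of what such a living risk profile might look like, the Python fragment below blends a static baseline drawn from a published league table with fresher field signals. The country name, scores, weights, and signal sources are all hypothetical assumptions, not a description of any real company’s due diligence system.

```python
from dataclasses import dataclass, field

@dataclass
class MarketRiskProfile:
    """Hypothetical KYC-style risk profile for one market.

    baseline_score: static score from a published human rights
    league table (0 = best, 100 = worst); illustrative only.
    recent_signals: rolling severity scores (0-100) from field
    reports, UN findings, press monitoring, and the like.
    """
    country: str
    baseline_score: float
    recent_signals: list[float] = field(default_factory=list)

    def current_score(self) -> float:
        # Weight fresh intelligence more heavily than the static
        # baseline so the profile tracks conditions on the ground.
        # The 0.4/0.6 split is an assumption for illustration.
        if not self.recent_signals:
            return self.baseline_score
        live = sum(self.recent_signals) / len(self.recent_signals)
        return 0.4 * self.baseline_score + 0.6 * live

# Example: a market whose baseline looked acceptable but whose
# recent field reports have deteriorated sharply.
profile = MarketRiskProfile("Exampleland", baseline_score=35.0)
profile.recent_signals += [70.0, 85.0, 90.0]
print(round(profile.current_score(), 1))  # 63.0 -- flags the market for review
```

The point of the sketch is simply that the score a company acts on should move when conditions move, rather than being frozen at the moment a market is entered.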

Second, companies should have internal policies and protocols to manage how their products and services are used. This includes implementing a chain of communication that keeps senior leaders informed about the company’s exposure to complicity in human rights crimes and a chain of command, including the board and CEO, that facilitates action. Contracts, data protection measures, and checks on local agents, whose economic self-interest may blind them to risks, should be tailored to each market’s risk profile and allow the firm to stay on the right side of responsible business practice. Lastly, each company’s board must make avoiding exposure to human rights risk a brand and reputation priority for the business while holding the CEO accountable for keeping the company out of such situations.

Finally, steps should be taken to ensure the marketplace pushes industry to avoid human rights dilemmas from the get-go. Influential investors should send clear signals that human rights matter by updating the environmental, social, and governance (ESG) principles guiding where they put their money to reflect new technology-related concerns. Companies that put their reputations in jeopardy by being inattentive to human rights risks should feel the financial wrath of their investors. Industry should put its muscle behind robust data protection rules, foreign investment standards, and codes of conduct that both support human rights generally and make it more difficult for abusive governments to press companies into service. Industry should also work with human rights and technical experts on how and when to use its on-the-loop position to scale back support for its goods, disable them, or make entire markets go dark, as sketched below. While industry cannot stop human rights atrocities from occurring, companies can slow the terror by denying those striking the blows additional muscle.
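To make the on-the-loop idea concrete, the sketch below imagines how a connected-fleet operator might tie service levels to a market’s human rights risk tier. It is a minimal sketch under stated assumptions: the tier names, the service lists, and the policy itself are invented for illustration, and any real policy would be set by a board working with human rights and technical experts, not hard-coded by engineers.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical human rights risk tiers for a market,
    refreshed as conditions on the ground change."""
    LOW = 1        # no credible reports of abuse
    ELEVATED = 2   # credible warnings, e.g. from UN monitors or NGOs
    SEVERE = 3     # ongoing, documented state-sponsored atrocities

# Illustrative policy mapping each tier to the connected services
# that remain available to fleets operating in that market.
SERVICE_POLICY = {
    RiskTier.LOW:      {"navigation", "telematics", "bulk_data_export"},
    RiskTier.ELEVATED: {"navigation", "telematics"},  # halt bulk data sharing
    RiskTier.SEVERE:   set(),                         # the market goes dark
}

def allowed_services(tier: RiskTier) -> set[str]:
    """Return the services a fleet in this market may still use."""
    return SERVICE_POLICY[tier]

if __name__ == "__main__":
    # Example: a market escalates from ELEVATED to SEVERE.
    for tier in (RiskTier.ELEVATED, RiskTier.SEVERE):
        services = sorted(allowed_services(tier))
        print(tier.name, "->", services or "all services disabled")
```

The design choice worth noting is graduated degradation: surveillance-adjacent capabilities such as bulk data export are withdrawn first, while fully disabling vehicles is reserved for the gravest conditions, since a blanket shutoff also harms ordinary passengers.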

Trooper Sanders is a Rockefeller Foundation Fellow. The views expressed here are the author’s alone.