
As governments grapple with how to govern AI, their national strategies should prioritize the inclusion of human rights
Since 2017, over 30 countries have created national strategies for artificial intelligence. These strategies aim to provide a roadmap for how AI technologies will be developed and used in each country. They lay out plans for research investment and competitive advantage in AI, and in some cases address the problems AI may pose. In a recently published report, Stanford’s Global Digital Policy Incubator and Global Partners Digital found that very few of these strategies engage deeply with the risks that AI technologies pose to human rights.
While there are various ways to define AI, national AI strategies typically cover a broad range of technologies in which an algorithm performs a task that would otherwise need to be done by a human. Many of these technologies, such as facial recognition, algorithmic predictions in the criminal justice system, or automated curation of information on social media platforms, have the potential to jeopardize human rights, including the rights to privacy, information, free assembly and association, and free expression. If governed carefully, AI may also create opportunities, such as increased access to education and healthcare. Given these risks, however, it is critical that countries strategize about how to protect human rights in the context of AI in the same way that they strategize about how to ensure their economic competitiveness.
In our report, we found that while the majority of national strategies mention human rights, there is very little real engagement with the specific risks posed by AI or with how countries plan to mitigate them. Even fewer countries provide clear plans for addressing these challenges. This stands in stark contrast to the detailed plans these strategies offer on other topics, such as research investment or education.
Our report recommends several key ways that governments can better integrate human rights into their national AI strategies:
- Governments can include specific and explicit discussion of human rights obligations, and of steps toward the protection of human rights, throughout all dimensions of the strategy; whether the strategy outlines investment in research and development or geopolitical competitiveness, human rights implications should be taken into consideration.
- Governments can outline specific benchmarks and incentives in their strategies to encourage and track rights-respecting practice on the part of both government and the private sector.
- Strategies should establish grievance and remediation processes for the cases in which human rights are inevitably violated or infringed upon, and these processes should be sensitive to the nature of AI technology. This may mean creating new mechanisms or adjusting existing ones.
- Strategies should recognize the regional and international implications of AI policy, as technology often transcends borders.
- Governments should include human rights experts, regional and domestic stakeholders, and a broad range of civil society organizations when drafting these documents, as many at-risk, vulnerable, and marginalized communities may be particularly affected by AI and its applications and regulation.
The importance of embedding strategies for mitigating human rights risks into foundational documents like national AI strategies has only been underscored by the Covid-19 pandemic. We have seen increasing use of algorithmic and AI-based solutions in attempts to combat the pandemic. Social media platforms have expanded their use of automated content moderation tools so that human content moderators can be sent home during the pandemic. Facial recognition and other tools have been proposed, and in some cases used, to identify potentially ill people or to track people and ensure they follow quarantine procedures. Algorithms are being proposed or used for contact tracing and risk assessment. Hospitals have been testing automated tools to triage patients.
All of this is happening in a regulatory environment around AI that still largely looks like the wild west. Few countries have comprehensive regulations governing the use of AI; instead, AI-specific regulations vary by jurisdiction, even within countries. In many cases, AI is governed by laws written for other types of technologies, or it exists in a grey zone where its legality is unclear or undefined. Many countries’ national strategies have prepared them to compete economically in the context of AI but have not provided processes or guidance for assessing the risks these new technologies pose. As these tools are adopted more broadly, and on short time frames in the context of the pandemic, it becomes even clearer that countries must create strategies for evaluating and addressing human rights concerns in the context of AI. When high-level governance strategies robustly commit to minimizing the risks that AI technologies pose to human rights, they give governments the tools needed to protect the rights of their citizens from potentially dangerous technology, even in times of crisis.