Bring on the technology bans!

In mid-July 2019, Oakland, California, became the third U.S. city to ban municipal departments from using facial recognition technology. Meanwhile, Congress began hearings on whether and how to regulate it on a national level. In a surprising moment of bipartisan consensus, the only thing lawmakers fought about was how extensive restrictions ought to be.

This response to a powerful, potentially invasive technology is a sign of how the public and policymakers might respond to future technological developments – especially those using artificial intelligence. Not only does facial recognition allow Facebook to automate people-tagging in photos, but it also supercharges law enforcement’s ability to track down crime suspects. Ethical questions abound. As Georgetown’s Center on Privacy and Technology put it, facial recognition could lead to “a world where, once you set foot outside, the government can track your every move.” And it’s just the beginning.

Cameras are already watching many American streets. AP Photo/Matt Rourke

On the horizon is a flood of digital innovations that could be at least as powerful, wide-ranging and controversial: “deepfake” videos showing people doing things they never did, the “internet of things” constantly monitoring private homes, manipulative virtual reality, self-driving cars overwhelming communities and more.

I’m a researcher studying digital technology’s societal impacts, and it’s my job to stay informed about upcoming technologies and to project future outcomes. But with more and more innovation, there is less and less time to reflect on the consequences. Many of my colleagues feel the same.

To tame this onrushing tide, society needs dams and dikes. Just as has begun to happen with facial recognition, it’s time to consider legal bans and moratoriums on other emerging technologies. These need not be permanent or absolute, but innovation is not an unmitigated good. The more powerful a technology is, the more care it requires to operate safely.

Little urgency

There’s not a pressing need for most new digital technologies. Some innovations, of course, are almost completely positive: anesthesia, electric light, radio, vaccines. But today’s society often celebrates innovation for its own sake, even when the benefits are questionable – and more and more, the benefits are indeed questionable.

Is it really worth a crowded, buzzing sky filled with drones to get one-hour delivery of consumer goods, instead of delivery in 24 hours, or even two days? Is virtual reality so great that children should, effectively, grow up with their eyes glued to video screens? When governments can conduct hard-to-trace assassinations by drone, is anyone truly safe? Scanning lists of possible future technologies can incite more fear than hope.

These types of innovations repeatedly fail to provide overall improvements in truly meaningful ways, like how deeply people love each other, how compassionately people care, how well society supports the less privileged, or how wisely humans steward the planet. If anything, technology appears to amplify humans’ moral weaknesses by coddling people with consumer comforts and echo chambers. The last half-century has seen a golden age of digital innovation, yet rates of poverty have stagnated, inequality has soared and sustainability seems farther out of reach.

Most of the technological advances in the works today won’t address those problems; they’ll tackle smaller annoyances that there’s simply no rush to relieve.

Plastic bottles sounded like a great idea, but they’re clogging oceans and beaches. AP Photo/Matt Dunham

Harms nearly certain, but unclear

New technologies always have unintended consequences – often negative – and innovators always underestimate how bad they’ll be. Pesticides have caused public health scourges. Plastic bottles have polluted the oceans. Smartphones are contributing to a teenage mental health crisis.

Consider what an AI system might do if directed to do something obvious – like maximize profits, using all the information and tools at its disposal. It might hold embarrassing personal information for ransom to coerce users to purchase goods, or extort criminal actions from people with darker secrets.

Nothing has yet stopped online stores’ algorithms from lying to increase sales, nor curbed Facebook’s actual ability to manipulate users’ moods. Tech companies routinely treat their customers as experimental guinea pigs, and are already applying artificial intelligence systems for a range of purposes.

If these are just the known effects of tech companies’ efforts and innovations, imagine what unintended consequences might lurk. The premise of the popular game “Universal Paperclips” is that an AI focused on optimizing a business ends up destroying the known universe. Science fiction is rapidly becoming science fact.

Difficult to go backwards

Once unleashed, digital technologies are particularly difficult genies to put back in the bottle. In this respect, they differ from other advanced technologies. Soon after World War II, activists began to call for bans on nuclear arms, culminating in the Nuclear Non-Proliferation Treaty, which entered into force in 1970. The treaty has been effective in keeping a technology now nearly 75 years old limited to just eight or nine countries – an impressive feat, especially across the jagged history of global politics.

U.S. and Soviet officials sign the Nuclear Non-Proliferation Treaty in 1968. US State Department

Nuclear weapons, however, require significant resources to design, build, test and deploy. By contrast, digital technologies are easy to share, making them even harder to control. Advanced hacking tools have been stolen and shared online: Techniques developed by the U.S. National Security Agency have been used in global cyberattacks by China, Russia and North Korea. Those stolen tools are now available to anyone with an internet connection.

An imbalance of power

Technology companies pushing their advances have money, influence and time on their side. The millions of lobbying dollars they spend are pocket change when compared to their multi-billion-dollar profits, and they can keep the funding going indefinitely, waiting out news cycles and activist energy.

In my view, uncertainty about how new technologies will affect society overall means that skeptical forces deserve more support. Bans and moratoriums would mean that rich, powerful entities would have to seek legal and societal permission before unleashing their potential monsters onto the market. That doesn’t seem like too much to ask.

There are many reasons to continue to build new technologies – to remain globally competitive, to advance human knowledge and to prepare for potential future crises. Technology has its benefits. But slowing the pace of its advance would give society more time to think through the consequences and debate which aspects of new technologies are desirable, and which should be outlawed.


Kentaro Toyama, W. K. Kellogg Professor of Community Information, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.