Trust In AI?

Can We Trust AI When We Can’t Even Trust Ourselves?

To build trusted AI and trusted technology, we need to admit there is a trust gap with the people behind AI.

What Is Trust?

As I was leaving for a three-month family leave, a Director assigned to temporarily support my team came to my office to wish me the best.

“You can trust me,” the Director said as I walked out.

That was my plan. I had no reason not to.

“You know how there are people who blame the last guy for anything that goes wrong?” he said. “Well, I’m not going to do that.”

“Oh,” I said. “OK.”

Who says that? I thought. Only the guy who is about to screw you over.

If trust means reliance on others to meet the expectations they set, the Director was explicitly telling me to trust him, but implicitly, he was warning me not to. Or at least dropping an obvious hint. When short-sighted self-interest drove him to seize control of my high-performing team permanently, I was disappointed but not surprised. Did he break my trust? Not really. With that parting comment, the Director had already lost my trust. But I never imagined a colleague would be allowed to go that far. Both the individual and the system had failed.

When I returned, I relied on my confidence to recover and build a new team and a new set of projects. The Director acted like nothing unusual had happened. Like his lying didn’t matter. But losing my team and projects affected how I showed up. At work and outside of work.

Trust & AI

When we want to build trust through our businesses, products, teams, and communities, we must ask one fundamental question: Are we able to be honest with ourselves? Variations of this ultimate trust question could include:

  • Are we able to be honest and transparent (in a relevant way) with others?
  • Are our existing structures and systems trustworthy?
  • Do we want to take on the responsibility of trust?
  • Do we want to win trust now, while being willing to break it later when others don’t have a choice or when we can get away with it?

Let’s explore that last bullet with follow-up questions. Do we want to win our users’ trust so they use a “free” service, while also littering the back end with complexities and loopholes that allow us to sell or use their data? Do we feel they have enough clues to discern what is happening? Like, what part of “cost of free” did they not understand? Do we do it because everyone else is doing it? Do we do it to survive? Or do we have options to engage with integrity? Are we looking to build long-term partnerships and loyalty? Can we find a way to do the right thing for us and the right thing for our users?

These questions are especially relevant when machine learning and AI (Artificial Intelligence) are used to establish trust-based connections between recruiters and job seekers, between content and consumers, between caregivers and those in their care, parsing out relevance, inferences, and recommendations. These systems and algorithms are perpetually optimized based on the metrics we use to reward or penalize them, the data we give them access to, and the autonomy they are granted in decision-making. They become critical when the stakes are high — think law enforcement or surveillance that encroaches on our autonomy, privacy, and intentions.

Trust involves a leap of faith. When we ask if we can trust AI, we are really asking: Can we trust the people who are vouching for the AI: designing, paying for, making, and using the systems? It is ultimately, and almost always, about us.

What Does Trust Mean in Artificial Intelligence?

In February 2020, the EU announced its intention to define and ultimately regulate trusted and transparent AI, prompting Google’s CEO to back AI regulation as “too important not to” while nudging regulators to take “a sensible approach,” and the White House to release its own letter to the EU advising it not to kill innovation. In June 2020, IBM, Amazon, and Microsoft joined San Francisco and Seattle in banning facial recognition. The definition of “sensible” has evolved as trust in our systems — human and machine — around policing and facial recognition comes under increased scrutiny. Even before the recent protests heightened awareness of racism in America, heads of AI and data science departments in China, Europe, and Asia, leading researchers, and public interest groups had been asking a common question: How do we build trust in AI? And can there be consensus on an approach and a solution, both to that question and to our need for trusted technology?

When industry organizations and institutions like IEEE and EU forums use keywords like “Trusted AI,” “Trust in AI,” and “Trustworthy AI,” they are talking about how to ensure, inject, and build trust, ethics, explainability, accountability, responsibility, reliability, and transparency into our intelligent systems. They ask for transparency: How closely do the systems meet the expectations that were set? Are we clear on the expectations, the data, and the methodology used?
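
One lightweight practice for making those expectations inspectable is to ship a structured record of a model’s intended use, data, and evaluation alongside the model itself, in the spirit of model cards. Below is a minimal, purely illustrative sketch; the field names and values are hypothetical, not a standard schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """A minimal, illustrative record of what a model is supposed to do,
    what data it was trained on, and how it was evaluated."""
    name: str
    intended_use: str        # the expectation that was set
    out_of_scope_uses: list  # where the model should not be trusted
    training_data: str       # what data it had access to
    evaluation: dict         # how closely it meets expectations
    known_limitations: list = field(default_factory=list)

# Hypothetical example values, for illustration only.
card = ModelCard(
    name="resume-screener-v2",
    intended_use="Rank applications for recruiter review, never auto-reject.",
    out_of_scope_uses=["final hiring decisions", "salary setting"],
    training_data="2018-2020 application outcomes, anonymized.",
    evaluation={"accuracy": 0.87, "selection_rate_gap_by_gender": 0.04},
    known_limitations=["underrepresents career-gap candidates"],
)

# Publishing the card next to the model makes the expectations auditable.
print(json.dumps(asdict(card), indent=2))
```

The point is not the format; it is that the expectations become something a user, auditor, or regulator can actually check the system against.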

This is a tricky concept for many reasons, but mainly because AI is a technology used by people, businesses, products, and governments. So the trust and confidence are ultimately placed in the people, businesses, products, or governments who have their own assessments of the reliability, truth, ability, and strength of their AI-powered solutions. It is often not one person or one system, but a series of interconnected systems and people. And any definition, methodology, or system can be used for different purposes depending on our hopes and fears. It can be changed, improved, or misused. The industry is finally banning facial recognition because it can no longer deny that we can’t trust the people who are going to use it.

What Will It Take to Build Trusted AI?

Trust in AI involves at least two key sets of dependencies.

Dependency Set 1: Trust the decision makers.

This includes leaders and entities — the institutions, countries, and companies building these solutions. What we know about them matters. Who has a seat at the table matters. Their goals and motivations matter. It all comes down to three key questions:

  1. How much do we trust the decision makers? And those influencing them?
  2. Are they visible? Can we figure out who is involved?
  3. Do they make it easy for us to understand where AI is being used and for what purpose (leveraging which data sets)? Which loops back to the first question.

Trust in AI depends on the leader’s and the entity’s track record with other decisions. Do they tend to pick trustworthy partners, vendors, and solutions, or even know how to? Do they drive accountability? Bring in diversity? For example, when the current pandemic hit, consider: who did we trust?

Dependency Set 2: Build trust into our AI systems.

This second set of trust dependencies covers the technical practices and tools that give decision makers the ability to build reliability, transparency, explainability, and trust into our AI systems. This is where most of the debates are happening. I have participated in technical forums at the Linux Foundation and IEEE, in machine learning performance benchmarking efforts, and in many industry and university debates. Almost every forum begins with principled statements and works toward practical strategies and realistic considerations of time and cost:

  1. Alignment on definition: What do we mean by explainability? Where is it applicable?
  2. Technical feasibility: What is possible?
  3. Business & Operational consideration: What is practical and sustainable?
  4. Risk & Reward: What are the consequences if we fail or don’t act?
  5. Return on Investment: How much trouble/cost are we willing to bear to try to prevent potential consequences?
  6. Motivation & Accountability: How likely are we to be found out or held accountable? What can and will be regulated?

These are not easy questions to answer. Since AI is entering almost every system and every industry, relevance becomes important. For example, transparency can bring much-needed accountability in some cases (criminal justice). It can also be used to overwhelm and confuse if too much information is shared in difficult-to-understand formats (think liability waivers). Or it can be entirely inappropriate, as in private and sensitive scenarios.
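
To make “relevant transparency” concrete: explainability tooling can report which inputs a model actually leans on, so a reviewer can judge whether that reliance is appropriate for the context. Here is a minimal sketch using synthetic data and scikit-learn’s permutation importance; the feature names are hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a high-stakes screening dataset.
X, y = make_classification(n_samples=2000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["experience", "test_score", "zip_code", "referral"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:12s} {importance:+.3f}")
```

If a proxy like zip_code turned out to dominate, that is exactly the kind of finding that should surface before anyone is asked to trust the system in a hiring or policing context.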

Open source, technical, policy, and public interest communities around the world have been trying to drive consensus, while the companies and institutions building, selling, and using AI systems continue to make their own decisions. Regulations have always trailed innovation, and they come with their own set of accountability challenges.

So, what do we do?

Change the Normal

An open-ended question that continues to challenge us is how we will build self-regulation and motivation into AI when businesses are measured on short-term gains and market share — time and money.

Motivation and accountability are needed for responsible AI, trusted AI, and ethical AI. We need a common framework, or at least a common set of values — a set of definitions, principles, best practices, tools, checklists, and systems that can be automated and built into products to become trusted technology.

All the while, we know our second set of considerations is almost always influenced, usurped, manipulated, or ignored by the first set: the people using AI, their goals, and their metrics. For real change, we need business cases for long-term impact that can be understood and developed.

This is where we have a potential glimmer of hope. If we design amazingly robust, trustworthy technology and systems, could it be harder for people to misuse or abuse them? If the shortcomings, biases, and insights into how decisions are made are clearly visible, and we are able to anticipate outcomes by running different simulations and scenarios, could we correct the inequities and unfairness of our past much faster? Could the relevant transparency, built into our systems and processes, give us an opportunity to create checks and balances as well as a level playing field, and steer our institutions and leaders towards greater trustworthiness and reliability by proxy . . . rather than waiting for them to be shamed or found out?
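
One way to make that anticipation concrete is to score the same model under scenarios that differ only in an attribute that should not matter, and compare outcomes across groups. The sketch below uses made-up data and a simple selection-rate comparison; the 0.8 cutoff echoes the common four-fifths rule of thumb, not a legal standard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scored applicants with a sensitive attribute (0 or 1).
scores = rng.uniform(size=5000)
group = rng.integers(0, 2, size=5000)
# Simulate a biased scenario: group 1 is systematically scored lower.
scores = np.where(group == 1, scores * 0.9, scores)

def selection_rates(scores, group, threshold=0.7):
    """Share of each group selected at a given score threshold."""
    selected = scores >= threshold
    return {int(g): float(selected[group == g].mean()) for g in np.unique(group)}

rates = selection_rates(scores, group)
ratio = min(rates.values()) / max(rates.values())
print(f"selection rates: {rates}, ratio: {ratio:.2f}")

# Flag the scenario for human review if the disparity looks large.
if ratio < 0.8:
    print("Disparity exceeds the rule-of-thumb threshold: review before deployment.")
```

Even a crude check like this makes a system’s shortcomings visible enough to argue about before deployment.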

Yes, in many ways, this is naively optimistic. The same tools could end up giving those with power a stamp of approval without anything really changing. They could become a more sophisticated vehicle for confusion or subterfuge. A cover-up. A way to move people’s attention to AI, to the technology, rather than to those who are using it to wield power. But this is where community becomes critical.

We humans may be slow in getting there, but when enough of us become determined to solve a problem, something we never thought possible becomes possible. Even normal.

The current pandemic and the public outcry against racism have shown us that once leaders and institutions take a stand, once the public takes a stand, the people with good ideas and solutions, who have been doing the thinking and the work in the background, can step into visibility. Excuses to keep the status quo appear shallow and stale. We can collectively get to somewhere better than before. But we have to be honest with ourselves and each other for it to last.

Can we do that?

Most companies and institutions have ethical guidelines, best practices, and now AI principles. But we don’t really expect them to live up to them. We know the difference between PR, spin, and reality.

What if it were normal to align our actions with the value systems we advertise? As we are starting to do with our biases and racism right now, and need to keep doing even after our collective attention moves elsewhere. We can start by listening to the people who have been thinking about this challenge in a complex, multidisciplinary context for a long time, and who are likely already working at our companies. Understand what has and hasn’t worked. Be honest about where we are, individually and collectively. And shift from our different starting places, from our here and our now. As we know from life, design, and engineering, everything is ultimately a navigation problem. Sometimes it’s a simple step that gets us going in the right direction. Get beyond talking to doing. For trust and AI, could we start with the simple step of integrating trust into AI design instead of treating it as optional? And doing it now, instead of waiting for regulation later?

After all, do we wait for regulations to innovate?

This article is part of the Trust in AI series for The Responsible Innovation Project, exploring the impact of innovation and AI on the way we live, learn, and work.