How Do You Know You Can Trust AI?

One of the big concerns around AI is what’s called explainability.

Whether we are talking about a system that screens applicants for housing loans, recommends a course of medical treatment or generates a legal document, we want to know how that system arrives at a decision or generates output. Otherwise, we are not likely to trust it.

Experts at the National Institute of Standards and Technology (NIST) have identified four principles of explainable AI:

Explanation. The system should supply evidence, support or reasoning behind the outcome and/or the processes involved. For example, in the case of a health application, there should be no doubt about why it recommended a particular course of treatment.

Meaningful. For an explanation to carry any weight, the intended audience must understand it based on their knowledge, experience and interests. With the health application, an explanation that is meaningful to a medical professional might mystify a patient.

Accuracy. A meaningful explanation about the outcome or processes won’t be helpful if it’s inaccurate. For example, you wouldn’t want the explanation for selecting a given course of treatment to be oversimplified to the point of being wrong.

Knowledge limits. Most AI applications qualify as narrow AI — that is, they were designed for a specific set of use cases in a particular field of knowledge. Just as you wouldn’t ask a dentist about a heart problem, you should not push an application beyond the domain it was built for. As obvious as that sounds, “this practice safeguards answers so that a judgment is not provided when it may be inappropriate to do so,” NIST scientists wrote.
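To make two of these principles a bit more concrete, here is a minimal, hypothetical sketch of how a loan-screening service (like the housing-loan example above) might return an explanation with every decision and abstain when it is operating outside its knowledge limits. The model, data, feature names and confidence threshold are all invented for illustration; this is not a NIST reference implementation or a real screening system.

```python
# Hypothetical sketch: a screening service that explains each decision and
# abstains (refers to a human) when its confidence falls outside the range
# it was built for. All data and thresholds below are illustrative only.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is [age, income, debt_ratio]; label 1 = approve.
X_train = np.array([
    [25, 40_000, 0.50],
    [38, 85_000, 0.20],
    [52, 60_000, 0.35],
    [29, 30_000, 0.65],
    [45, 120_000, 0.15],
    [33, 55_000, 0.45],
])
y_train = np.array([0, 1, 1, 0, 1, 0])

FEATURES = ["age", "income", "debt_ratio"]
CONFIDENCE_FLOOR = 0.75  # below this, the system declines to decide

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def decide(applicant):
    """Return a decision plus the evidence behind it, or abstain."""
    proba = model.predict_proba([applicant])[0]
    confidence = proba.max()

    # Knowledge limits: refuse to answer when confidence is too low.
    if confidence < CONFIDENCE_FLOOR:
        return {
            "decision": "refer to human reviewer",
            "reason": f"model confidence {confidence:.2f} is below "
                      f"{CONFIDENCE_FLOOR}; outside reliable operating range",
        }

    # Explanation: report which features pushed the score the most.
    contributions = model.coef_[0] * np.array(applicant)
    ranked = sorted(zip(FEATURES, contributions),
                    key=lambda t: abs(t[1]), reverse=True)
    return {
        "decision": "approve" if proba[1] > proba[0] else "deny",
        "confidence": round(float(confidence), 2),
        "top_factors": [name for name, _ in ranked[:2]],
    }

print(decide([40, 90_000, 0.18]))  # confident case: decision plus explanation
print(decide([18, 5_000, 0.95]))   # edge case: may trigger an abstention
```

The point of the sketch is not the particular model but the contract: every answer comes with evidence a reviewer can inspect, and the system says “I don’t know” rather than guessing beyond its training.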

Explainability is part of a larger concern around the trustworthiness of AI. According to NIST, explainability is just one of several attributes that make an AI system trustworthy.

How Could AI Change Things for the Better?

The advantages of AI vary widely depending on the use case, but experts usually point to a common set of general benefits.

For a more detailed discussion of the benefits, check out this recent article in Forbes Advisor.

How Could AI Change Things for the Worse?

Imagine someone inventing a new high-powered car and putting it on the road before developing an adequate braking system.

That’s essentially what’s happening with AI, according to a recent open letter written by the Future of Life Institute and signed by more than 31,000 people, including some high-profile technology executives.

For example, AI systems could write their own code, cutting humans (and human judgment) out of the loop, according to the group.

AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” the letter states.

Stanford University’s One Hundred Year Study on Artificial Intelligence highlights several scenarios that experts and observers worry most about.

Another more immediate threat is the loss of jobs. The same capabilities that promise to augment some people’s work could replace the work of others.

Chatbots “cannot yet duplicate the work of lawyers, accountants or doctors,” noted a recent article in The New York Times. “But they could replace paralegals, personal assistants and translators.”


This article appears in our new guide “AI: A Crash Course.” To read more about how AI can (and will) change your work, download it here.
