On the Road to Responsible AI

AI has come a long way in the past few years, and agencies have learned a lot about best practices. Where once they were issuing blanket bans on GenAI tools such as ChatGPT, they’re now finding safe ways to adapt the technology to their needs and reaping the benefits.

The importance of using AI responsibly can’t be overstated. To quote Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, “Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”

The good news is there’s a growing body of guidance on how to meet these goals, starting with the EO and OMB’s implementation memo, M-24-10. Even better, a consensus is emerging in both the public and private sectors on what responsible AI requires.

What Is Responsible AI?

We hear a lot about “trustworthy” and “ethical” AI. For example, the National Institute of Standards and Technology defines trustworthy AI as: “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy enhanced, and fair with harmful bias managed.”

But Cansu Canca, Director of Responsible AI Practice at the Northeastern University Institute for Experiential AI, says the term “responsible AI” is more accurate. “In a way, ‘responsible AI’ is a shorthand for responsible development and use of AI,” Canca said. “‘Trustworthy AI’ seems to direct the attention to the end goal of creating trust in the user. [That] circumvents the hard work of integrating ethics into the development and deployment of AI systems.”

Here are three key requirements for achieving responsible AI.

Control the Data

People continue to revel in — and repeat — stories about GenAI’s factual mistakes and absurd outputs. When an AI query yields advice to eat a rock every day, it raises skepticism about the whole technology.

Agencies have found that improving accuracy depends on limiting AI’s frame of reference to accurate information relevant to the task at hand. This means moving away from publicly accessible LLMs and bringing the applications in-house, using “specialized” AI trained on an agency’s own data. Vendors have expanded and simplified their offerings for this; it’s not something agencies must develop themselves. But agencies will have to step up their data governance to make sure AI draws on information that’s error-free, current, and appropriate to the task.
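One common pattern for keeping a model’s frame of reference under agency control is to pull answers only from a vetted document store and instruct the model to answer from those excerpts alone. The sketch below is illustrative, not a specific vendor product; the APPROVED_DOCS store, the scoring method, and the call_model() placeholder are all assumptions made for the example.

```python
# Illustrative sketch: grounding an assistant in approved agency content.
# APPROVED_DOCS and call_model() are hypothetical placeholders, not a
# specific product or API.

APPROVED_DOCS = {
    "benefits-faq": "Applicants must submit Form 100 within 30 days of notice...",
    "records-policy": "Records older than seven years are archived offsite...",
}

def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank approved documents by simple keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(
        docs.values(),
        key=lambda text: len(terms & set(text.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query: str) -> str:
    """Constrain the model to vetted excerpts instead of the open web."""
    excerpts = "\n---\n".join(retrieve(query, APPROVED_DOCS))
    return (
        "Answer using ONLY the approved excerpts below. "
        "If the answer is not in them, say you don't know.\n\n"
        f"Excerpts:\n{excerpts}\n\nQuestion: {query}"
    )

# response = call_model(build_prompt("How long do applicants have to file?"))
```

However the tooling is packaged, the point is the same: the agency decides which documents the model can see, and that is exactly where data governance comes in.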

Dispel the Myth of the ‘Black Box’

The terms “transparency” and “explainability” may have seemed mysterious a few years ago; now that AI is more familiar, they’re easier to understand. It turns out you don’t need to know how to write an algorithm to explain what data the AI accesses or what you’ve asked it to do. AI isn’t really a “black box” whose outputs are inscrutable.

But agencies need to be proactive in explaining the limitations of AI-generated results, just as they would any other analytics. The most basic level of transparency is one that might be easy to overlook: telling users where AI is being used in products and services.

It also requires monitoring outputs to see where they may fall short, such as by reflecting bias in the data, and being up-front about the issues. The more you know about what goes into the AI, the more likely you will be to head off problems — and to mitigate them when they occur.
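Output monitoring can start simple. Here is a minimal sketch, under illustrative assumptions about how AI-assisted decisions are logged, of a check that compares outcome rates across groups and flags large gaps for human review; the field names and the threshold are hypothetical.

```python
from collections import defaultdict

# Hypothetical log of AI-assisted decisions; field names are illustrative.
decision_log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]

def approval_rates(log: list[dict]) -> dict[str, float]:
    """Compare outcome rates across groups to spot bias reflected in outputs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for record in log:
        counts[record["group"]][0] += int(record["approved"])
        counts[record["group"]][1] += 1
    return {group: approved / total for group, (approved, total) in counts.items()}

rates = approval_rates(decision_log)
if max(rates.values()) - min(rates.values()) > 0.2:  # illustrative threshold
    print(f"Flag for human review - approval rates by group: {rates}")
```

A check like this doesn’t prove or disprove bias on its own, but it tells you where to look and what to disclose.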

Don’t Forget Privacy and Security

Government agencies collect a lot of data about the people they serve — much of it confidential or personally identifiable information (PII). As they apply AI to this data, they need to install guardrails that keep that information from leaking out. The technology’s ability to search and link data is useful in streamlining services, but an AI model trained on data that includes PII or other sensitive information can inadvertently reveal it in answer to a query.

Increasingly, AI tools can be trained to recognize PII and confidential information and block it from specific applications. But you can’t rely on AI to monitor itself. Before implementing any public-facing AI, agencies must be sure it doesn’t tell more than it should.
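As one layer of such a guardrail, here is a minimal sketch of a screening pass that redacts common PII patterns from a model’s output before it reaches a user. The patterns and the redact_pii function are assumptions for illustration; real deployments typically combine pattern matching with trained PII detectors and human review.

```python
import re

# Illustrative patterns only; a production guardrail would pair these with a
# trained PII detector and agency-specific rules.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace suspected PII with labeled placeholders and report what was caught."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label}]", text)
    return text, findings

# Screen every model response before it is returned to the user.
safe_text, findings = redact_pii("Reach the applicant at 555-867-5309.")
if findings:
    print(f"Blocked PII types: {findings}")
print(safe_text)
```

The same screening can run on inbound prompts, too, so sensitive data never reaches the model in the first place.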

‘Responsible’ Is More Than Avoiding Risk

Although agencies often emphasize the “responsible” in “responsible use,” part of that responsibility is to use AI. Its ability to summarize, synthesize and organize vast amounts of data has demonstrated great potential for enhancing cybersecurity, chatbots, call centers and data analysis, among other uses. With these capabilities, it would be irresponsible not to take AI out for a spin.

This article appeared in our guide, “AI: Where We Are, Where We’re Going.” To see more about how agencies are adapting to AI, download it here.

Original art by Calista Lam and Andrew Blake for GovLoop
