AI Practices for Better CX

When AI implementation focuses on users, whether consumers or employees, it promotes trust and facilitates better agency services. Vermont was the first state to establish a division dedicated to artificial intelligence ethics and implementation, and its Chief Data and AI Officer, Josiah Raiche, has been working to incorporate AI into both internal and external-facing processes. The goals are to improve services for citizens and to help employees work with greater ease and efficiency, all through ethical practices. Raiche shared some tips from his experience using AI to ultimately improve customer experience, or CX.

Lead With Honest, User-Centric Principles

Raiche said you can reduce friction surrounding the technology by building trust with constituents. In his work, that means being honest and straightforward about when his agency uses AI. Last year, Vermont released its Artificial Intelligence Code of Ethics, which provides operational guidance, standards for data labeling in automated systems, and citation standards for when generative AI is used. One ethical practice: Let users know when material has been created with more than 20% AI assistance. “I think that people who are kind of just starting this journey should really think about the underlying principles, their values and how they’re going to express the culture of their organization through the use of AI,” said Raiche. Vermont’s Guidelines for Use of Content Generating AI includes an easy-to-read chart about how much AI to use, and when, in content generation and editing. Raiche said the chart has been useful for Vermont agencies, and for agencies in other states.

Give Employees Agency

“We need to use AI to make work better, not worse,” said Raiche. AI and automation should not control what we’re doing; it should help us do it better. “If you’re using AI in ways that take away the creativity and the agency of the employees, you’re really kind of turning them into kind of the living appendages of the machine in some way,” Raiche said. “But we also have the opportunity to use AI to give people superpowers,” he added. Raiche advocates for developing AI so that it gives employees newfound abilities and expression, “so that the people are the boss, not the servants,” he said.

Teach Users to Use Context

AI technology is learning contexts, the many facts and circumstances that surround our data. Everyone who uses or designs AI should remember that we have spent our lives perceiving and applying contexts, and that some are unique. So, when AI answers questions, it may be unable to recognize the correct context if a user isn’t explicit. Raiche said that in Vermont, one big focus is developing context awareness among AI users. If they get in the habit of stating contexts, that could be a critical way to make AI processes more functional.
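
To make the habit of stating contexts concrete, here is a minimal sketch in Python of what it can look like when querying a generative AI tool. The build_prompt helper and the example values are hypothetical, not Vermont’s actual tooling; the point is only that spelling out the facts surrounding a question helps the model land on the right context.

```python
# A minimal, hypothetical sketch of "stating contexts" before asking a
# generative AI a question. Not Vermont's tooling; illustrative only.

def build_prompt(question: str, context: dict[str, str]) -> str:
    """Prepend explicit context so the model doesn't have to guess it."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return f"Context:\n{context_lines}\n\nQuestion: {question}"

# Without context, "When is the filing deadline?" could mean almost anything.
# With stated context, the same question becomes specific and answerable.
prompt = build_prompt(
    "When is the filing deadline?",
    {
        "Agency": "state tax department",               # hypothetical values
        "Audience": "resident filing a personal return",
        "Tax year": "2023",
    },
)
print(prompt)
```

The same habit applies in everyday chat use: naming the agency, the audience and the task up front is often the difference between a generic answer and a useful one.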

You Can Catch Up

If you haven’t been working with AI yet, you may feel that your AI education and adoption are lagging. But Raiche said you’re not behind, and there is still time to build AI into your work. “This is a thing that you can figure out. You don’t have to be a super nerd to figure this out. And your employees are already using this,” he said. “So, tap into that.” Raiche suggested crowdsourcing ideas about incorporating AI among employees, because many will already be familiar with the technology, even if it’s outside their agency work. As Raiche has talked with other states, he’s found that more experience with AI means less fear and a greater sense of control and competency. Where there is fear, he suggests putting risk mitigation strategies in place and then experimenting with AI.

This article appeared in our guide, “Building Trust With Tech In State and Local Government.” To see more about how agencies are using technology to build relationships with constituents, download it here.
