Getting used to AI can be an uncertain journey.
Will it raise the quality and speed of your output? Will it expand the scope of your job, or put it at risk?
A 2022 joint study from the White House and the European Commission recognizes the benefits of AI, but also the need for workforce adjustment.
“The impact of technological progress, including AI, on work is characterized by competing forces of automation and augmentation of worker tasks,” the report states. “The focus of researchers — as well as managers, entrepreneurs, and policymakers — should therefore be not only on AI’s automation or augmentation potential but also on job redesign.”
Krista Kinnard, Director of Innovation and Engineering at the Department of Labor (DOL) and former Director of the General Services Administration’s AI Center of Excellence, helps DOL reimagine ways to accomplish tasks. She looks at private-sector technology to find possible solutions to department challenges, and AI is a big part of that.
During the past three years, Kinnard has developed an AI use case inventory, highlighting 18 ways AI is used in the department.
Her team also manages AI platforms, monitoring who uses them and what data flows through them. They work with agency partners to assess AI tools for bias mitigation, fairness, transparency, accountability and privacy.
Let’s Put Humans First
People face AI with both excitement and fear.
“I think both are warranted,” said Kinnard.
To take the edge off, she prioritizes human-centered design and building an AI learning community. At DOL, “no one is looking at AI to replace workers or to take full tasks off their plate,” she said.
Employees are very involved in testing new AI applications to see how they fit the day-to-day workflow. Kinnard and her colleagues document issues that come up during the testing to help troubleshoot.
“I think it’s just iteration, really, that gets us to an educated user base,” she said.
Kinnard advises choosing AI that empowers people, with input from the employees who will work with it.
Think About Your Data
Most of us have been blown away by ChatGPT and other generative AI technologies — and also by some of the mistakes they make, said Kinnard.
To avoid big problems, “you have to remember that an AI model is only the product of the data that it is trained on,” she said. “So, if there’s any bias or untruth in that data, it is going to be reflected in the performance of your AI tool.”
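To make that concrete, here is a minimal, hypothetical sketch in Python (the scenario and data are invented, and nothing here reflects an actual DOL system): a toy "model" that memorizes outcome rates from skewed historical records and then faithfully replays the bias encoded in them.

```python
# A minimal sketch of bias flowing from training data into predictions.
# All data here is invented: past decisions in which applicants from
# ZIP code 20001 were mostly denied regardless of income.
from collections import defaultdict

# Hypothetical historical decisions: (zip_code, income_band, approved)
history = [
    ("20001", "high", False), ("20001", "high", False),
    ("20001", "low",  False), ("20001", "low",  False),
    ("20002", "high", True),  ("20002", "high", True),
    ("20002", "low",  True),  ("20002", "low",  False),
]

# "Training": record the approval history per ZIP code -- exactly the
# kind of pattern a statistical model would latch onto.
approvals = defaultdict(list)
for zip_code, _, approved in history:
    approvals[zip_code].append(approved)

def predict(zip_code: str) -> bool:
    """Approve if most past applicants from this ZIP were approved."""
    past = approvals.get(zip_code, [])
    return sum(past) > len(past) / 2

# A high-income applicant from 20001 is denied purely because the
# training data encoded a biased pattern for that ZIP code.
print(predict("20001"))  # False -- the historical bias, replayed
print(predict("20002"))  # True
```

Nothing in the prediction logic is unfair on its face; the skew lives entirely in the training data, which is exactly Kinnard's point.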
Is My AI a Good Person?
AI doesn’t come with agency-centered values. Models must be trained in how to make decisions and in what data means in context.
For managers, Kinnard said, it’s critical to understand the ethical issues that may complicate AI use.
Hiring algorithms are an example: AI can pick up keywords that align with a position. But what happens when a candidate has atypical work experience, or comes from an industry or region that uses different terminology? The AI may pass over them, even though those same qualities would make the candidate a standout to a human reviewer. Or what if the AI slots a candidate into a certain job based mainly on demographics?
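As a concrete illustration, here is a short, hypothetical Python sketch of that keyword-screening failure mode (the job keywords and resume text are invented, and real screening tools are considerably more sophisticated):

```python
# A hypothetical keyword screen for a single job posting.
JOB_KEYWORDS = {"data analyst", "sql", "dashboards"}

def keyword_score(resume_text: str) -> int:
    """Count how many required keywords appear verbatim in the resume."""
    text = resume_text.lower()
    return sum(1 for kw in JOB_KEYWORDS if kw in text)

# Candidate A uses the expected vocabulary.
resume_a = "Data analyst experienced in SQL and executive dashboards."

# Candidate B has equivalent experience described in different terms --
# the "atypical work experience" case a human reviewer might prize.
resume_b = ("Built statistical reports from relational databases and "
            "designed management scorecards for a state agency.")

print(keyword_score(resume_a))  # 3 -- passes the screen
print(keyword_score(resume_b))  # 0 -- screened out despite matching skills
```

Candidate B describes the same skills in different vocabulary and scores zero: precisely the standout-to-a-human case that a literal keyword match never sees.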
Kinnard said that from a risk perspective, all uses of AI should be examined for what values they support and what bias they could perpetuate.
Bot on the Trail
As employees and managers incorporate AI into their work, evaluation should be continuous.
First and foremost, AI is a tool.
“As exciting as an AI opportunity may be, our first assessment should always be ‘What are we trying to do? What is the problem we are trying to solve? How will this tool solve that problem?’” said Kinnard.
Although new AI users are embarking on a journey without a map, seeing the connections between these tools and the problems they help solve can help chart a clear path forward.
This article appears in our new guide, “AI: A Crash Course.” To read more about how AI can (and will) change your work, download the guide.