Artificial intelligence (AI) is the new buzzword in government, driving conversations about IT modernization and workforce reskilling/upskilling. This emerging technology, which is difficult to even define, has immense potential for transforming the way government operates. But where do agencies stand with adopting this technology, and where do they want to end up?
A report from the Professional Services Council (PSC) Foundation defined AI as “the use of computers to mimic human cognitive functions” and stated that “at its core, AI is about automating and augmenting tasks that would normally require some degree of decision-making or intellectual capacity.” Instead of having employees spend energy on tasks that could be automated, agencies could hand those tasks to AI and apply their employees’ time and strengths where they advance the mission.
AI is already being used in government. The Defense Department (DoD), for example, uses AI to perform predictive maintenance on vehicles and to plan force deployments in times of crisis. The department also plans to build an AI capability that accelerates the security clearance process by checking applications against multiple data sources.
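To make the predictive-maintenance idea concrete, here is a minimal sketch of how such a model could be trained, assuming a historical log of vehicle sensor readings labeled with whether a failure followed. The file name, column names, and the choice of scikit-learn’s RandomForestClassifier are all illustrative assumptions, not a description of DoD’s actual system.

```python
# Minimal predictive-maintenance sketch (illustrative only).
# Assumes a hypothetical CSV of historical vehicle sensor readings,
# each labeled with whether the vehicle failed within 30 days.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical columns: engine_temp, vibration, mileage, failed_within_30d
df = pd.read_csv("vehicle_sensor_history.csv")
X = df[["engine_temp", "vibration", "mileage"]]
y = df["failed_within_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a classifier to flag vehicles likely to fail soon,
# so maintenance can be scheduled before a breakdown.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In practice, a model like this would be rerun as new sensor readings arrive, flagging any vehicle whose predicted failure risk crosses a maintenance threshold.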
Agencies are also looking to AI to augment existing functions and further their missions. For example, the Agriculture Department (USDA) is considering chatbots to supplement the human workforce in its call centers. The National Institute of Standards and Technology (NIST) is exploring how AI can aid research across different facets of science and technology.
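As a toy illustration of the call-center idea, the sketch below routes a caller’s question to a canned answer by keyword matching and falls back to a human agent otherwise. Every intent and response here is made up, and a production bot would rely on far more capable natural-language models.

```python
# Toy call-center chatbot: route questions to canned answers by keyword.
# Intents and responses are hypothetical; production systems use NLU models.
INTENTS = {
    "loan":  "Farm loan applications are handled at your local service center.",
    "crop":  "Crop insurance questions are routed to a claims specialist.",
    "hours": "Our offices are open 8 a.m. to 4:30 p.m., Monday through Friday.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keyword, response in INTENTS.items():
        if keyword in q:
            return response
    # Fall back to a human agent when no intent matches.
    return "Let me connect you with a representative."

print(answer("What are your office hours?"))
print(answer("How do I apply for a farm loan?"))
```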
Even though interest in AI is high, adoption is still in its initial stages. Agencies are identifying areas that could be improved with AI and deploying pilot solutions to test how the technology affects operations.
One constraint on implementing AI is a lack of confidence in how the technology will hold up in high-stakes situations. Budgetary limits on development, modernization, and enhancement spending also demand well-defined, quantifiable returns on investment (ROIs). These restrictions force agencies to explore AI selectively.
Agencies can’t explore AI on novelty alone; they have to identify a pressing need or problem within the organization, then work out how AI fits into a solution. AI can be a difficult technology to explain, and it carries more risk than conventional IT. When an agency’s culture is risk-averse, leaders face the challenge of championing and reinforcing change.
“Many of the federal AI practitioners we interviewed agreed that having a champion for AI at a high level is perhaps the most critical ingredient to success,” the report reads.
Communication is the key to transforming culture. By encouraging openness about the problems, solutions, progress, goals, and concerns surrounding a technology like AI, agency leaders can foster adoption.
Reskilling and upskilling go hand in hand with AI efforts: as people move to higher-value work, they need training and education to make the transition smooth.
A key future consideration for AI use is ethics. When AI helps determine government benefits or conduct risk assessments, it reaches directly into the lives of everyday people. Agencies have to be prepared to defend their use of AI to the citizens they serve.
Human bias can also seep into AI algorithms, often unknowingly. Diverse, cross-disciplinary teams can help mitigate that bias, and including ethicists on those teams adds a further safeguard.
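One simple way such bias can be surfaced is by comparing a model’s favorable-outcome rates across demographic groups. The sketch below computes a disparate impact ratio over hypothetical benefit-approval predictions; the data, group labels, and the 0.8 rule-of-thumb threshold are illustrative, and a real fairness audit would go much further.

```python
# Illustrative bias check: compare a model's approval rates across groups.
# The predictions and group labels here are hypothetical.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = benefit approved
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

approved = defaultdict(int)
total = defaultdict(int)
for pred, group in zip(predictions, groups):
    total[group] += 1
    approved[group] += pred

rates = {g: approved[g] / total[g] for g in total}
print("Approval rates by group:", rates)

# Disparate impact ratio: lowest group rate / highest group rate.
# A common rule of thumb flags ratios below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: outcome rates differ substantially across groups.")
```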
The question now is not whether AI will be used in government, but when and to what extent the technology will be implemented. Is your agency thinking about AI? Share in the comments section below.