This article is an excerpt from GovLoop’s recent guide, “Going Digital: Your Guide to Becoming a Modern Government.” Download the full guide here for case studies, government interviews and best practices for going digital at all levels of government.
For government agencies, there is arguably as much intrigue as there is angst about using artificial intelligence (AI).
On one hand, AI is dramatically changing the way government operates. For example, law enforcement officials are using AI to fight crime, while others use it to conduct geospatial reconnaissance. What makes AI such a sought-after tool is that it can augment human expertise and detect trends within petabytes of data. On the other hand, agencies are keenly aware of the potential dangers of completely removing humans from the loop when drawing conclusions.
Along those lines, there are real concerns about the proprietary nature of AI and the lack of clarity around how algorithms and data are used to inform human decision-making. “We’re still scratching the surface on some of these ‘Black Mirror’-esque questions,” said Chris Sexsmith, Solutions Sales Specialist and Cloud Practice Lead for the Public Sector at Red Hat, a leader in open source technology. “Black Mirror” is a sci-fi series that explores a twisted, high-tech future in which humanity must contend with the unanticipated consequences of new technologies.
Federal agencies are facing a similar dilemma. They must weigh the benefits and consequences of AI adoption, especially as the technology is used to help make high-stakes decisions. In a recent interview with GovLoop, Sexsmith emphasized the importance of agencies incorporating open source in their AI and machine learning strategies to ensure greater transparency. He also explained that Red Hat is playing a critical role in promoting collaborative and open development of AI tools.
One example of that work is a project called Open Data Hub that runs on Red Hat’s OpenShift platform. Open Data Hub is an ecosystem of projects meant to drive community involvement in AI and machine learning while simultaneously providing enterprise solutions derived from open source. “There are big concerns that are generally alleviated when we move to a fully open source model,” Sexsmith said. “Open Data Hub provides that level of clarity and control, without forcing agencies to adopt our methodologies.”
In addition to addressing concerns about AI transparency, agencies must be able to sift through influxes of data to determine what’s important. This is especially true in cybersecurity, where legacy systems send agencies dozens or hundreds of incident alerts; most people eventually become immune to them or turn them off. AIOps platforms, which apply AI and machine learning to IT operations, address that problem by distinguishing the signal from the noise and surfacing relevant information to act on, Sexsmith said.
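To make the signal-versus-noise idea concrete, here is a minimal, hypothetical sketch. It is not Red Hat’s AIOps implementation; the `unusual_sources` function, the threshold, and the sample data are all illustrative assumptions. The idea is to flag only the alert sources whose latest volume deviates sharply from their own historical baseline, so steady background chatter is suppressed:

```python
# Illustrative only: a simple statistical filter for alert noise, not
# Red Hat's AIOps product. Each alert source is compared against its own
# history, and only statistically unusual spikes are surfaced.
from statistics import mean, stdev

def unusual_sources(alert_counts, threshold=3.0):
    """Flag sources whose most recent alert count is more than `threshold`
    standard deviations above their historical mean.

    alert_counts maps a source name to per-interval alert counts,
    oldest first, most recent last.
    """
    flagged = {}
    for source, counts in alert_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # too little history to establish a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # perfectly flat history; avoid dividing by zero
        z = (latest - mu) / sigma
        if z > threshold:
            flagged[source] = z
    return flagged

if __name__ == "__main__":
    # Hypothetical hourly alert counts from two monitoring sources.
    counts = {
        "firewall": [12, 15, 11, 14, 13, 90],    # sudden spike: signal
        "batch-jobs": [40, 42, 38, 41, 39, 43],  # steady chatter: noise
    }
    for source, z in unusual_sources(counts).items():
        print(f"{source}: {z:.1f} standard deviations above baseline")
```

A production AIOps platform would learn far richer baselines (seasonality, correlated events, system topology), but the underlying goal is the same: suppress routine alerts and escalate genuine anomalies.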
For now, many agencies are reluctant to go full speed with AI because rules and policies around the technology are still being developed at the White House and agency levels. There are also ethical implications in how data is fed into AI systems, accumulated, distributed and processed.
“By definition, a proprietary system is not transparent or open, making it a potential factor of concern for agencies that require a full understanding of the data, the system and how conclusions are formed,” Sexsmith said. “That’s why open source is absolutely critical for any agency looking at AI and machine learning.”
Takeaway: Open source should be a key part of your agency’s AI and machine learning strategy to promote greater transparency and understanding around algorithmic decision-making.
Photo Credit: Matteo Kutufa on Unsplash