How Can Agencies Mitigate the Risks of Using AI?
As artificial intelligence becomes more prevalent, governments are attempting to control its risks through internal guidance and legislation. Here’s where those efforts stand.
New employees face many onboarding challenges, including learning how to ask for and access agency information. Answer engine technology, though, is revolutionizing the process.
Across government, innovation is happening at the edge, leveraging cloud, artificial intelligence (AI), machine learning (ML) and related technologies.
AI technology holds fascinating prospects for society, but it also brings a range of potential negative and unexpected outcomes, including AI-driven phishing attacks.
The answer engine is a groundbreaking solution that combines the power of large language models, such as ChatGPT, with the irreplaceable insights that knowledge management professionals offer.
Getting used to AI can be an uncertain journey. Will it raise the quality and speed of your output? Will it expand the scope of your job, or put it at risk? Find out what an expert thinks.
If an agency wants to transform how it uses data, it needs to reimagine three core pillars of its data ecosystem: people, processes and technology.
For government positions, where objectivity is non-negotiable, cultural orientation and the integration of artificial intelligence (AI) can inadvertently taint the selection process.
This playbook explains how a modernized approach to observability can help agencies troubleshoot and remediate IT issues faster.
This resource discusses the benefits of automation and best practices for implementing it.