At a recent panel discussion about the potential role of generative AI in government service delivery, one of the speakers made a somewhat unexpected observation:
“The abilities of the various generative AI models are converging and will soon become commoditized. The real value in AI going forward will be in getting government data organized, cleaned up, refined, and prepared for use by AI tools.”
This is an important observation: as the generative AI landscape continues to change rapidly, and as the rules for how agencies use this technology evolve, governments don't need to try to pick a winner in a crowded marketplace. Instead, they can prepare to reap the benefits of generative AI by investing in their own data.
Listening to this, I couldn’t help but think about prior discussions I had with people inside and outside of government almost 10 years ago, at the crest of the government open data movement. If governments could just get their data organized, standardized, and refined — so the thinking went — they could reap all of the many benefits of sharing this data internally among agencies, and externally with innovators and new collaborators.
Similarly, I recall subsequent discussions from the mid- to late-2010s about the advent of new data analytics and business intelligence tools that were becoming more widely available. These tools offered the promise of new insights into how government agencies operate. Access to comprehensive, well-documented government data was viewed as a key ingredient in the successful application of these new tools.
So when we hear industry observers and AI experts tell us today that governments can reap the benefits of the latest wave of AI innovation if they organize and prepare their data, they are right — but this is not a new insight. This has always been true. And it’s also true that data will be foundational to benefiting from the wonders of the next innovation cycle in the technology industry, whatever that ends up producing.
Getting government data ready for use by AI tools (and by future innovative tools) can be a challenging undertaking. This data is often dispersed across different legacy systems, can be poorly documented, can be difficult to access and consolidate, and may have restrictions that impact its dissemination and use.
Here are some steps that government agencies can take right now to get their data ready for the AI revolution that is taking place:
Develop a Data Strategy
- A well-crafted strategy is a good way to organize and prioritize efforts to collect, document and enrich government data, and can be an effective way of communicating the benefits and limitations of these efforts to a diverse set of stakeholders.
- Don’t let the perfect be the enemy of the good. Take an iterative approach to developing and improving your agency’s data strategy. Start small, iterate often.
- Clearly identify your agency's business goals and objectives in your data strategy. Both data users and data stewards need to understand why this work matters.
Focus on Important Data
- It can be tempting to try to catalog every data set that an agency owns, but this can be a daunting task that saps both energy and resources.
- Instead, focus efforts narrowly on those data sets that are specifically tied to the business objectives articulated in your data strategy.
- A narrower focus can also allow you to develop a repeatable process for identifying, assessing, documenting and enriching new data sets as they are identified or become available. Remember, start small and iterate often.
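The triage process described above — identify a dataset, check whether it supports a strategy objective, and flag gaps in documentation — can be sketched in code. This is a minimal, illustrative example; the record fields, objective names, and datasets are all hypothetical, not a standard catalog schema:

```python
from dataclasses import dataclass

# Hypothetical catalog entry for one government dataset.
# Field names are illustrative, not a formal metadata standard.
@dataclass
class DatasetRecord:
    name: str
    steward: str             # office or team responsible for the data
    business_objective: str  # data-strategy objective this dataset supports
    documented: bool         # does a data dictionary exist yet?

def prioritize(records, strategy_objectives):
    """Keep only datasets tied to an objective in the data strategy,
    surfacing undocumented ones first so they get attention."""
    relevant = [r for r in records if r.business_objective in strategy_objectives]
    return sorted(relevant, key=lambda r: r.documented)

# Example: a two-entry catalog checked against one strategic objective.
catalog = [
    DatasetRecord("building_permits", "Permits Office", "speed up permitting", False),
    DatasetRecord("legacy_payroll", "HR", "internal reporting", True),
]
work_queue = prioritize(catalog, {"speed up permitting"})
```

Running the same check each time a new dataset surfaces is what makes the process repeatable: every candidate is assessed against the same strategy objectives rather than cataloged indiscriminately.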
We don’t know what the next technology innovation will look like, or what specific benefits it will bring to government agencies, but we can say with some confidence that it will probably depend on high-quality government data. Those agencies that take steps now to get their data ready will be in the best position to reap the rewards of these new technologies when they mature.
Mark Headd is a Government Technology SME at Ad Hoc. He is the former Chief Data Officer for Philadelphia, serving as one of the first municipal chief data officers in the United States. He holds a Master's degree in Public Administration from the Maxwell School at Syracuse University, and is a former adjunct instructor at the University of Delaware's Institute for Public Administration. He spent 6 years with the General Services Administration's Technology Transformation Services (TTS), serving on the leadership team for 18F and leading customer success efforts for TTS' cloud platform, which supports over 30 critical federal agency systems.