The U.S. government routinely collects large amounts of data from sources at home and abroad. But bottlenecks created by legacy IT systems and processes, combined with the challenges of agency culture and of managing vast quantities of data, have limited the government's ability to accelerate digital transformation, which remains a top priority for many federal executives. With the right approach, government agencies and federal data teams can harness modern data operations (DataOps) to expedite digital transformation, protect sensitive user data, remove data silos and build more agile data workflows at scale.
DataOps: A Rising Trend in Federal Data Access, Privacy and Governance
DataOps should be treated as a key asset for building more controllable data workflows and accelerating digital transformation in government. From eliminating bottlenecks to increasing data productivity through automation, a DataOps approach puts government agencies on the path to a more data-driven culture.
One principle that federal executives should focus on is using automation to simplify data access while enforcing data privacy, security and governance policies across their data ecosystems. Federal data analysts, business users, data scientists and engineers need automated, self-service access so they can consume and share data faster while compliance is still ensured. Federal data teams must also curate and manage data throughout its lifecycle, from raw to processed, in order to derive new insights. In fact, on average, 80% of employee time in any data analytics project is spent preparing the data for downstream analysis, with the largest hurdle being data provisioning, the process of users requesting data and having it provided by data owners.
Manual provisioning is time-consuming, but automating data access eliminates significant delays and expedites analytics-driven outcomes. By employing automation across the DataOps ecosystem, agencies can strike a balance between security needs and the utility data professionals require. This only works when the enforcement of data privacy, security and governance policies is automated along with data access controls.
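To make that concrete, here is a minimal sketch of what automated, attribute-based policy enforcement might look like: access is granted only when a user's attributes satisfy a dataset's policy, and protected columns are masked automatically. The dataset name, attributes and masking rule are illustrative assumptions, not any specific product's implementation.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    attributes: set  # e.g. {"role:analyst", "privacy:pii-approved"}

# Illustrative policy: each dataset maps to the attributes required to read it,
# plus columns masked for anyone lacking an unmask attribute.
POLICIES = {
    "census_microdata": {
        "required": {"role:analyst"},
        "masked_columns": {"ssn", "dob"},
        "unmask": {"privacy:pii-approved"},
    },
}

def authorize(user: User, dataset: str, columns: list[str]) -> list[str]:
    """Return the columns the user may see, masking protected ones.
    Raises PermissionError if the user lacks the required attributes."""
    policy = POLICIES[dataset]
    if not policy["required"] <= user.attributes:
        raise PermissionError(f"{user.name} may not access {dataset}")
    masked = set() if policy["unmask"] & user.attributes else policy["masked_columns"]
    return [f"MASK({c})" if c in masked else c for c in columns]

analyst = User("jlee", {"role:analyst"})
print(authorize(analyst, "census_microdata", ["name", "ssn", "zip"]))
# -> ['name', 'MASK(ssn)', 'zip']
```

Because the decision is code rather than a ticket queue, a request that once waited days for a data owner is answered in milliseconds, and the policy is applied identically every time.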
In addition, federal data teams managing software need a continuous integration and continuous deployment (CI/CD) pipeline to test and deploy code, and that same cadence is driving the need for faster data analytics. This is why agencies should automate their data policies at the start of any DataOps project.
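Treating policies as code means they can be tested in that same pipeline. Below is a hedged sketch of a policy regression test a CI/CD job could run on every commit, assuming the User and authorize() helpers from the earlier sketch live in a hypothetical policies.py module under version control.

```python
import unittest

# Assumes the earlier sketch's helpers are kept in a hypothetical
# policies.py module alongside the policy definitions themselves.
from policies import User, authorize

class PolicyRegressionTests(unittest.TestCase):
    def test_analyst_cannot_see_raw_pii(self):
        analyst = User("jlee", {"role:analyst"})
        cols = authorize(analyst, "census_microdata", ["ssn"])
        self.assertEqual(cols, ["MASK(ssn)"])

    def test_unprivileged_user_is_denied(self):
        intern = User("pat", set())
        with self.assertRaises(PermissionError):
            authorize(intern, "census_microdata", ["name"])

if __name__ == "__main__":
    unittest.main()  # executed by the CI pipeline before any deployment
```

If a policy change would accidentally expose PII or grant unintended access, the pipeline fails before the change reaches production.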
4 Principles to Start a Federal DataOps Adoption Journey
Enabling federal digital transformation and adapting to the DataOps paradigm shift can be challenging. For government CIOs and data teams who want data that is easily discoverable and secure, with governance policies that can be easily applied, four principles can guide the start of a DataOps journey:
- First, federal data teams should focus on building open systems and leveraging open APIs, using standards such as REST or GraphQL. Open API architectures are critical across application stacks within government agencies (see the sketch after this list).
- As agencies transition to the cloud, they must treat compute and storage as commodities. Adopting a multi-cloud approach reduces the risk of vendor lock-in, but it must be balanced against the risk of data leakage or loss while still improving data security and expediting DataOps adoption. It also ensures flexibility in future architectures and accelerates technology acquisitions.
- Federal data teams should also adopt emerging data architecture concepts such as data mesh. A data mesh keeps data in the architectures where it already exists and combines them in a loosely coupled, domain-driven design, so existing investments in data management, processing and analytics can be integrated into an overarching model that eliminates data silos while empowering analytics users through self-service.
- Finally, automating data policies for data access, privacy, security and governance will be critical for agencies planning to migrate to a DataOps architecture. Without ensuring policies are enforced through automation, key operational bottlenecks will never be removed.
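As referenced in the first principle, the sketch below shows one way a domain team might publish an open, standards-based API for a data product, in the data mesh spirit of domain ownership. FastAPI is used here because it generates an OpenAPI specification automatically; the route, model and data are illustrative assumptions, not a prescribed agency design.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Transportation Domain Data Products")

class RouteDelay(BaseModel):
    route_id: str
    avg_delay_minutes: float

# Stand-in for the domain's curated, governed dataset.
_DELAYS = [RouteDelay(route_id="DC-01", avg_delay_minutes=4.2)]

@app.get("/data-products/route-delays", response_model=list[RouteDelay])
def route_delays() -> list[RouteDelay]:
    """Expose the data product through a documented, standards-based API."""
    return _DELAYS

# Run with: uvicorn data_products:app
# Interactive docs at /docs; machine-readable spec at /openapi.json.
```

Because consumers integrate against the published contract rather than the underlying storage, the domain team can swap compute or cloud providers without breaking downstream users.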
Shifting to DataOps: Overcoming Roadblocks
Federal data teams should focus on simplifying and separating policy from platform, which lets them build and manage policy centrally and then distribute it into their agency's analytics architectures with full auditing capabilities. This gives federal data teams the ability to automate policies while auditing them as they are applied, creating operational impact for agencies and a foundation on which to build government data architectures.
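A hedged sketch of that separation: a single, centrally defined policy is compiled into platform-specific enforcement statements, and every distribution is recorded in an audit log. The policy, the translation functions and the SQL they emit are illustrative assumptions rather than any vendor's actual syntax.

```python
import datetime
import json

# Illustrative central policy definition, decoupled from any one platform.
POLICY = {"dataset": "benefits_claims", "row_filter": "state = 'VA'"}

def to_snowflake(policy: dict) -> str:
    # Hypothetical translation into a row-level policy for one warehouse.
    return (f"CREATE ROW ACCESS POLICY {policy['dataset']}_rap "
            f"AS (state STRING) RETURNS BOOLEAN -> {policy['row_filter']};")

def to_spark_sql(policy: dict) -> str:
    # Hypothetical translation into a filtered view for a Spark SQL engine.
    return (f"CREATE OR REPLACE VIEW {policy['dataset']}_v AS "
            f"SELECT * FROM {policy['dataset']} WHERE {policy['row_filter']};")

def distribute(policy: dict, audit_log: list) -> None:
    """Push one centrally managed policy to every platform and audit it."""
    for platform, compile_fn in [("snowflake", to_snowflake), ("spark", to_spark_sql)]:
        statement = compile_fn(policy)
        audit_log.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "platform": platform,
            "statement": statement,
        })
        # A real deployment would execute `statement` against the platform here.

audit: list = []
distribute(POLICY, audit)
print(json.dumps(audit, indent=2))
```

The key design choice is that the policy is authored once; adding a new analytics platform means adding one translation function, not re-authoring governance rules in another console.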
Data analytics teams need faster access to data to do their analytical work, particularly data scientists who build and test models and federal employees in operational roles who must understand data trends and analysis in a dashboard. Yet agencies have spent years trying to catalog data, with limited return on the time dedicated to that objective, because data catalogs without deep, granular control and automated policies are often just glorified lists. Agencies should augment their investments in data catalogs by automating the processes those catalogs document.
The Future of DataOps in Government
As agencies modernize their data environments, government CIOs should continue to build open systems designs, which are key to collaboration, and select the tools and service providers that support agency data strategies and compliance requirements. Additionally, government CIOs should focus on collaborating with industry, which will fuel digital transformation — a priority in recent Presidential Executive Orders and agency IT modernization plans.
Adopting a DataOps approach will allow federal data teams to increase the ROI of data initiatives by working faster, improving data access and removing bottlenecks to data-driven outcomes. And with automation, federal data teams can quickly and efficiently adapt to this paradigm shift in data operations across their agencies to achieve mission goals.
Brian Shealey is VP of Public Sector at Immuta. As part of Immuta's leadership team, Brian works to expand Immuta's footprint across the U.S. government, growing sales, developing new partnerships, improving marketing efforts and building go-to-market functions.