There are several quotes and memes circulating on social media about the effectiveness of government. One quote I saw recently is attributed to Ronald Reagan – “We should measure welfare’s success by how many people leave welfare, not by how many people are added.” This got me thinking, not about welfare or politics, but about why it is so difficult to accurately track and monitor the effectiveness of government programs. There are numerous metrics on program budget and enrollment, but it is very difficult to identify what drives the changes in these metrics. Is it the economy? Is it changes in policy? Is it due to the efficiency (or inefficiency) of the government itself?
Why is it so difficult to quantify the benefits of these programs?
The challenge is data, but not, as you may think, our ability to access relevant data. The amount of information the government tracks is staggering, and there are numerous data transparency initiatives making this information more and more accessible. The real challenge is in our ability to understand how the data is defined, how it is used, and how the individual data elements are related to data throughout the government’s systems. For example, the amount of money spent on social programs and the number of people served is readily available. This information is often used to assess program effectiveness. However, additional information on income, debt, education, and countless other datasets is maintained outside of these individual programs. Leveraging the data the government maintains across these disparate programs provides the facts required to actually quantify their effectiveness – but this is not possible until we can define, manage, and understand the cross-program and interagency data relationships.
The benefits of defining and managing the relationships of the data across government programs go far beyond quantifying program effectiveness. They provide insights into how to make the government more efficient while reducing fraud, waste and abuse. For example, we can leverage datasets across the government to understand things like how many people reenter social programs and why, and the environmental and demographic conditions that contribute to these changes. Understanding the conditions that lead people to require services provides the facts needed to address the root cause and make the programs more effective. Cross-program and interagency data can also be leveraged to identify potential waste, fraud and abuse by documenting discrepancies in demographic data and identifying redundant or overlapping services across programs.
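To make this concrete, here is a minimal sketch of the kind of cross-program check described above. The program names, fields and records are all hypothetical; a real implementation would work against governed, linked datasets rather than in-memory lists.

```python
# Hypothetical enrollment extracts from two programs. In practice these would
# come from separate agency systems linked through shared identifiers.
program_a = [
    {"person_id": "P1", "dob": "1980-04-02", "service": "housing-assist"},
    {"person_id": "P2", "dob": "1975-11-30", "service": "housing-assist"},
]
program_b = [
    {"person_id": "P1", "dob": "1980-04-02", "service": "housing-assist"},
    {"person_id": "P2", "dob": "1975-01-30", "service": "job-training"},
]

def cross_program_checks(a, b):
    """Flag overlapping services and demographic discrepancies across programs."""
    b_by_id = {rec["person_id"]: rec for rec in b}
    overlaps, discrepancies = [], []
    for rec in a:
        match = b_by_id.get(rec["person_id"])
        if match is None:
            continue
        if match["service"] == rec["service"]:
            overlaps.append(rec["person_id"])       # same service funded twice
        if match["dob"] != rec["dob"]:
            discrepancies.append(rec["person_id"])  # conflicting demographic data
    return overlaps, discrepancies

overlaps, discrepancies = cross_program_checks(program_a, program_b)
print(overlaps)       # ['P1'] – enrolled in the same service in both programs
print(discrepancies)  # ['P2'] – dates of birth disagree across systems
```

The logic is trivial; the hard part, as the rest of this post argues, is knowing that these two identifiers refer to the same person and the same service in the first place.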
It is estimated that 90% of business outcomes are driven by roughly 1% of the data we use.* The primary difficulty with standardizing data and defining data relationships across government programs is identifying that 1% of relevant data within an innumerable volume of datasets. Not surprisingly, there are no credible estimates on the volume of data maintained by the government. This is understandable considering that the U.S. Federal Government represents an estimated one-fifth of the gross domestic product (GDP) and is the largest, most complex organization in the world.
How do we identify what data relationships need to be maintained? The answer is twofold.
- Methodology: Considering the width and depth of the data, the typical bottom-up approach of performing a technical analysis on the data alone is not sufficient. The methodology must provide a process to first identify what data is relevant.
- Information Management Framework: A framework is needed to provide the capability to support the methodology and maintain the required goals, metrics, processes and the relationships to the relevant data.
So where do you start?
Identify relevant data. To identify the relevant data, we must first take a top-down approach. This consists of documenting the strategic goals, as well as the objectives required to meet those goals. Once the goals and objectives are established, we then define the relationships between them and the relevant datasets across the enterprise. To measure a goal, we must also identify the required metrics and align those metrics to the data elements that will be measured. By using this top-down approach we can quickly and efficiently identify the cross-program and interagency data required to meet and measure our goals. All of these definitions and relationships must be documented and maintained in the information management framework.
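The goal → objective → metric → data-element alignment described above can be sketched as a simple traversal. All of the names and element identifiers below are hypothetical, chosen only to illustrate the structure.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    data_elements: list  # e.g. "agency.system.table.column" identifiers

@dataclass
class Objective:
    name: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    name: str
    objectives: list = field(default_factory=list)

    def relevant_data(self):
        """Walk goal -> objectives -> metrics to surface only the data that matters."""
        return sorted({e for o in self.objectives
                         for m in o.metrics
                         for e in m.data_elements})

goal = Goal("Reduce program re-entry", [
    Objective("Measure re-entry rate", [
        Metric("re_entry_rate", ["program_a.enrollment.person_id",
                                 "program_a.enrollment.exit_date"]),
    ]),
    Objective("Understand contributing conditions", [
        Metric("income_at_exit", ["irs.returns.agi"]),
    ]),
])

print(goal.relevant_data())
# ['irs.returns.agi', 'program_a.enrollment.exit_date', 'program_a.enrollment.person_id']
```

Starting from the goal and walking down yields a short, defensible list of relevant elements, rather than an inventory of every dataset the enterprise holds.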
Document the data. Once we have identified what data is relevant, it is time to use the typical bottom-up approach for defining and documenting the following:
- Data dictionary elements
- Business and technical rules
- Relationships to relevant data standards
Note that the documentation about rules encompasses how the data is stored and maintained as well as the technical information for relating disparate data elements across the various programs and systems.
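As an illustration of what this bottom-up documentation might capture, here is a hypothetical data-dictionary entry. The element identifier, rules, and standard mapping are all invented for the example; a real entry would live in the information management framework, not in code.

```python
# One hypothetical data-dictionary entry: definition, storage/handling rules,
# an (illustrative) mapping to an external data standard, and the related
# elements in other programs' systems that it can be joined against.
data_dictionary = {
    "program_a.enrollment.person_id": {
        "definition": "Unique identifier for an enrolled individual",
        "type": "string",
        "rules": [
            "Stored hashed at rest",
            "Never exported in clear text",
        ],
        "standard_mapping": "NIEM person identifier (illustrative)",
        "related_elements": ["irs.returns.taxpayer_id"],
    },
}

entry = data_dictionary["program_a.enrollment.person_id"]
print(entry["related_elements"])  # ['irs.returns.taxpayer_id']
```

It is these rules and related-element links, maintained element by element, that make the cross-program relationships described earlier usable in practice.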
Adopt the right technology. A critical component in this process is the information management framework. Technologies such as Microsoft Excel and SharePoint alone are not sufficient to manage the dynamic relationships across the vast number of datasets that must be considered. Nor do these technologies fully support the alignment of the goals, objectives, rules, dictionaries, metrics and standards required for both a top-down and a bottom-up approach. To be successful, a purpose-built solution such as DATUM’s Information Value Management® platform is required.
There are numerous data transparency and standardization initiatives in place throughout U.S. federal, state and local governments. By applying the right methodology and a solution like DATUM’s, we can leverage these datasets, align them to internal systems, and use fact-based analysis to quantify, improve and streamline government programs far beyond just dollars spent and people served.
Interested in discovering more? Contact us to find out how DATUM can help you leverage your data to realize greater analytical insights, simplify reporting and compliance, and sharpen operational excellence.
*Estimate derived from DATUM’s research and statistics collected over hundreds of customer engagements.