In my previous post I outlined a four-step process for a successful performance management program. To recap:
1. Report performance metric data on a pre-defined schedule.
2. Analyze data for troubling trends or missed targets, and research the root cause of any problems.
3. Take corrective action for metrics where the target was missed or the data is trending in the wrong direction.
4. Repeat the process for the next reporting period.
I’ll focus on the first step in this blog post as part of the overall effort to develop a performance management lifecycle outline. Most of the setup work in a successful performance management program happens in this area. Each metric should have a well-defined set of counting rules, a methodology for collecting data, and a reporting period. Other attributes, such as priority and stakeholders, may also be important, but the core is the definition, the methodology, and the reporting period. Depending on the type of agency, there are a number of pre-defined definitions and counting rules (for an example, see this previous post), so there is no need to re-invent the wheel if those measures are agreeable. Data systems will vary by jurisdiction, and the methodology will depend on each system’s ability to generate performance data.
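To make the metric attributes above concrete, they could be captured in a simple structured record. This is only a sketch; the `Metric` class and the sample permit metric are hypothetical illustrations, not taken from any particular agency's data system:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Metric:
    """One performance metric, with the core attributes described above."""
    name: str
    counting_rules: str             # what counts toward the metric
    methodology: str                # how the underlying data is collected
    reporting_period: str           # e.g. "monthly", "quarterly", "annually"
    target: Optional[float] = None  # target to compare against, if any
    stakeholders: List[str] = field(default_factory=list)

# A hypothetical metric, purely for illustration
permits = Metric(
    name="Permits processed within 10 business days",
    counting_rules="Count applications closed within 10 business days of receipt",
    methodology="Extracted monthly from the permitting system's case records",
    reporting_period="monthly",
    target=0.90,
)
```

Writing each metric down in this form forces the definition, methodology, and reporting period to be stated explicitly before any data is collected.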
Once the difficult part of defining the metric and its data is complete, requiring managers to report their data on a regular time frame is essential to a successful program. The schedule should be predictable, and the required metrics should not change often. Getting managers to buy into the program will depend on the level of effort and predictability of each reporting period. If they are responsible for a large number of metrics, then a less frequent reporting period (annual or semi-annual) is useful. The trade-off is slower feedback when metrics take a turn for the worse or when new initiatives are launched. For fewer measures, more frequent reporting (monthly or quarterly) is helpful for root-cause analysis and less burdensome as well. The important thing to remember is that reporting performance data takes time and resources, and managers will grow resentful of heavy, frequent reporting requirements, particularly if the benefits are not apparent.
After data is reported, it will often need some “scrubbing” for errors prior to undergoing step #2 above, trend analysis. We’ll look at the specifics of that in a future post, but the important takeaway here is to make performance reporting well-defined and as simple as possible for the relevant managers. This will help ensure that the agency has the performance metrics necessary to make data-driven decisions.
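As a minimal sketch of what that “scrubbing” pass might look like, the check below assumes each report is a (metric, value) pair and that values are rates expected to fall between 0 and 1; the `scrub` function and sample data are hypothetical:

```python
def scrub(reports):
    """Split reported values into clean rows and rows needing follow-up.

    Rejects missing values and rates outside the expected [0, 1] range,
    rather than silently dropping them, so managers can be asked to correct
    their submissions.
    """
    clean, rejected = [], []
    for metric, value in reports:
        if value is None or not (0.0 <= value <= 1.0):
            rejected.append((metric, value))  # flag for follow-up
        else:
            clean.append((metric, value))
    return clean, rejected

clean, rejected = scrub([("Permits", 0.92), ("Fleet", None), ("Parks", 1.45)])
print(clean)     # [('Permits', 0.92)]
print(rejected)  # [('Fleet', None), ('Parks', 1.45)]
```

Flagging bad rows instead of deleting them keeps the error-correction loop with the reporting manager, which fits the goal of keeping the process transparent and predictable.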
This was originally published on my blog at http://measuresmatter.blogspot.com/
Nice work on this, Joseph – I enjoyed your blog and posted it to our Twitter page (GovPartner), as I think our followers will enjoy it as well.
Thanks!
Gabriela
Thanks for the clear outline. Sometimes I get caught up in the muddle of getting things done and forget to go back to basics. I appreciate the reminder.
Thanks — hopefully this helps keep other managers on track when trying to determine the effectiveness of their agencies. I’ll be fleshing out the rest of the outline in the coming weeks, so stop by for the rest or follow my blog at http://measuresmatter.blogspot.com/