This post was originally published on the Intellitics blog: Ten Things to Monitor As Agencies Invite Input On Open Government Plans
Now that a whole lot of agency.gov/open websites are live and many agencies have indeed set up a “mechanism for the public to […] [p]rovide input on the agency’s Open Government Plan,” it’s time to figure out what to watch out for over the coming weeks and months in order to evaluate the success of these initiatives.
As I noted back in January, my hope is that these new projects will address and improve upon three key issues that we saw during last year’s Open Government Dialogue (namely, lack of convener involvement, insufficient moderation, herding).
All in all, I’ll keep an eye on the following (in no particular order):
- Expectation management: Is the agency clear about the scope of its participation initiative and its promise to the public? Do participants know what impact they can reasonably expect and when?
- Community ground rules: Every agency should have these “rules of engagement” in place and be ready to enforce them if needed. Bonus points for friendly, easy-to-understand language!
- Level of convener involvement/participation: Does the agency become actively engaged in the discussions?
- Quality of moderation: Will the agency manage to keep discussions on topic and moderate distractions in a fair but timely manner?
- Quantity of participation over time: How many participants will sign up? How much content will they produce? (Luckily, IdeaScale exposes a few basic metrics in real time, such as the number of ideas, comments, votes, and registered users; see the logging sketch after this list.)
- Outreach and diversity of participants: Does the agency manage to attract a broad range of participants from various backgrounds? Or do the usual suspects dominate the discussions?
- Conclusion and impact: This one will be especially interesting, as there doesn’t seem to be an end date defined for any of these initiatives. In the case of ongoing participation programs, does the agency at least share interim results?
- Tech support: Does the agency address technical support questions and resolve any issues in a timely manner?
- Project communications: Does the agency offer ways for participants to stay in the loop (or get up to speed quickly) with regard to the current state of the discussion, frequently asked questions, highlights, interim results, next steps, etc.?
- Mood: Overall, how happy is everyone with the process? What’s the energy level? Are things productive? Etc.
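To make the “quantity of participation over time” item a bit more concrete, here is a minimal sketch of how those basic IdeaScale numbers could be logged day by day so that trends and benchmarks can be compared later. The counts are read manually off each forum page and passed in by hand; the script, file name, and column names are my own assumptions for illustration, not anything IdeaScale provides.

```python
#!/usr/bin/env python3
"""Append a daily snapshot of an agency forum's basic participation metrics.

The counts (ideas, comments, votes, registered users) are read manually off
the forum page and passed in on the command line; nothing here talks to
IdeaScale itself. File name and columns are assumptions for this sketch.
"""
import csv
import sys
from datetime import date
from pathlib import Path

LOG_FILE = Path("participation_log.csv")
COLUMNS = ["date", "agency", "ideas", "comments", "votes", "users"]


def log_snapshot(agency: str, ideas: int, comments: int, votes: int, users: int) -> None:
    """Append one timestamped row; write the header if the file is new."""
    write_header = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(COLUMNS)
        writer.writerow([date.today().isoformat(), agency,
                         ideas, comments, votes, users])


if __name__ == "__main__":
    # Example: python log_participation.py hhs 42 130 560 210
    agency = sys.argv[1]
    ideas, comments, votes, users = (int(n) for n in sys.argv[2:6])
    log_snapshot(agency, ideas, comments, votes, users)
```

Once a few weeks of rows have accumulated, week-over-week changes are a simple spreadsheet exercise.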
What else should be on the radar? Sound off in the comments.
It’s worth noting that these items are almost entirely independent of the technology that’s being used.
I think a big one is “driving audience.” There’s a bit of a “build it and they will come” assumption at work, which I don’t think is necessarily true.
Agencies should make sure to promote /open dialogues heavily via their built-in audiences (email lists, partner lists) and through other channels (prominent web placement, outreach to traditional media, leadership discussion at in-person meetings).
One element I would want to measure is how well Open Government Plans integrate agency missions and programs. Specifically, I would look for leadership and involvement by core program offices that have the critical interactions with the public and agency stakeholders. If the Plan is developed and supported solely by the CIO, I believe it will result in limited long-term change in how agencies conduct business.
@David
Excellent point. That’s probably the ultimate litmus test, and it goes back to what impact, exactly, any of these initiatives will have. If the key stakeholders aren’t on board (and agency leadership is one of those stakeholder groups), impact will be marginal.
Tim, these are great things to monitor. I suggest close weekly monitoring of participation on the input sites. I have noticed one or two websites that are clearly just checking a box to meet a deadline rather than aligning the OGD with their core mission. So setting benchmarks is an essential contribution to any assessment of OGD progress.
Nice, someone actually re-posted this list to the OpenState forum: http://openstate.ideascale.com/a/dtd/19960-7038
I’m trying to at least scan the activity in the forums on a daily basis. Looks like some agencies are doing a much better job at moderation and community management than what we saw during phase 1 of the Open Government Dialogue.
I’m collecting screenshots on Flickr here: http://www.flickr.com/photos/planspark/sets/72157623377063588/
Good work monitoring, Tim. Good blog post, too. Good to see you in this community. Building off Mr. Kuehn’s remarks, I think it would be interesting if agencies would take the following steps to improve the quality of the process:
1. Synthesize ideas from brainstorming, targeted or categorized under the various questions or problems related to transparency, collaboration, and participation.
2. Be action- and problem-solving-oriented. Connect people’s ideas, proactively.
3. Correlate ideas with agency missions and goals.
4. Demonstrate the effect of people’s input continually, not just at the end of three months.
5. Merge the IdeaScale sites, both from the earlier effort in summer 2009 and from this year’s.
6. Build feedback loops from the new synthesis of knowledge.
These are six steps I would add to your ten above. Steve is right, too: agencies need to ‘drive’ audiences, but they also need to cultivate online community and interaction among people’s ideas. I would invite agencies to ‘build’ audiences as well. Kuehn is also right: the CIO’s perspective cannot be the only one if we want long-term success; we need the cultural piece as well.
One of the most fundamental and most overlooked pieces of success is new Web 2.0 technology and software that really builds knowledge cumulatively (usually with some manual operation, naturally). Effective problem-solving, whether short- or long-term, at least for non-routine problems, requires synthesizing input into re-emerging presentations for public feedback. That’s a missing link in the entire online process, whether on IdeaScale or otherwise. Those are some initial thoughts… always thinking.
Summarizing and synthesizing the conversations, ideally on a daily basis, and feeding the results back to the participants is one of the biggest opportunities for improvement for these kinds of idea collection and discussion efforts. It’s a manual process, of course, and may require skilled facilitators, but the benefits to the participants are obvious.
I address this challenge in more detail here: 14 Ways to Make Online Citizen Participation Work: “Keep Folks in the Loop!”
We saw a little bit of that during phase 2 of the Open Government Dialogue: each new topic was kicked off with a blog post that summarized the previous one.