In two previous posts (Brookings and ForeSee), I have explored common methods for measuring websites in a Web 1.0 world in order to find applicability for Web 2.0. Essentially, I am providing a summary in order to educate myself and share that knowledge with personnel from government agencies and other organizations that are beginning to think about social media metrics and analytics. Each of these measurement methods is included in a presentation that Ari Herzog and I delivered at the “Social Media for Government” conference a few weeks ago.
Another way that an organization may analyze its website is by using the “Web Site Review Scorecard 7.0” developed by Forrester Research, Inc. As a quick aside, my appreciation goes to Alan Webber, formerly a government website reviewer with Forrester who is now launching his own venture (Ronin Research Group), for reviewing this summary of Forrester’s review methods.
Whereas ForeSee directly asks customers to complete a survey, Forrester engages an expert to evaluate a website using a scorecard review instrument. The expert assumes the role of the customer rather than being the customer. Technically, it’s a “heuristic evaluation,” which is a review of the user experience in light of acknowledged usability principles.
The scorecard begins by asking an organization to articulate its “evaluated user goals” to ground the review in a set of clearly defined outcomes. With this foundation, each review question is scored on a -2 to +2 scale:
• +2 = Strong Pass (Best practice)
• +1 = Pass (No problems found)
• -1 = Fail (One major problem or several minor problems)
• -2 = Severe Fail (Two or more major problems, or one major problem and several minor problems)
The scorecard then asks several questions related to four elements. There are specific criteria under each question, each of which receives a score; a total of 25 (+1 on each of the 25 criteria) constitutes a pass. The elements and some of their associated questions are found below, followed by a short sketch of the scoring arithmetic:
Value
• Is essential content available where needed?
• Is essential function available where needed?
• Are essential content and function given priority in the display?
Navigation
• Do menu categories immediately expose or describe their subcategories?
• Is the wording in hyperlinks and controls clear and informative?
• Are keyword-based searches comprehensive and precise?
Presentation
• Does the site use graphics, icons, and symbols that are easy to understand?
• Do layouts use space effectively?
• Are interactive elements easy to recognize, and do they behave as expected?
Trust
• Does the site present privacy and security policies in context?
• Does site functionality provide clear feedback in response to user actions?
• Does the site help users avoid and recover from errors?
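To make the scoring arithmetic concrete, here is a minimal sketch in Python. Only the -2 to +2 scale and the 25-point pass threshold come from the scorecard description above; the criterion names are hypothetical illustrations drawn from the example questions.

# Hypothetical sketch of the Forrester-style scorecard arithmetic.
# Only the -2..+2 scale and the 25-point pass threshold come from the
# scorecard description above; criterion names are illustrative.

STRONG_PASS, PASS, FAIL, SEVERE_FAIL = 2, 1, -1, -2

# One illustrative criterion per element; a full review scores 25 criteria.
scores = {
    "Value: essential content available where needed": PASS,
    "Navigation: link wording clear and informative": STRONG_PASS,
    "Presentation: layouts use space effectively": FAIL,
    "Trust: privacy policy presented in context": PASS,
    # ...a full review would score the remaining criteria as well
}

total = sum(scores.values())
# A total of 25 (+1 on each of the 25 criteria) is a pass.
print(f"Total: {total} (pass threshold: 25)")
print("PASS" if total >= 25 else "FAIL")

In practice, the value of the exercise lies less in the final number than in forcing a reviewer to record a judgment against every criterion, which is why each of the four elements breaks down into specific, scoreable questions.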
The review scorecard is a bit more comprehensive than the methods covered in the previous posts, and presents yet another way of thinking about an organization’s Web presence. Per Forrester’s own description of the review, it “uncovers flaws that prevent users from accomplishing key goals on Web sites…. To get the most out of the Web Site Review, site owners should identify user goals that drive business metrics, review their sites using the tools available on Forrester’s Web site, and fix usability problems identified in the review.”
So what are the implications and applications for Government and Web 2.0?
A. Agencies may consider combining feedback from both end users (ForeSee) and experts (Forrester). Alan suggested that “agencies should use multiple paths, including detailed analytics, usability reviews, user feedback” – much of which can be done internally.
B. Forrester has a Blog Review tool that is accessible to its clients. Agencies that are current Forrester clients may want to examine this review tool and conduct their own analysis if they use blogs to communicate with citizens.
C. Consider how some of the questions above apply to social media in evaluating the placement of social media tools on your agency’s website. Under “Value”: Where should you place a video or RSS stream in light of its relative importance to your goals and objectives in communicating with constituents? Under “Presentation”: Are you using only the icon of a Delicious or Twitter link rather than spelling out a description of that tool? We cannot assume that our end users know how to navigate a page adeptly or understand what these icons represent.
D. Start with the end in mind. Once your agency has decided to launch a social media tool, use the four Forrester elements and ask again: “How does the use of this tool on our website or elsewhere on the Web connect to our mission, goals and objectives?” That’s where Forrester starts its evaluation – with the “why”. So should you.
Thanks Andrew. This is valuable information. From what I’ve seen, the discussions around Web 2.0 mostly center around social media tools and organizational management. The inclusion of Navigation and Presentation gives emphasis to how the application is built, a very important component of success, since a true Enterprise Web 2.0 app is really about empowering user collaboration. The app should not require much training. Users are encouraged to explore the app, and functions should be intuitive. According to Andrew McAfee, the Harvard professor who is at the forefront of the movement, an Enterprise Web 2.0 app should place high trust in user control, which ultimately sparks unintended social uses…for good, not evil.
Phong – these are excellent points that provide additional insight into the value of this kind of evaluation. The key is to have agencies think about these issues on the front end…versus being evaluated on the back end and making (more costly?) changes later. Your comments highlight the need to have several stakeholders involved in the process of creation – not just the IT person or web manager or the program officer. I always encourage agencies to ask “why?” first….then “who?” The “why” links everything back to the mission, goals and objectives…and the “who” establishes (up front) champion(s), contributors and citizens/customers/constituents. All of these folks should be involved…they’re the real experts in the process.
Very good points to consider. When do you begin to measure? During the pilot phase or after? Is there an assumption on how long the solution has been in use?
Very valuable fire hose of info on the state of government adoption of social media from a measurement perspective.
Given your thoroughness in this, I’m curious whether you have looked at the use of social media among governments and their employees (i.e. recruiting, branding, collaboration). I ask as I just finished our CedarCrestone 12th annual HR Systems Survey and found the public sector a bit of a laggard, so I’m looking for examples of the “best of” in government. http://www.cedarcrestone.com/research