In my Web Manager University course, “Delivering Great Customer Service – Essentials for Government Web Managers,” I do a section on “metrics that matter.” I often start by asking folks if they collect performance data. Heads nod. Then I ask them what they do with it – how they use it. Most use stats to track page views and unique visitors. But if I ask them how they measure customer service…well, I usually get blank looks.
Gosh, I remember so well struggling to figure out how to measure the web’s impact on mission achievement and trying to decipher what all that data was telling me, when I managed HUD’s website. We got tons of statistics, spent a year using the American Customer Satisfaction Index (ACSI), had a contract with Nielsen NetRatings for a while, and did a little usability testing. All the right things. But how do you add up all that data to find out if your website really is providing great customer service?
The truth is that many web teams haven’t identified specific customer service and mission completion objectives to measure. What to do? Step back. Pick just a few really important things to measure, pick just a few really good measures, and follow through. Use those metrics that matter.
The process is common sense. (After the list, there’s a small sketch of one way to write it all down.)
- Start by identifying a handful – and I mean 10 or fewer – of key objectives for your website. At a minimum, 6 of those objectives should be the 6 governmentwide Customer Service objectives identified by the Federal Web Managers Council (FWMC).
- For each objective, figure out 3-4 key performance indicators (KPIs) that will tell you whether or not you’re achieving the objective. Don’t get down in the weeds. Keep it simple.
- For each KPI, decide what data you need to collect. And I encourage folks to collect data in different ways, from different sources – statistics, usability testing, customer service surveys, writing quality reviews, etc. But don’t over-collect. Enough is as good as a feast.
- Collect and analyze the data to establish a performance baseline.
- Pinpoint places where you can improve customer service, and establish performance goals – like reducing task time, improving the percentage of successful completions, or reducing errors.
- Make incremental improvements, and collect and analyze data again. And again. And again.
- Report your findings to your web team, your bosses, your agency, and – in the spirit of transparency – the public. Explain the problems in terms of their impact on customer service. And let everyone know what you’re going to do to fix the problems. Let everyone know that you care about making your website as useful and usable as possible.
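If it helps to see that structure written down, here’s a minimal sketch – plain Python, with made-up objective names, data sources, and targets – of one way to keep objectives, KPIs, and goals in one place:

```python
# A minimal sketch of tying each KPI to an objective, a data source,
# a baseline, and a goal. The names and numbers are illustrative,
# not an official list.

metrics_plan = {
    "Customers can complete common tasks efficiently": {
        "avg_task_time_seconds": {"source": "usability testing", "baseline": None, "goal": 120},
        "task_completion_rate": {"source": "usability testing", "baseline": None, "goal": 0.90},
        "funnel_dropout_rate": {"source": "site statistics", "baseline": None, "goal": 0.20},
    },
}

def record_baseline(plan, objective, kpi, value):
    """Store a measured baseline so later rounds can be compared against it."""
    plan[objective][kpi]["baseline"] = value

# Hypothetical first-round measurement:
record_baseline(metrics_plan, "Customers can complete common tasks efficiently",
                "task_completion_rate", 0.72)
```

The code isn’t the point – the discipline is: every KPI belongs to an objective, comes from a named data source, and has a baseline and a goal.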
Sounds easy, doesn’t it? It’s not. It takes hard work to figure out those KPIs and make sure you get the right data to measure them. And it takes time and dedication to analyze the data, decide what you can do to improve, and make improvements. But gosh – this is so important. This is how you make your customer service the best that it can be.
Too often, web teams and agencies collect too much data. They get overwhelmed and have trouble focusing on the biggest problems.
Or they don’t follow through. How many times do you think to yourself, “I know we’ve got a problem here – the data says it – but we don’t have the time or money to fix it right now.” So you keep collecting data that tells you the same thing. It’s a waste of time and money, not to mention a disservice to your customers.
Or they collect data first and then try to figure out what it measures. “Gee, we’ve got this great stats package giving us all this information. Hmm…what can we learn from it?” Doesn’t it make more sense to decide what you want to measure first and then find the data that will help you?
Or sometimes we think the data tells us something it doesn’t. For example, customer satisfaction surveys are important and are one great indicator of your website’s effectiveness. But they don’t give you facts about efficiency – they only report people’s feelings and perceptions about your site. I’ve heard people say, “Oh, this site is so pretty and professional-looking. I love it.” But then you watch them use the site, and they have a hard time.
OK – so here’s an example of the way it should work. Let’s take FWMC Customer Service Objective #3: Customers should be able to complete common tasks efficiently. How do you measure that?
Well, I’d start with these three KPIs:
- Length of time it takes the average person to complete the task
- % of people who complete the task
- % of people who get the right answer
I’d measure those through:
- Usability testing: Did they finish the task? How long did it take? How many wrong turns did they take? How many clicks did it take? What words didn’t they understand? Did they come up with the right answer?
- Statistics: How many people come to the page to start the task? How many people visit each of the subsequent pages required to complete the task? What’s the drop-out rate?
- Plain language peer reviews: Did each of the pages required to complete the task score well?
Then I’d corroborate with a customer satisfaction survey for each of those tasks. Did people think the task was easy to complete?
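And the arithmetic behind those KPIs is trivial. Here’s a rough sketch, with hypothetical usability-test sessions standing in for real observations:

```python
# A rough sketch of computing the three KPIs from usability test
# sessions. Each record is hypothetical: (completed the task,
# seconds taken, arrived at the right answer).

sessions = [
    (True, 95, True),
    (True, 210, False),
    (False, 300, False),
    (True, 130, True),
    (True, 88, True),
]

completed = [s for s in sessions if s[0]]
completion_rate = len(completed) / len(sessions)
avg_time = sum(s[1] for s in completed) / len(completed)  # finishers only
correct_rate = sum(1 for s in sessions if s[2]) / len(sessions)

print(f"Completion rate: {completion_rate:.0%}")           # 80%
print(f"Average time to complete: {avg_time:.0f} seconds")  # ~131 seconds
print(f"Right-answer rate: {correct_rate:.0%}")             # 60%
```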
Are these perfect metrics? Probably not. Are they adequate? Yes. They’d give you a good start on figuring out where the problems are (too many clicks? Wrong words or terms? Bad layout or design?) and how to fix them (reduce steps, change words, use more white space or bullets or sub-heads).
And you can do this pretty efficiently. You can identify most of your usability problems by testing 3-5 users – almost any users. It takes only about 10 minutes to do a plain language review of a web page – that includes both individual reviews and group consensus. And you know exactly which site traffic stats you need, so you don’t need to plow through that entire WebTrends report.
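That “3-5 users” rule of thumb isn’t magic, by the way – it comes from a simple probability model in the usability literature (Nielsen and Landauer). Here’s the arithmetic; the per-user detection rate of roughly 0.31 is their published average, so treat it as an assumption, not a law:

```python
# If each test user exposes a given problem with probability p,
# then n users expose about 1 - (1 - p)**n of all problems.
# p = 0.31 is the average Nielsen and Landauer reported.

p = 0.31
for n in (1, 3, 5, 10):
    print(f"{n} users: ~{1 - (1 - p)**n:.0%} of problems found")
```

Five users gets you to roughly 84% of problems – which is why a handful of quick tests beats waiting for the perfect study.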
Don’t waste your time on data for the sake of data. Think about what’s important – what management purpose you’re trying to achieve. Focus on measuring customer service (starting with those 6 governmentwide objectives) and impact on mission. Don’t over-think this. Don’t worry that it’s not absolutely perfect. Just start at the beginning – what are the most important objectives, how will we know if we’ve achieved them, and what data do we need to measure those indicators? That’s metrics that matter.
Most web analytics tools in common use let you instrument web processes through “Scenario” or “Goal” analysis. Properly set up, they can track visitors step by step through defined processes and tell you how many succeed, how many fail or abandon, and – perhaps most importantly – when and where they fall out of the process. Knowing that last item helps you focus on the steps where the most people fall out, figure out why, and fix them.
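If your analytics package won’t surface that for you, the analysis is easy to rough out yourself. Here’s a minimal sketch – the step names and visitor counts are hypothetical stand-ins for the page-by-page numbers your tool reports:

```python
# A minimal sketch of funnel drop-out analysis. Each entry is a step
# in a defined process and the number of visitors who reached it
# (hypothetical numbers).

funnel = [
    ("start-application", 1000),
    ("enter-details", 640),
    ("review", 500),
    ("confirmation", 410),
]

print(f"Overall success rate: {funnel[-1][1] / funnel[0][1]:.0%}")
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {1 - next_n / n:.0%} drop out")
```

Run that and the 36% drop between the first two steps jumps out – that’s the page to fix first.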