Measurement practices varied greatly: some organizations measured communications effectiveness extensively, while others operated primarily on intuition, though many took advantage of the availability and ease of common web-enabled measures.

Most organizations used the basic tracking mechanisms available to measure the effectiveness of their online communications; some had learned to apply these analytics to make specific enhancements to their websites and other web-based communications.

Many also reported tracking press coverage and documenting and internally sharing anecdotal comments they receive. A few organizations that consider members their highest-priority audience reported tracking member interactions and participation with some form of relationship management system.

Some interview subjects said they actively seek key audience input or feedback through surveys and focus groups, conducted either in-house or by a contracted third party. Many others considered proactive forms of audience research and results measurement a luxury.

A few organizations said they do nothing to track the effectiveness of particular communications tools or materials; they track only outcomes, which they said they are unable to trace back to specific communications efforts.

Very few of those interviewed believed that training influenced their practice of measurement; many said they do not recall whether measurement was part of training.


Web-enabled measures

For the newsletter and the website, we do the usual analytics and compare the use of them over time.

We do a lot of tracking of email marketing and social media—it’s so much easier when you can use online metrics.

For our newsletter, we measure open rates, views of the videos linked from the newsletter.

We use Constant Contact, so we know who opens [our communications], who deletes them, who unsubscribes. We haven’t done much analysis yet.

We are using Salesforce now to track the activity level of our members…who’s downloading what, posting things, attending workshops.

We have lots of web use data, but I’m not sure about the brochures.

Proactive feedback gathering

We commission third-party evaluations regularly to gain feedback from reporters and the policy community. We ask them, have you heard of us? What do you think of us or this work? What has been helpful to your work?

We do member surveys.

When we do ask for actions, we get good responses, like a significant number of signatures—and this lets us know that these people are reading our materials, that they get it, and that they’re willing to step up if we ask them.

As a result of fundraising and annual campaign calls, we’ve experienced increased giving, and that speaks well for our marketing effectiveness.

We use some of the more accessible measurement tools, e.g., focus groups, that are low-cost and don’t take a lot of time to implement. We look at return on investment with individualized donor campaigns.

Anecdotal feedback

Informally we have measured. Not formally.

We get anecdotal feedback on our 16-pager.

Applying measurement findings

We’ve learned through web analytics that we can target messages to individuals based on their use of our website. We are just starting to do this and already we’ve seen the difference that targeting can make.

Our communications are good and tested. We speak to our audiences and not to ourselves. Also, we have thoughtful data behind the policy changes we recommend, and we’re able to put that data out in a way that makes sense to an average person on the street.

If we see someone unsubscribe who we think shouldn’t unsubscribe, we usually follow up with that person.

We’ve re-engineered to increase [web] traffic in areas where we weren’t seeing it—to get the user actions we desired.

Our user survey led to a change in our newsletter. We identified how people were reading the newsletter… We decided to reduce the length and number of articles.

During healthcare reform, we started with a “call your Congress rep” campaign. When that wasn’t successful, we switched to a “write your editor” approach with an online letter template. Participation increased, and we had several letters published in local papers.

Limits on measuring

We measure as much as is useful because we know that sometimes measuring to the nth degree is expensive [and not worth the expense].

Ultimately, we measure whether the person funds our [initiatives]. We’re still learning that there are better uses of time than measuring communications tools. Each of us gets enough anecdotal feedback and validation of our materials that we’ve had no red flags that suggest we should measure more closely.

Measuring outcomes (beyond outputs)

Our main measure of whether we’re reaching our targets is that we see our points reflected in legislation and in initiative documents.

We don’t measure the effectiveness of communications tools. What we measure is impact—the changes we see in legislation, that we convene groups that come to the table, that we recruit other groups.

Blog activity we measure. We also measure the number of times we are quoted in the press. At the end, it’s more important that projects get implemented—though this is not directly related to communications tools lots of times.

Training influence

We learned how to do some [measurement] at training and some we learned separately.

No. I don’t think training covered measurement.

I don’t remember if we covered measurement.

I’m not sure there was a lot in training about measurement. It was more on the strategic side—not much regarding metrics. We may need a part two of communications training that would focus on measurement.
