
Download a free Financing Not Fundraising e-book when you sign up for email updates from Social Velocity.

A Nonprofit Culture of Measurement: An Interview with Mary Winkler

By Nell Edgington



In today’s Social Velocity interview I’m talking with Mary Kopczynski Winkler, senior research associate with the Center on Nonprofits and Philanthropy at the Urban Institute. Mary is a nationally recognized expert in the field of performance measurement and management. She is a founding member of the Leap of Reason Ambassadors Community, a private community of nonprofit thought leaders and practitioners committed to increasing the expectation and adoption of high performance in the social sector; the community released the Performance Imperative earlier this year.

You can read past interviews in the Social Velocity interview series here.

Nell: PerformWell is an effort among Urban Institute, Child Trends and Social Solutions to offer tools and strategies for human services nonprofits to measure their work. How successful has this effort been and what are your plans for continuing to grow the capacity of nonprofits to measure their work?

Mary: PerformWell is a free, interactive, web-based resource designed to help human services nonprofits gain knowledge about performance management, access the tools and resources they need to better serve clients and meet outcomes, and obtain strategies for effective, efficient service delivery. Since its launch in March 2012, demand for PerformWell has exceeded our expectations: more than 400,000 people have visited the site (from all 50 states and more than 200 countries), 25,000 individuals have registered for our webinars, and more than 140,000 assessment tools have been downloaded. Webinar satisfaction ratings are routinely high, but we are working to put additional systems in place to track how nonprofits are using various aspects of PerformWell and to what end.

In 2013, the PerformWell partners engaged in a business planning process with Root Cause. Market research confirmed our views about a large unmet need for performance measurement knowledge and high interest in the resources offered through PerformWell, but it also revealed demand for additional products and services, such as a webinar training series, regional user conferences, and customized engagements with nonprofits. Users also wanted a more interactive web experience.

Our short- to medium-term goals include substantial updates to the website to improve the user experience (we also plan to solicit user feedback during and after these changes are implemented); development of additional products and services better aligned with the Root Cause market research; and exploration of partnerships and sponsorships with nonprofits, consultants, and funders to generate additional revenue and resources. These will expand PerformWell’s content, reach, and use, and improve the adoption and application of performance measurement and management practice across the nonprofit sector.

Nell: Some believe that measurement is perhaps more straightforward for human services nonprofits — you can measure change to an individual’s behavior or life circumstances — but measurement is more difficult for arts organizations or advocacy groups. What are your thoughts on that?

Mary: Sometimes I think this argument serves as a convenient excuse for organizations to avoid putting even the most basic systems in place to track progress or otherwise hold themselves accountable to their constituents. In 2007, with support from the Hewlett Foundation, the Urban Institute and the Center for What Works published a series of simple frameworks, as part of our Outcome Indicators Project, to help nonprofits in 14 program areas engage in performance measurement. Two of these areas are advocacy and performing arts. The Urban Institute also provided research support to the Performing Arts Research Coalition (PARC) to develop standardized surveys that help performing arts organizations across the country obtain more routine and better data from audience members, subscribers, and the community.

Establishing a causal link between advocacy or arts interventions and impact is, in my view, more challenging than for human service organizations. In the case of advocacy organizations, it can be very difficult to isolate the contributions of a particular campaign or even organization to a policy or legislative outcome.

It is, however, possible to devise strategies for capturing information on earlier stage outcomes, such as increased awareness.

I recently participated on a panel at the annual OPERA America conference on “internal metrics for civic impact.” As much as measurement activities have evolved since the days of the PARC coalition, I observed that most of the metrics and data points were still very internally focused on measures of participation and attendance, and fall well short of anything approximating community or civic impact. I encouraged those present to step away from a focus on an individual opera company’s contribution to civic impact, and recommended instead a collective impact approach in collaboration with other arts, civic, and education organizations in a community.

In this case, I even hesitated to use the word “impact,” and suggested the group consider distinguishing between collective contribution toward a modest set of civic outcomes (e.g., performing arts promote understanding of other cultures or are a source of pride for those in the community) and the more traditional causal attribution usually reserved for the term “impact.”

Nell: Caroline Fiennes, among others, has argued that individual nonprofits should actually do less evaluation and rather rely on larger research studies to prove their theories of change. What do you make of that argument and the difference between evaluation and measurement? 

Mary: I agree with some of what Caroline puts forth here, particularly her observations about “withholding (unflattering research) and publication bias,” an issue that University of Wisconsin-Madison professor Donald Moynihan has termed “performance perversity.” I also agree with her suggestion that evaluations be done by a third party to reduce any tendencies toward subjective reporting or bias, and with her endorsement of greater consideration of shared metrics.

I am troubled, however, by the fact that only 7% of UK social-purpose organizations are interested in improving services, and by her somewhat cavalier suggestion that monitoring and evaluation “wastes time and money.” Although she is not alone in this second argument (see, for example, William Schambra’s “take-down” of Charity Navigator’s efforts to encourage greater use of performance metrics in “Charity Navigator 3.0: The Empirical Empire’s Death Star?”), such sweeping generalizations undermine the legitimate and courageous attempts of many nonprofits to use data for program improvement.

I agree with Phil Buchanan that there is a “moral imperative” to make an honest attempt to understand whether resources are being used effectively, and certainly to guard against the possibility that programs could be doing more harm than good, as organizations like the Latin American Youth Center and the Harlem Children’s Zone have discovered and since corrected.

I see measurement as a necessary practice for every nonprofit. But measurement is different from evaluation. Nonprofits need to start by developing a measurement infrastructure that makes sense for their organization, one that supports their mission and commitment to serve and improve the lives of their clients or constituents, not one that is merely reactive to funder demands. It is precisely this kind of infrastructure that can lay the groundwork for more rigorous evaluation, at a time that is appropriate for the organization’s stage of development.

I see measurement and evaluation along a continuum of inquiry that should be designed to support the learning objectives of an organization. Measurement helps organizations take the day-to-day or month-to-month pulse of various activities and program results. These snapshots in time, or scorecards, help managers and service providers understand trends and provide an opportunity to correct, modify, or otherwise adapt operations.

Evaluation is, by definition, more rigorous, more expensive, and takes considerably more time to see results. Evaluation serves a very important role as organizations make decisions about whether to continue, grow, scale or otherwise expand services, but it needs to occur at the right time – and certainly not as an organization is just getting off the ground.

Nell: It is difficult for most nonprofits to find funding for measurement work. For example, in the most recent Nonprofit Finance Fund State of the Sector survey, 69% of nonprofit respondents said their funders rarely or never cover the costs of measurement. How do we change that, or can we?

Mary: Although I am sympathetic to this argument, and frequently argue that foundations have a unique and critical role to play in building nonprofits’ capacity for measurement and evaluation, I think we need to change the conversation to one that focuses on the shared responsibility of nonprofits and funders for making the necessary investments in measurement and evaluation.

If nonprofits are truly ready to embrace a culture of measurement and high performance, then they need to reorganize operations in ways that embed measurement practice at every level of the organization, and change expectations from front-line workers all the way to the board of directors.

This means things like: defining expectations about data collection in job descriptions; setting aside a small percentage of funding for evaluation as a line-item in every grant request; and using data in meaningful ways in everyday discourse. Likewise, funders need to work more collaboratively with grantees to understand the data needs and capacity of nonprofits, consider funding longer-term grants that build in support for measurement and evaluation, and stop asking for data or reports that aren’t part of the conversation about continuous improvement and learning. Funders, too, can support field-building efforts to develop additional tools and resources in support of the measurement work nonprofits seek to accomplish.

There are a number of exemplary efforts already underway, including the Edna McConnell Clark Foundation’s PropelNext and the World Bank Group’s support of Measure4Change and the East of the River Initiative. Each of these efforts features targeted grants to build the measurement and evaluation capacity of participating nonprofits; access to technical assistance resources; and a community of practice to help grantees learn from each other, share successes and failures, and reduce what is all too often a sense of isolation among measurement and evaluation practitioners.

Photo Credit: Urban Institute



About the Author: Nell Edgington is President of Social Velocity (www.socialvelocity.net), a management consulting firm leading nonprofits to greater social impact and financial sustainability. Social Velocity helps nonprofits grow their programs, bring more money in the door, and use resources more effectively. For more information, check out Social Velocity consulting services and clients.




6 Comments to A Nonprofit Culture of Measurement: An Interview with Mary Winkler

Caroline Fiennes
June 9, 2015

I’m really delighted that this is being debated.

One thing that I’d like to clarify is that I’ve not said (and don’t think) that *all* charity monitoring and evaluation wastes time and money. That’s clearly not the case. The Economist had a nice example two weeks ago of a charity (SolarAid) using data to improve, and many of us know of many other such examples.

It’s my view – and the implication of the evidence – that *some* evaluation wastes time and money, most notably by being poor quality and/or not being published. [We’re not alone here: it’s thought that maybe 85% of medical research is wasted for reasons including these.]

I’m not blaming charities for all that. For example, Giving Evidence recently found a charity whose government contracts routinely include about £6k (~$10k) for evaluation, which is far too low for anything rigorous and reliable.

Mary Kopczynski Winkler
June 10, 2015

Caroline, I appreciate the back and forth, here, and completely agree that we should NOT be promoting an expectation that evaluation – especially when poorly done or insufficiently resourced – is a minimum requirement.

Nonprofits and charities should attempt to capture basic data about their services, outputs and client results…but these data need to be meaningful and useful to the program.

As they mature, expand, and grow their operations, more should be expected, including consideration of whether more rigorous evaluation methods are appropriate.

PerformWell recently sponsored a webinar, based on an essay by Kris Moore et al. (in Mario Morino’s Leap of Reason monograph), arguing that performance measurement is often the neglected step in becoming an evidence-based program.

Here is a link to this webinar: http://www.performwell.org/index.php/webinars/323-becoming-evidence-based-a-step-by-step-approach

And a link to Leap of Reason, which contains the original essay: http://leapofreason.org/get-the-books/leap-of-reason/get-leap-of-reason/

Isaac Castillo
June 11, 2015

Great conversation Caroline and Mary. I wanted to add a few things here.

In my opinion, people conflate and confuse the terms performance management and evaluation. Making things worse, people underestimate how much it costs to do a ‘good’ evaluation.

Caroline – while I would agree that spending $10K on an evaluation is probably a waste of money (because a good evaluation will cost much more than that), spending that same $10K on performance management within the nonprofit can lead to some really powerful and useful information.

Part of this problem is funder related. Funders keep pushing for ‘impact’ without understanding the term (in the evaluation sense) and while underestimating the cost to get it. Nonprofits then fall into the trap of spending money to try to prove ‘impact’ without themselves knowing what that means or what it costs to do right.

That creates a vicious cycle where nonprofits do bad measurement just to satisfy funders (who themselves often don’t know enough to help with the process).

However, if a nonprofit spends that $10K on inward-looking performance management focused on getting data that is useful to the organization, it can learn a lot. And improve its services.

To help people understand the differences between the sister concepts of performance management and evaluation, Ann Emery and I created this short (5 minute) presentation:
https://www.youtube.com/watch?v=nC7AG8XxrI4

My hope is to get nonprofits to collect data (and spend money on data collection) for the purpose of improving their services. Not to keep funders happy.

Any amount that nonprofits spend on measuring the effectiveness of what they do is money well spent.

Mary Winkler
June 11, 2015

Isaac, thanks for bringing up the issue of terminology. Confusion over terms is a very big problem and contributes to much misunderstanding. Impact, for example, is a term I try to avoid using at all in the performance measurement/management conversation. I look forward to working with other members of the Leap of Reason Ambassadors Community, which has a working group identified, to try to make some headway sorting through the myriad terms that, when used or interpreted differently, can stall progress (or worse, become yet another excuse for inaction).

Isaac Castillo
June 11, 2015

Agreed Mary. I actually never use the term “impact”. And I try to correct others when they use it – or at least ask them what they mean when they use the term.

In the evaluation world, “impact” usually is only obtained through RCTs (or even multiple RCTs). Since most funders aren’t willing to pay for RCTs, I tell them it is unfair to expect nonprofits to demonstrate ‘impact’.

I prefer the term effectiveness – or even outcomes.

Bob Penna
June 25, 2015

This has been a very valuable discussion. Your give-and-take about the word “impact” is interesting. Our friend David Hunter is ALWAYS on me about using the word too freely. I will try to keep Isaac’s point in mind…especially in front of audiences that often freely interchange “performance,” “results,” and “impact” when we’re discussing outcomes. Thanks guys!
