In today’s Social Velocity interview I’m talking with Mary Kopczynski Winkler, senior research associate with the Center on Nonprofits and Philanthropy at the Urban Institute. Mary is a nationally recognized expert in the field of performance measurement and management. She is a founding member of the Leap of Reason Ambassadors Community, a private community of nonprofit thought leaders and practitioners committed to increasing the expectation and adoption of high performance in the social sector. The community released the Performance Imperative earlier this year.
You can read past interviews in the Social Velocity interview series here.
Nell: PerformWell is an effort among Urban Institute, Child Trends and Social Solutions to offer tools and strategies for human services nonprofits to measure their work. How successful has this effort been and what are your plans for continuing to grow the capacity of nonprofits to measure their work?
Mary: PerformWell is a free, interactive, web-based resource designed to help human services nonprofits gain knowledge about performance management, access the tools and resources they need to better serve clients and meet outcomes, and obtain strategies for effective, efficient service delivery. Since its launch in March 2012, demand for PerformWell has exceeded our expectations: the site has drawn more than 400,000 visitors (from all 50 states and more than 200 countries), 25,000 individuals have registered for our webinars, and more than 140,000 assessment tools have been downloaded. Webinar survey ratings are routinely high, but we are working to put additional systems in place to track how nonprofits are using various aspects of PerformWell and to what end.
In 2013, the PerformWell partners engaged in a business planning process with Root Cause. Market research confirmed our view that there is a large unmet need for performance measurement knowledge and high interest in the resources offered through PerformWell, but it also revealed demand for additional products and services, such as webinar training series, regional user conferences, and customized engagements with nonprofits. Users also wanted a more interactive web experience.
Our short- to medium-term goals include substantial updates to the website to improve the user experience (we also plan to solicit user feedback during and after these changes are implemented) and development of additional products and services better aligned with the feedback from Root Cause’s market research. We are also exploring partnerships and sponsorships with nonprofits, consultants and funders to generate additional revenue and resources, so that we can expand the content, reach and use of PerformWell and improve the adoption and application of performance measurement and management practice across the nonprofit sector.
Nell: Some believe that measurement is perhaps more straightforward for human services nonprofits — you can measure change to an individual’s behavior or life circumstances — but measurement is more difficult for arts organizations or advocacy groups. What are your thoughts on that?
Mary: Sometimes I think this argument serves as a convenient excuse for organizations to avoid putting even the most basic systems in place to track progress or otherwise hold themselves accountable to their constituents. In 2007, with support from the Hewlett Foundation, the Urban Institute and the Center for What Works, we published a series of simple frameworks, as part of our Outcome Indicators Project, to help nonprofits in 14 program areas engage in performance measurement. Two of these areas are advocacy and performing arts. The Urban Institute also provided research support to the Performing Arts Research Coalition (PARC) to develop standardized surveys to help performing arts organizations across the country obtain more routine and better data from audience members, subscribers, and the community.
Establishing a causal link between advocacy or arts interventions and impact is, in my view, more challenging than for human service organizations. In the case of advocacy organizations, it can be very difficult to isolate the contributions of a particular campaign or even organization to a policy or legislative outcome.
It is, however, possible to devise strategies for capturing information on earlier stage outcomes, such as increased awareness.
I recently participated on a panel at the annual OPERA America conference on “internal metrics for civic impact.” As much as measurement activities have evolved since the days of the PARC coalition, I observed that most of the metrics and data points were still very internally focused on measures of participation and attendance, and fall well short of anything approximating community or civic impact. I encouraged those present to step away from a focus on an individual opera company’s contribution to civic impact, and recommended instead more of a collective impact approach, in collaboration with other arts, civic, and education organizations in a community.
In this case, I even hesitated to use the word “impact,” and suggested the group consider distinguishing between collective contribution toward a modest set of civic outcomes (e.g., performing arts promote understanding of other cultures or are a source of pride for those in the community) and the more traditional causal attribution usually reserved for the term “impact.”
Nell: Caroline Fiennes, among others, has argued that individual nonprofits should actually do less evaluation and rather rely on larger research studies to prove their theories of change. What do you make of that argument and the difference between evaluation and measurement?
Mary: I agree with some of what Caroline puts forth here – particularly her observations about “withholding (unflattering research) and publication bias” – an issue that University of Wisconsin-Madison professor Donald Moynihan has termed “performance perversity.” I also agree with her suggestion that evaluations be done by a third party to reduce any tendencies toward subjective reporting or bias, and with her endorsement of greater consideration of shared metrics.
I am troubled, however, by the finding that only 7% of UK social-purpose organizations are interested in improving services, and by her somewhat cavalier suggestion that monitoring and evaluation “wastes time and money.” Although she is not alone in this second argument (see, for example, Bill Schambra’s “take-down” of Charity Navigator’s efforts to encourage greater use of performance metrics in “Charity Navigator 3.0: The Empirical Empire’s Death Star?”), such sweeping generalizations undermine the legitimate and courageous attempts of many nonprofits to use data for program improvement.
I agree with Phil Buchanan that there is a “moral imperative” to make an honest attempt to understand whether resources are being used effectively, and certainly to guard against the possibility that programs are doing more harm than good, as organizations like the Latin American Youth Center and the Harlem Children’s Zone have discovered and since corrected.
I see measurement as a necessary practice for every nonprofit. But measurement is different from evaluation. Nonprofits need to start by developing a measurement infrastructure that makes sense for their organization – one that supports their mission and commitment to serve and improve the lives of their clients or constituents – not one that is merely reactive to funder demands. It is precisely this kind of infrastructure that can lay the groundwork for a more rigorous evaluation, at a time that is right and appropriate for the organization’s stage of development.
I see measurement and evaluation along a continuum of inquiry that should be designed to support the learning objectives of an organization. Measurement helps organizations to take the day-to-day or month-to-month pulse of various activities and program results – these snapshots in time or scorecards help managers and service providers understand trends and provide an opportunity to correct, modify or otherwise adapt operations.
Evaluation is, by definition, more rigorous, more expensive, and takes considerably more time to see results. Evaluation serves a very important role as organizations make decisions about whether to continue, grow, scale or otherwise expand services, but it needs to occur at the right time – and certainly not as an organization is just getting off the ground.
Nell: It is difficult for most nonprofits to find funding for measurement work. For example, in the most recent Nonprofit Finance Fund State of the Sector survey, 69% of nonprofit respondents said their funders rarely or never cover the costs of measurement. How do we change that, or can we?
Mary: Although I am sympathetic to this argument and argue frequently that foundations have a unique and critical role to play in helping to build the capacity of nonprofits to better engage in measurement and evaluation, I think we need to change the conversation to one that focuses on the shared responsibility between nonprofits and funders for making the necessary investments in measurement and evaluation.
If nonprofits are truly ready to embrace a culture of measurement and high performance, then they need to reorganize operations in ways that embed measurement practice at every level of the organization, and change expectations from front-line workers all the way to the board of directors.
This means things like: defining expectations about data collection in job descriptions; setting aside a small percentage of funding for evaluation as a line-item in every grant request; and using data in meaningful ways in everyday discourse. Likewise, funders need to work more collaboratively with grantees to understand the data needs and capacity of nonprofits, consider funding longer-term grants that build in support for measurement and evaluation, and stop asking for data or reports that aren’t part of the conversation about continuous improvement and learning. Funders, too, can support field-building efforts to develop additional tools and resources in support of the measurement work nonprofits seek to accomplish.
There are a number of exemplary efforts already underway, including the Edna McConnell Clark Foundation’s PropelNext and the World Bank Group’s support of Measure4Change and the East of the River Initiative. Each of these efforts features targeted grants to build the measurement and evaluation capacity of participating nonprofits; access to technical assistance resources; and a community of practice to help grantees learn from each other, share successes and failures, and reduce what is all too often a sense of isolation among measurement and evaluation practitioners.
Photo Credit: Urban Institute