In today’s Social Velocity interview, I’m talking with Isaac Castillo, Director of Outcomes, Assessment, and Learning at Venture Philanthropy Partners, where he leads VPP’s approach to data collection, data reporting, and outcome measurement.
Prior to coming to VPP, Isaac served as the Deputy Director for the DC Promise Neighborhood Initiative (DCPNI). At DCPNI, Isaac led efforts to improve outcomes in the Kenilworth-Parkside community in Ward 7 of the District of Columbia through the strategic coordination of programmatic solutions and research-based strategies. Prior to his time at DCPNI, Isaac served as a Senior Research Scientist at Child Trends where he worked with nonprofits throughout the United States on the development and modification of performance management systems and evaluation designs. In addition, Isaac was also the Director of Learning and Evaluation for the Latin American Youth Center (LAYC) where he led the organization’s evaluation and performance management work.
You can read interviews with other social change leaders here.
Nell: You have spent your career using data to improve the performance of the nonprofits for which you worked. Why do you think performance management is so important for nonprofits? Do you think all nonprofits should pursue performance management? When does it make sense and when doesn’t it?
Isaac: I believe that every nonprofit should pursue some form of performance management because they owe it to the clients they serve. Most nonprofits assume that they are making a positive difference in people’s lives, but in the vast majority of cases they are just guessing. Using some form of performance management allows a nonprofit to test that assumption and to identify areas that can and should be improved so that the next cohort of participants gets better services than the last.
Unfortunately, one of the greatest challenges preventing a nonprofit from implementing some form of performance management isn’t a lack of resources, expertise, or time. It is fear. The fear that they will find out that their work isn’t having a positive effect. This fear is what nonprofit leaders need to overcome, not for the benefit of themselves or their organization, but because they owe it to the clients they serve today and the clients they will serve in the future. I believe that every nonprofit should strive to serve tomorrow’s clients better than today’s clients, and one of the only ways to ensure that this happens is the sustained use of performance management.
The type of performance management each nonprofit pursues should vary by the size and scope of its work. At a minimum, small nonprofits should be tracking basic demographic and attendance information on their participants, and hopefully at least one meaningful output or outcome. Whether this occurs in a computerized system or in a spiral paper notebook is up to the nonprofit. It doesn’t have to be costly, and it doesn’t take expertise. It only takes the will and desire to improve as a nonprofit.
Nell: In the nonprofits in which you’ve worked how have you been able to secure resources to fund performance management? What is the case you and your colleagues have made to funders and what do you think it will take to get more funders investing in performance management?
Isaac: Raising funding for performance management work usually takes a mix of several different strategies and approaches for potential and existing funders.
First, I strongly encourage nonprofits to include some percentage (1 to 5 percent – possibly more) of funding in each grant submission or proposal dedicated to supporting performance management and outcome measurement work. By placing this small percentage into each proposal, a nonprofit can begin to raise funds for internal evaluation and performance management activities. It may not seem like a lot, but it can add up, and eventually generate enough funds for a half-time or full-time position to support in-house performance management work.
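As a rough illustration of how a small set-aside can accumulate, here is a minimal sketch. The grant amounts, the 5 percent rate, and the salary figure are all hypothetical assumptions for the example, not figures from the interview:

```python
# Hypothetical illustration: a 5% evaluation set-aside across one year's grants.
# All dollar figures are invented for the example.
grants = [250_000, 100_000, 75_000, 400_000, 150_000]  # awarded grants ($)
set_aside_rate = 0.05  # 5% of each grant earmarked for performance management

evaluation_fund = sum(round(g * set_aside_rate) for g in grants)
print(f"Evaluation fund: ${evaluation_fund:,}")  # $48,750 on $975,000 of grants

half_time_cost = 35_000  # hypothetical loaded cost of a half-time data position
print(f"Covers a half-time position: {evaluation_fund >= half_time_cost}")
```

Even at these modest assumed rates, a year of consistent set-asides can fund dedicated performance management staff time.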
Second, I also strongly encourage nonprofits to engage in regular ‘funder education’ – where a nonprofit proactively meets with their funders to have ongoing conversations about outcome measurement and evaluation. This allows both the funder and the nonprofit to come to agreement on measurement expectations and to ensure that both groups are focused on the same concepts. I often suggest that the first of these types of meetings focuses on each group’s definitions of three commonly misunderstood terms: outputs, outcomes, and impact.
Finally, I would recommend that the nonprofit and funder have an honest discussion regarding expectations of results and the funding necessary to support the related evaluation work. If a funder is expecting a randomized controlled trial (RCT) to be completed to determine ‘impact,’ then the nonprofit should be willing to push the funder to make a large investment in a high-quality evaluation. If the funder is only willing to support a small amount for outcome measurement, then the nonprofit should clearly articulate what is possible.
Nell: Ken Berger and Caroline Fiennes recently argued that we may have gone too far by asking nonprofits to produce research about their own outcomes. What’s your response to that argument?
Isaac: I fully support Ken and Caroline in their argument that most nonprofits should stay away from trying to produce impact research. The desire for ‘impact’ is something that has been (and continues to be) pushed unfairly (and without financial support) by the funding community.
I honestly think a lot of confusion in this space comes from inconsistent use and understanding of the term ‘impact.’ The term has a precise definition among researchers but is often used in a much broader sense among funders, nonprofits, and the general public. In the research and evaluation world, impact describes the effectiveness of a program while eliminating as many potential confounding factors as possible. That is why randomized controlled trials (RCTs) are usually the cornerstone of impact research – RCTs are the easiest way to control for and eliminate confounding factors.
When most non-researchers use the term ‘impact,’ however, they are usually just asking whether the program or organization works and whether it is making a difference for its intended service population. That is a much lower bar, and yet it is a critical distinction in this discussion. If you are thinking about ‘impact’ as a researcher, you will need a large amount of resources and expertise to determine it, which usually means completing one or more formal evaluations. If you are thinking about ‘impact’ in the more general, less strict sense, then pursuing some form of performance management system will allow a nonprofit to determine if its efforts have been successful.
I do think every nonprofit should pursue some form of performance management to ensure that their work is having a positive effect, as a complement to existing research that others have done. Relying only on others’ research does not guarantee that a nonprofit will provide effective services and achieve positive outcomes. This type of research is like a recipe – it shows what has worked in the past and provides a guide for the nonprofit – but a recipe can still be ruined with poor implementation or planning.
Every nonprofit has an obligation to the people they serve (and not to their funders) to ensure that their programming is having a positive effect (or at the very least not causing harm). Without some form of performance management system in place (even one that just uses paper and pencil), a nonprofit will never know if they have strayed too far from the recipe provided by previous research.
I also think there are a growing number of very sophisticated nonprofits that should be using AND producing research on effective programs. Every year, I see more and more nonprofits hiring researchers dedicated to internal evaluation and outcome measurement work. These individuals are just as talented and unbiased as their colleagues working in traditional research and evaluation organizations. They can, and should, produce original research that can help inform the nonprofit field. The real challenge is finding the resources to support the hiring and retention of these individuals. Not every nonprofit will have the resources or capacity to hire one or more of them – but those that do should absolutely be trying to produce original outcome and impact research to provide ‘recipes’ for effective programming that nonprofits with fewer resources can use in the future.
Nell: Your former organization, the DC Promise Neighborhood Initiative, is part of the national Promise Neighborhoods initiative launched by the US Department of Education in 2010 and modeled after the famous Harlem Children’s Zone. How successful has this national replication of a successful local model been? Have you been able to replicate outcomes? And what hurdles, if any, have you and other replication sites found?
Isaac: I think that there has been some initial success among the Promise Neighborhoods. Part of the challenge that all the Promise Neighborhoods face is that the Harlem Children’s Zone did not achieve its success overnight. It has been working in Harlem for decades, so it would be unrealistic to believe that the Promise Neighborhoods would be able to create large-scale change in a matter of a few years.
However, there are signs of progress across all of the Promise Neighborhoods. Each of the Promise Neighborhoods started to address a few outcomes with the initial round of funding, and these outcomes varied. Some focused on math and reading proficiency for students, some focused on obtaining medical homes for young children, and others sought to increase the amount of healthy food consumed by residents. In DC, we focused on improving school attendance.
I do think that most of the 12 Promise Neighborhood Implementation grantees were able to make progress on the outcomes they identified as initial focus areas. However, the very nature of the work (creating community level change) doesn’t lend itself to the rapid accomplishment of multiple outcomes in a short period of time. Each of the Promise Neighborhoods had to prioritize certain outcomes for their respective communities, and only several years later are they able to claim success and begin to identify the next set of outcomes to be addressed. So while certain outcomes haven’t necessarily been replicated across all the Promise Neighborhoods, that is due to the differences in priorities and community conditions rather than any problem with the model itself.
Photo Credit: Venture Philanthropy Partners
There is an interesting report out today on the effectiveness of the Social Innovation Fund (SIF). Authored by the Social Innovation Research Center (SIRC), a nonpartisan nonprofit research organization, the new report details what has worked and what hasn’t in the six-year history of the SIF.
Launched by the Obama administration in 2009, the SIF — a program within the Corporation for National and Community Service — provides significant funding to foundations that follow a venture philanthropy model, regranting that money as growth capital, along with technical assistance, to evidence-based nonprofits working in the areas of “youth development, economic opportunity, and healthy futures.” In 2014, the SIF expanded its efforts to include a portfolio of Pay for Success (social impact bond) grantees.
Now, six years on, it is interesting to look back at what effect, if any, the SIF has had on the nonprofit sector. The question is especially pressing given that, as of right now, the House and Senate have both defunded the SIF in their respective funding bills.
To date, the SIF portfolio comprises $241 million in federal investments and $516 million in private matching funds, invested in 35 intermediary grantees and 189 subgrantee nonprofits working in 37 states and D.C.
The SIRC report focuses on the current progress of SIF grants made during the first three years of the program (2010-2012). The report finds two clear positive results for the SIF so far. The SIF has:
- Added to the nonprofit sector’s evidence base about which programs work, and
- Built the capacity of nonprofit subgrantees, especially in the areas of “performance management systems, evaluations, financial management, regulatory compliance systems, and experience with replicating evidence-based models.”
On the negative side, however, the report finds that the SIF placed real burdens on funders and nonprofits with its fundraising match requirements and federal regulatory requirements. The report also finds that the SIF has had little effect on the sector as a whole because it has not broadly communicated its learnings so far.
To me, of course, most interesting are the report’s findings about capacity building at nonprofit subgrantees. There is such a need for nonprofit capacity building in the sector, and it was a clear goal of the SIF.
The SIF is one of the few funders that do more than pay lip service to performance management by actually investing in building the capacity of nonprofits to do it. However, the SIF has been criticized for mostly selecting nonprofits that already had strong capacity. And indeed, the SIRC report finds that the SIF was most successful among those nonprofits that already had high capacity (in performance management, fundraising, etc.) prior to SIF funding. In fact, the report found that “poorly-resourced intermediaries working with less well-resourced community based organizations have been at a disadvantage.”
One SIF grantee in particular, The Foundation for a Healthy Kentucky, really struggled to build the capacity of subgrantees whose starting capacity was low. As they put it:
During the course of participation, it became clear that…[SIF] was really better suited for replicating existing programs or, at a minimum, investing in well-established programs that had some level of sophistication around organization systems and evaluation.
This mirrors earlier criticism of the SIF that it was set up to grow only those nonprofits that were already doing well, while those nonprofits that struggled with basic capacity issues were left out. The SIF has struggled to determine whether it is funding innovation (new solutions with limited capacity), or proven solutions (with a long track record and the corresponding capacity). It seems the two are mutually exclusive.
What the SIF is trying to do is such tricky business. To identify, fund, and scale solutions that work is really the holy grail of the social change sector. Certainly there are hurdles and missteps, but I think it’s exciting when government gets in the social change game in a big way. Six years is really too soon to tell. So I hope that this brief SIF experiment is allowed to continue, and we can see what a social change public/private partnership of this scale can really do.
To read the full SIRC report go here.
Photo Credit: Obama signs the Serve America Act in 2009, Corporation for National and Community Service
This spring I have been trumpeting the Performance Imperative, a detailed definition of a high-performing nonprofit released by the Leap Ambassador community in March. Today I continue the ongoing blog series describing each of the 7 Pillars of the Performance Imperative with Pillar #2: Disciplined, People-Focused Management.
With this second Pillar, the Performance Imperative makes a clear distinction between “leaders” in Pillar 1 and “managers” in Pillar 2. There is a note in the Performance Imperative that “leaders” and “managers” are typically two separate people in nonprofits with budgets over $1 million. So this distinction, and perhaps this Pillar, might seem to apply only to larger nonprofits.
But I think it actually applies to nonprofits of any size. In any nonprofit there are leadership tasks (creating the vision, being the cheerleader, marshaling resources) and there are management tasks (making sure the trains run on time, putting each resource to its highest and best use). In smaller organizations both sets of tasks fall to the same person, yet both still need to be performed well. So it behooves a nonprofit of any size to analyze whether it is BOTH leading and managing well.
Effective managers put an organization’s resources to their highest and best use. They recruit, train, and retain the right talent; they use data to make good decisions; they manage to performance; and they are accountable.
You can read a larger description of Pillar 2 in the Performance Imperative, but here are some of the characteristics of a nonprofit that exhibits Disciplined, People-Focused Management:
- Managers translate leaders’ drive for excellence into clear workplans and incentives to carry out the work effectively and efficiently.
- Managers…recruit, develop, engage, and retain the talent necessary to deliver on the mission.
- Managers provide opportunities for staff to see…how each person’s work contributes to the desired results.
- Managers establish accountability systems that provide clarity at each level of the organization about the standards for success and yet provide room for staff to be creative about how they achieve these standards.
- Managers acknowledge when staff members are not doing their work well…managers are not afraid to make tough personnel decisions so that the organization can live up to the promises it makes.
The Center for Employment Opportunities (CEO) is an example of how strong management is necessary to create a culture of high performance. CEO employs people entering parole in New York State in transitional jobs at government facilities while helping them access better paying, unsubsidized employment. CEO’s Chief Operating Officer, Brad Dudding, described to me how CEO management created, over the past 10 years, a culture and system of high performance.
Here is his story:
In the early years, CEO focused program performance on meeting individual contract milestones, not a set of unified organizational outcomes. The organization was proficient in collecting data and reporting it to funders, but did not use data to track participant progress, make course corrections, or manage to short-term outcomes.
In 2004 the Edna McConnell Clark Foundation provided CEO with a multi-year capital investment to:
- Create a theory of change as a blueprint for program intervention and outcomes measurement.
- Develop a performance measurement system to track progress toward those outcomes.
- Nurture a performance culture that uses data to understand program progress, build knowledge and correct performance gaps.
First, CEO management had to agree on a theory of change and the specific outcomes for which the organization would hold itself accountable. Next, management shared the theory of change with staff and demonstrated how each staff member contributed to its achievement through an all staff event, follow-up trainings and consistent messaging that the organization was entering an exciting period of change. CEO then adopted a new performance measurement system to reinforce the theory of change.
But reorienting the organization was not easy. Not everyone was ready to embrace a new culture of performance accountability and data tracking. CEO management was initially surprised by staff resistance and responded impatiently with compliance measures. Looking back, not enough time was invested in staff training and in promoting the value proposition of the changes. At times it was an enormous effort to get front line staff to track and use data every day to ensure participant goals were being met.
But the tipping point came when CEO promoted early adopters of the data system to management positions. These new managers were comfortable operating in a data-driven environment and holding others accountable to use data to track program participants’ progress. Once there was a group of strong managers in place, CEO’s performance culture started to take hold and program outcomes improved.
By 2010, CEO was managing to annual performance targets and short-term outcomes through staff’s real-time documentation and data analysis.
In 2012, the results of a three-year randomized controlled trial showed that CEO’s program resulted in a reduction in recidivism of 16-22%. But the evaluation also uncovered a need to improve CEO’s strategies for advancing long-term employment and for connecting individuals to the full-time labor market. In response, CEO created a job retention unit and developed innovative job retention strategies, including training programs and financial incentives for participants.
In 2013, CEO entered the New York State Social Impact Bond, the first such state-sponsored transaction, through which CEO will serve 2,000 high-risk parolees in New York City and Rochester between 2014 and 2018. If CEO hits benchmarks and reduces the use of prison and jail beds by program participants, investors will be repaid their principal and will receive a return of up to 12.5% from the U.S. Department of Labor and New York State.
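The repayment logic described above can be sketched as a simple function. The all-or-nothing benchmark rule here is a simplifying assumption for illustration; the actual New York State contract terms are more detailed:

```python
# Hypothetical sketch of a social impact bond payout, loosely modeled on the
# structure described above: investors recoup their principal plus up to a
# 12.5% return only if outcome benchmarks are met. Terms are invented.
def sib_payout(principal: float, benchmark_met: bool, return_rate: float = 0.125) -> float:
    """Return the total repayment owed to investors.

    If the benchmark is missed, investors receive nothing back (a simplifying
    assumption; real deals often have tiered or partial repayment schedules).
    """
    if not benchmark_met:
        return 0.0
    return principal * (1 + return_rate)

print(sib_payout(1_000_000, benchmark_met=True))   # 1125000.0
print(sib_payout(1_000_000, benchmark_met=False))  # 0.0
```

The point of the structure is that outcome risk shifts to investors: government pays only when the program demonstrably reduces prison and jail bed use.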
The tenets of a performance-based culture — supportive leadership, disciplined managers, goal setting, data collection and analysis to track and improve outcomes — are now fully accepted by CEO staff and reinforced by management. CEO now has a highly developed system of tactical performance management, which allows the organization to know on a daily basis whether it is delivering on its promise to its participants.
Photo Credit: Australian Paralympic Committee
In today’s Social Velocity interview I’m talking with Mary Kopczynski Winkler, senior research associate with the Center on Nonprofits and Philanthropy at the Urban Institute. Mary is a nationally recognized expert in the field of performance measurement and management. She is a founding member of the Leap of Reason Ambassadors Community, a private community of nonprofit thought leaders and practitioners committed to increasing the expectation and adoption of high performance in the social sector, which released the Performance Imperative earlier this year.
You can read past interviews in the Social Velocity interview series here.
Nell: PerformWell is an effort among Urban Institute, Child Trends and Social Solutions to offer tools and strategies for human services nonprofits to measure their work. How successful has this effort been and what are your plans for continuing to grow the capacity of nonprofits to measure their work?
Mary: PerformWell is a free, interactive, web-based resource designed to help human services nonprofits gain knowledge about performance management, access the tools and resources they need to better serve clients and meet outcomes, and obtain strategies for effective, efficient service delivery. Since its launch in March 2012, demand for PerformWell has exceeded our expectations: more than 400,000 visitors (from all 50 states and more than 200 countries) have come to the site, 25,000 individuals have registered for our webinars, and more than 140,000 assessment tools have been downloaded. Webinar survey results are routinely high, but we are working to put additional systems in place to track how nonprofits are using various aspects of PerformWell and to what end.
In 2013, the PerformWell partners engaged in a business planning process with Root Cause. Market research confirmed our views about a large unmet need for performance measurement knowledge and high interest in the resources offered through PerformWell, but it also showed that additional products and services are desired, such as a webinar training series, regional user conferences, and customized engagements with nonprofits. Users also wanted a more interactive web experience.
Our short- to medium-term goals include: substantial updates to the website to improve the user experience (we also plan to solicit user feedback during and after these changes are implemented); development of additional products and services better aligned with the feedback from Root Cause’s market research; and exploration of partnerships and sponsorships with nonprofits, consultants, and funders to generate additional revenue and resources. The aim is to expand the content, reach, and use of PerformWell and to improve the adoption and application of performance measurement and management practice across the nonprofit sector.
Nell: Some believe that measurement is perhaps more straightforward for human services nonprofits — you can measure change to an individual’s behavior or life circumstances — but measurement is more difficult for arts organizations or advocacy groups. What are your thoughts on that?
Mary: Sometimes I think this argument serves as a convenient excuse for organizations to avoid putting even the most basic systems in place to track progress or otherwise hold themselves accountable to their constituents. In 2007, with support from the Hewlett Foundation, the Urban Institute and the Center for What Works, we published a series of simple frameworks, as part of our Outcome Indicators Project, to help nonprofits in 14 program areas engage in performance measurement. Two of these areas are advocacy and performing arts. The Urban Institute also provided research support to the Performing Arts Research Coalition (PARC) to develop standardized surveys to help performing arts organizations across the country obtain more routine and better data from audience members, subscribers, and the community.
Establishing a causal link between advocacy or arts interventions and impact is, in my view, more challenging than for human service organizations. In the case of advocacy organizations, it can be very difficult to isolate the contributions of a particular campaign or even organization to a policy or legislative outcome.
It is, however, possible to devise strategies for capturing information on earlier stage outcomes, such as increased awareness.
I recently participated in a panel at the annual OPERA America conference – on “internal metrics for civic impact.” As much as measurement activities have evolved since the days of the PARC coalition, I observed that most of the metrics and data points were still very internally focused on measures of participation and attendance and fall well short of anything approximating community or civic impact. I encouraged those present to consider stepping away from a focus on an individual opera company’s contribution to civic impact, and recommended instead more of a collective impact approach in collaboration with other arts, civic, and education organizations in a community.
In this case, I even hesitated to use the word “impact,” and suggested the group consider distinguishing between collective contribution toward a modest set of civic outcomes (e.g., performing arts promote understanding of other cultures or are a source of pride for those in the community) and the more traditional causal attribution usually reserved for the term “impact.”
Nell: Caroline Fiennes, among others, has argued that individual nonprofits should actually do less evaluation and rather rely on larger research studies to prove their theories of change. What do you make of that argument and the difference between evaluation and measurement?
Mary: I agree with some of what Caroline puts forth here – particularly her observations about “withholding (unflattering research) and publication bias” – an issue that University of Wisconsin-Madison professor Donald Moynihan has termed “performance perversity.” I also agree both with her suggestion that evaluations be done by a third-party to reduce any tendencies toward subjective reporting or bias and her endorsement of a greater consideration of shared metrics.
I am troubled, however, by her finding that only 7% of UK social-purpose organizations are interested in improving services, and by her somewhat cavalier suggestion that monitoring and evaluation “wastes time and money.” Although she is not alone in this second argument (see, for example, Bill Schambra’s “take-down” of Charity Navigator’s efforts to encourage greater use of performance metrics in “Charity Navigator 3.0: The Empirical Empire’s Death Star?”), such sweeping generalizations undermine the legitimate and courageous attempts of many nonprofits to use data for program improvement.
I agree with Phil Buchanan in that there is a “moral imperative” to make an honest attempt to understand if resources are being used effectively and certainly to guard against the possibility that programs could be doing more harm than good as organizations like Latin American Youth Center and Harlem Children’s Zone have discovered and since corrected.
I see measurement as a necessary practice for every nonprofit. But measurement is different from evaluation. Nonprofits need to start by developing a measurement infrastructure that makes sense for their organization – one that supports their mission and commitment to serve and improve the lives of their clients or constituents – not one that is reactionary and responsive to funders. It is precisely this kind of infrastructure that can lay the groundwork for a more rigorous evaluation, at a time that is right and appropriate for the organization’s stage in development.
I see measurement and evaluation along a continuum of inquiry that should be designed to support the learning objectives of an organization. Measurement helps organizations to take the day-to-day or month-to-month pulse of various activities and program results – these snapshots in time or scorecards help managers and service providers understand trends and provide an opportunity to correct, modify or otherwise adapt operations.
Evaluation is, by definition, more rigorous, more expensive, and takes considerably more time to see results. Evaluation serves a very important role as organizations make decisions about whether to continue, grow, scale or otherwise expand services, but it needs to occur at the right time – and certainly not as an organization is just getting off the ground.
Nell: It is difficult for most nonprofits to find funding for measurement work. For example, in the most recent Nonprofit Finance Fund State of the Sector survey, 69% of nonprofit respondents said their funders rarely or never cover the costs of measurement. How do we change that, or can we?
Mary: I am sympathetic to this argument, and I argue frequently that foundations have a unique and critical role to play in building the capacity of nonprofits to engage in measurement and evaluation. But I think we need to change the conversation to one that focuses on the shared responsibility between nonprofits and funders for making the necessary investments in measurement and evaluation.
If nonprofits are truly ready to embrace a culture of measurement and high performance, then they need to reorganize operations in ways that embed measurement practice at every level of the organization, and change expectations from front-line workers all the way to the board of directors.
This means things like: defining expectations about data collection in job descriptions; setting aside a small percentage of funding for evaluation as a line-item in every grant request; and using data in meaningful ways in everyday discourse. Likewise, funders need to work more collaboratively with grantees to understand the data needs and capacity of nonprofits, consider funding longer-term grants that build in support for measurement and evaluation, and stop asking for data or reports that aren’t part of the conversation about continuous improvement and learning. Funders, too, can support field-building efforts to develop additional tools and resources in support of the measurement work nonprofits seek to accomplish.
There are a number of exemplary efforts already underway, including the Edna McConnell Clark Foundation's PropelNext and the World Bank Group's support of Measure4Change and the East of the River Initiative. Each of these efforts features: targeted grants to build the measurement and evaluation capacity of participating nonprofits; access to technical assistance resources; and a community of practice to help grantees learn from each other, share successes and failures, and reduce what is all too often a sense of isolation among measurement and evaluation practitioners.
Photo Credit: Urban Institute
In today’s Social Velocity interview I’m talking with Daniel Stid, Senior Fellow at the William and Flora Hewlett Foundation. Daniel serves as an advisor to Foundation President Larry Kramer, leading the exploration of a potential Foundation initiative to support and improve the health of democracy in the US. Before joining the Foundation, Daniel was a longtime consultant and strategist to governments, nonprofits, and for-profit organizations, including as a partner in The Bridgespan Group’s San Francisco office, where he co-led the organization’s performance measurement practice.
You can read past interviews in the Social Innovation Interview Series here.
Nell: You moderated a panel at the recent After the Leap conference about government and performance management. Government has a long history in the outcomes space, but there was some controversy at the conference about whether government can really lead this new movement. What role should government play in this new push toward nonprofit performance management?
Daniel: Yes, my Twitter feed was blowing up during that session with people adamantly saying that government couldn’t lead this push, it had to be nonprofits! To my mind this controversy misses the point. It presumes a hierarchy – that leadership is lodged in one place, and that it is exercised in one direction. The fact is that if we are going to make this “leap” happen, we need distributed leadership in multiple places: in government agencies, in operating nonprofits, in foundations, among researchers and program developers.
A great example is the Teen Pregnancy Prevention program administered by the Office of Adolescent Health in the federal Department of Health and Human Services, the implementation of which I recently wrote about with some former colleagues at The Bridgespan Group. The Office of Adolescent Health administrators demonstrated leadership in conceiving and developing a bold and thoughtful program; the researchers and purveyors involved demonstrated leadership in creating evidence-based solutions and effectively supporting their implementation; and front-line agencies demonstrated leadership in implementing these interventions with fidelity. What makes this program so compelling is that it has been animated by multiple forms of leadership that are networked and reinforcing each other across sector lines. I believe this same pattern occurs in most other situations where social change is happening at a large scale.
Nell: Your charge as a senior fellow at the Hewlett Foundation is to help explore how the foundation can “support and improve the health of democracy in the United States.” There have been some criticisms lately that philanthropy has moved away from supporting democracy and instead sometimes enhances wealth inequality. What are your thoughts?
Daniel: Insofar as this occurs, I believe this is an inadvertent effect from the standpoint of individual donors. Most people want to give to something they can point to and/or that they can have affiliation with – hence the contributions of many donors to hospitals and arts organizations and universities, or to the schools that their children attend. This is straightforward and understandable. You can readily see and appreciate and be associated with what you are getting for your contributions. And it is philanthropy. We shouldn’t presume that all philanthropy can or should be geared toward reducing inequality. That is not the point of philanthropy in a free society. (Now whether all philanthropy needs to be and should be subsidized by the tax code is another question; I am on the record as saying it is high time to revisit the charitable deduction.)
The kinds of interventions that stand a chance of alleviating inequality – e.g., support for high-quality early education, or effective teen pregnancy prevention – entail large-scale systems change and diffuse and uncertain impact for people typically living in very different communities from the philanthropists who are in a financial position to support them. They are for that reason a riskier philanthropic proposition. But many individual donors and foundations are making these investments anyway, and I bet we will see more of them do so as the evidence base supporting solutions to inequality continues to be solidified.
Nell: Moving nonprofits to a performance management system will be costly. Do you think government can and should foot that bill, or can philanthropy? How do we create and fund the infrastructure necessary for this movement to truly succeed?
Daniel: Really good question! I don’t think that we can count on government to do it – for all of government’s resources relative to those of philanthropy, it is extremely rare that a government program will have the political and policy degrees of freedom, let alone the budget, to invest in nonprofit capacity in any sustained way. And the age of austerity we are in will only worsen this shortfall. To me this is a critical role for philanthropy to play. Just a portion of the billions that philanthropy puts to work in the service of education, health and human services, youth development, etc. could help assess and put to much better use the hundreds of billions that federal, state, and local governments spend across these areas.
Typically foundations see their role as scaling up initiatives that government can then “take out” and fund directly, freeing up the foundations to move on and fund their next ventures. Foundations should stay engaged rather than moving on and, by investing in the infrastructure and measurement capacity that government cannot pay for, help society get the most out of the far greater levels of government spending. Rather than seeking to “leverage” other foundations, to use some jargon, foundations should in effect be seeking to “leverage” government funding by increasing its impact.
Nell: Should every nonprofit work towards articulating and measuring outcomes, or does it primarily apply only to social service and education nonprofits? Is there a way for arts and cultural organizations, for example, to move toward outcomes management?
Daniel: I think every enterprise – whether it be a profit-seeking business, a government agency, or a nonprofit, whether it is producing cars and trucks, health and human services, or arts and culture — should seek to get better at what it does. I found Jim Collins very persuasive on this point in his “Good to Great in the Social Sector.”
The desire to improve, to get better at things, is woven into the human psyche, and when this desire is given full expression, by individuals and the organizations they work in, so is our humanity. Whether this quest involves “outcomes” and “measurement” as we conventionally define them depends on context. It may well involve tracking audience surveys and visitor numbers and assessments by informed critics. But it may also involve a troupe rehearsing until it feels it finally has its performance nailed, or a museum director continuing to refine interpretive material that she thinks visitors are struggling to understand. Those behaviors reflect a relentless quest for outcomes in their own right. At the end of the day, performance measures are merely proxies to help us assess our progress toward what we are really working for: an underlying excellence. The excellence itself is the point.
Photo Credit: Hewlett Foundation