In today’s Social Velocity interview I’m very excited to be talking with the co-founders and editors of the new History of Philanthropy blog: Benjamin Soskis, Stanley Katz, and Maribel Morey.
The HistPhil blog launched this past June and focuses on how history can shed light on current philanthropic issues and practice.
Because how can we hope to create social change without understanding the results of efforts that came before us?
Ben, Stanley, and Maribel are all academics with specialties related to history and philanthropy. Stanley is on faculty at Princeton’s Woodrow Wilson School and has also taught at Harvard, Wisconsin and Chicago. Benjamin is a Fellow at the Center for Nonprofit Management, Philanthropy and Policy at George Mason University and a consultant for the history of philanthropy program of the Open Philanthropy Project. And Maribel is a professor of history at Clemson University and is currently writing a book, From Tuskegee to Myrdal, which describes how and why white Americans in big philanthropy transformed from proponents of segregated education to advocates of racial equality.
Nell: Stanley, you write, in your inaugural post for the HistPhil blog, about the tendency of philanthropy to get swept up in “new” approaches that actually aren’t all that new. Is there really anything new in philanthropy right now? Are there any structural or cultural developments or approaches in philanthropy that are significantly different than in the past?
Stanley: It is hard to separate rhetoric from reality in the current environment of philanthropic hype. From my perspective, the current boasting that all is new in philanthropy (see the recent New York Times “Giving” section) is pretty uninformed (naïve?).
One of the most common claims, repeated frequently in the New York Times piece, is that philanthropists are no longer simply trying to alleviate the “symptoms” of distress, but in fact are aiming to remove the underlying causes of social and physical problems. This attempts to distinguish what the large foundations are doing from what the traditional foundations did in the 20th century (and of course no one is making this claim more loudly than Judith Rodin of the “new” Rockefeller Foundation.)
But the emphasis on the elimination of problems by identifying their root causes was the innovative claim of the founders of the first American foundations, best articulated by Andrew Carnegie and John D. Rockefeller, Sr. So from this point of view there is not much new in the current aims of big philanthropy.
But what is actually new, and there is a lot that is new, is the determined focus on short-term, measurable results — this is the mantra of the genuinely new “strategic” philanthropy. The older foundations of course aimed to be effective, but they defined effectiveness much more loosely and measured it less precisely than current large foundations. This is an enormously important attribute of the current mega-foundations, and all the other foundations that have jumped on the “strategic philanthropy” bandwagon.
The current foundation rhetoric also makes use of a wide range of business metaphors, none more important than the notion that philanthropy is best thought of as “investment” in change, and frequently characterized, using the language of hedge funds, as “bets” on successfully producing change. Much of the current language of philanthropy is drawn from venture capital activity, and the new philanthropy can also be thought of as “venture” philanthropy. This is a new attitude.
The original philanthropists knew they were adapting the then modern techniques of business organization and management to their grantmaking, but they thought of philanthropy as different from business. That distinction seems to have eluded much of the current generation of philanthropists.
But I need to say that I am a little uncomfortable with these large generalizations, since not all current philanthropists speak or act as I have just suggested — nor did the earliest generation of philanthropists. But there is something new in the philanthropic air. The question is whether that air is as salubrious as its current advocates claim.
Nell: Stanley, philanthropy got its modern day start in the missionary work of Europeans and Americans in third world countries. What, if any, parallels do you see in philanthropic work in developing parts of the world today?
Stanley: Here the important fact is that the Rockefellers (John D. Sr. and Jr.) originally intended the Rockefeller Foundation to be a missionary foundation, operating mostly (possibly entirely) in China. For a variety of reasons, in particular the influence of their advisor Frederick T. Gates (a minister who had turned in a secular direction), they abandoned the missionary focus in favor of a secular focus. Their work in China, and especially the founding and support of the Peking Union Medical School, continued to have a missionary flavor, but their work in Africa and other tropical areas was more early medical philanthropy than missionary philanthropy. They turned to the eradication of tropical diseases both because those diseases were well matched to the medical research capacity of the day, and because it was politically safe to engage in medical experimentation abroad — a lesson that Big Pharma learned from them later in the century.
But the emphasis of the large foundations, beginning in the 1960s, with grant-making in the underdeveloped world, was quite different, and unrelated to any neo-missionary instinct. Many of the large American foundations at mid-century thought they could assist the process of decolonization and local self-determination by supporting a wide range of development activities in what was then called the Third World. They later came to be attacked by neo-Marxists for allegedly supporting US and Western imperialism in the developing world, but that is a big subject all in itself.
Ironically, there is now a burgeoning effort by American evangelical business people to invest in private development projects, especially in East Africa, and this is a throw-back of sorts to much earlier notions of philanthropic support of development. But it needs to be contrasted with the massive Gates Foundation public health efforts in Africa and elsewhere — an effort purely “strategic” in its inspiration.
Nell: Ben, historically, philanthropic giving has not grown above 2% of US GDP. Why do you think that is, and do you think there is any hope of changing that?
Ben: The answer to the 2% conundrum is the holy grail of the nonprofit sector, and I don’t pretend to have any certain answer about it myself. It’s worth noting, though, that 2% of GDP is still pretty good relative to other developed countries (in fact, by many measures, it’s one of the best rates). But it’s still confounding why it hasn’t budged for more than four decades. There’s obviously a tangle of causal factors at play, and I’ll just offer a few possibilities that have occurred to me in the course of my research, without making any claims that this is an exhaustive list.
Given the persistence of that rate, it makes sense to look for some equally persistent characteristic of the American nonprofit sector that has also remained unchanged over that long timespan. A recent article in the Chronicle of Philanthropy can give us a clue to a possible candidate. As part of their Philanthropy 400 ranking of the nation’s largest nonprofits, they note how little the list has changed from when it was first tallied in 1991 (especially when compared with the churning of the list of the largest for-profit companies). In part by dint of habit, and in part because of the power of the institutions’ “brands,” Americans have tended to stick with a handful of large charities—through scandals, evolving social needs and changing fads.
As I pointed out to the Chronicle reporter (though my observations got a bit lost in translation; Josephine Shaw Lowell, a founder of the American charity organization movement, wouldn’t have suggested that bigger is better, only that a degree of centralization in charity administration was necessary), we can trace this development back to the turn of the last century, when charity reformers instituted a process of centralized, bureaucratized and professionalized giving. That is, from the late-19th-century scientific charity movement onward, individuals were warned that their disparate giving was too often haphazard, scattered, wasteful, and overlapping, and so were encouraged to hand over the administration of charitable resources to a centralized institution. The community chests and the United Way came out of this impulse; Catholic Charities succumbed to it as well.
It’s very possible that the development toward more centralization and professional administration has bolstered American giving by providing citizens with more confidence and by making decisions about where to give easier. But I think we also have to wonder whether it imposed a sort of cap as well, since it might have removed some of the immediacy, intimacy and individuality from the charitable exchange that could push individuals to give beyond an initial comfort point (which very well might be around 2%).
The Chronicle suggests that we might see more disruption in the list in the coming years, or at least that some of the big names, like the United Way, might be ceding ground. If that is the case, and if some of the space they occupied is filled with smaller upstarts, it’s possible we might see some movement beyond 2%.
Another possible factor worth considering for the persistence of the 2% rate is the declining role of religion in determining charitable allocations. I don’t only mean that the percentage of total giving going to religious institutions has been steadily declining over the last few decades, but also that giving itself has, for many Americans, become an increasingly secular activity.
Again, we can trace this back to the early 20th century, when charity reformers sought to “secularize” giving by stripping it of any sectarian taint and endowing it with a degree of rationality; the indiscriminate giver in their rhetoric was often an easily duped priest. But it is also possible that the religious impulse to give is more easily able to push past the equilibrium of 2% and to ask individuals to make even deeper financial commitments.
Yet another factor preventing giving from crossing that 2% barrier might be media coverage of nonprofits. As I quipped in an article on the subject in the Chronicle last March, borrowing from Woody Allen, the coverage is generally pretty weak—and the portions are too small. That is, the media grants the sector relatively little attention, and when it does, it seems to suffer from what New York Times reporter David Cay Johnston has called a “Madonna-whore” complex: alternating between feel-good human interest stories and stories focused on nonprofit abuse. Stories that chronicle the difficult and important work many nonprofits are doing on a daily basis just don’t have the journalistic juice to make it into print. As the former nonprofit beat reporter for The New York Times, Stephanie Strom, told me, “A nonprofit just doing good isn’t news because everyone knows nonprofits are supposed to do good.” This might be changing, with a number of important online journalistic ventures out there, but I think there is a deep deficit in public knowledge about what nonprofits are doing—and this deficit could sap the public’s willingness to give more.
You also have to combine this media deficiency with the general conceptual muddle that has emerged with the blurring of private and public lines of funding social welfare provision in the last half century. Not only do American givers and taxpayers have to contend with a federated system (to say nothing of international structures of governance), in which various jurisdictions take up differing responsibilities for addressing social ills and needs, but we also inhabit what political scientist Jacob Hacker has termed a “divided welfare state,” in which public and private lines of responsibility for social welfare are increasingly blurred. Obviously, there’s opportunity in this blurring. But as scholars such as Lester Salamon have pointed out, it also can represent a sort of existential threat to the nonprofit sector’s distinctive identity and mission, which in turn might be restricting Americans’ willingness to dig in and give more.
Finally, it’s worth pointing out another powerful strain in the American charitable tradition—the devaluation of monetary gifts themselves in favor of the “helping hand.” At the turn of the last century, even while scientific charity reformers were attempting to rationalize giving, they were also trying to preserve traditions of neighborly assistance. The fact that such assistance could not be easily quantified and rationally appraised was regarded as a mark of its worth. And in many senses, it was considered a higher form of giving than monetary contributions. That idea is still with us today; and it’s possible that by focusing too much on the 2% rate, we miss other forms of voluntarism that have had more variability and elasticity over the years.
Nell: Maribel, during the Gilded Age great wealth concentrated among a few brought large philanthropy (Carnegie, Rockefeller, etc.) but also contributed to a subsequent progressive period (as the pendulum swung back against that excessive wealth). Do you see parallels between the Gilded Age and today, and do you think we are heading for a more progressive period? And what role do you think philanthropy will or won’t play in that?
Maribel: Indeed, many late nineteenth- and early-twentieth century Americans looked at Andrew Carnegie’s and John D. Rockefeller’s wealth (and even their philanthropy) with some suspicion.
Reflecting these Americans’ anxieties, for example, the United States Commission on Industrial Relations called John D. Rockefeller Sr. and his son in 1915 to defend the independence of the Rockefeller Foundation. As many scholars have noted, the Rockefellers had established a division of economic research in 1914 within the one-year-old foundation; and a few months later, the Ludlow massacre occurred at the Rockefellers’ Colorado Fuel and Iron Company, where women and children died when the state militia assaulted the strikers’ tent camp.
In response, the organization decided to organize a study on industrial relations under this new division and selected a close working friend of John D. Rockefeller Jr. (William Lyon Mackenzie King) to direct it. From the perspective of the American public, it was hardly easy to trust that Gilded Age tycoons who had undermined the rights of workers in the process of accumulating their wealth would have the interests of the people in mind when they funded social scientific projects to study the American populace. From this perspective, the Rockefeller Foundation was the playpen of industrialists who had defined interests in society, and their policy-oriented social scientific research would be—far from disinterested—an extension of those interests.
And far from ignorant of Americans’ suspicions about Gilded Age levels of wealth, Andrew Carnegie himself addressed them head-on in The Gospel of Wealth (1889). Aware that Americans might find socialism an attractive alternative to capitalism, for example, he pitched philanthropy as the better form of wealth redistribution.
Today as then, Americans are confronting and discussing the great influence of leading philanthropists in public policymaking and, more broadly, wealth inequality. However, I am not convinced that we are necessarily heading for a more progressive period.
I say this because I don’t see contemporary Americans expressing the same level of angst about elite philanthropy or about the broader topic of wealth concentration. Congress isn’t questioning leading philanthropists as it did the Rockefellers in the early twentieth century, nor do leading philanthropists seem threatened by Americans’ potential voting patterns, as Carnegie had been.
One key explanation might be that these earlier Americans entertained a vastly different meaning of American democracy than their successors today. For them, American democracy promised economic opportunity (or rather, freedom from class divisions) and an equal voice over public concerns. Today, it seems that the general American public and their representatives in Congress aren’t as convinced of this definition of American democracy. With a narrower understanding of American democracy, it might simply be more difficult for contemporaries to see how wealth inequality and elite philanthropy in public policymaking are democratic threats.
Philanthropies committed to resurrecting a more progressive period might just need to focus on ways to revive this earlier (dare I say, more robust) definition of American democracy and help empower Americans to fight for it.
Photo Credit: HistPhil
I guess I am on a case study kick this week. I do think that actual examples of the paths other nonprofits followed in order to become more effective or more sustainable can be really helpful to other nonprofit leaders in the trenches. So in that spirit, I offer a case study of a small, startup nonprofit ready to grow their impact and their sustainability.
The thing I love about my job the most is that I get to work one-on-one with super smart people who are coming up with innovative solutions to making the world a better place. In particular, lately I’ve been lucky enough to work with some groups in the civic technology space, a really exciting emerging area where innovative technology solutions are used to make government, and ultimately democracy, more effective.
One of these groups, The Engaging News Project (ENP) is a startup nonprofit aimed at helping news organizations better meet their democratic and business goals in a digital age.
While ENP enjoyed success and the support of some key funders over the past two years, they were ready to move from the project phase to an established organization with sustainable funding and a long-term strategy for achieving impact on the digital news industry.
So ENP hired me to lead their strategic planning effort. With my guidance, ENP created an advisory group of staff and key stakeholders. I led the group to analyze the external environment in which ENP operates, develop their theory of change, define the audiences they want to target, and articulate the organization’s goals and objectives, with corresponding financial projections, for the next 3 years. I also helped staff create a year 1 operational plan to help execute and monitor the strategic plan.
The end result was a clear 3-year strategic plan with accompanying financial model and an engaged and excited staff and group of advisors.
Because of their new strategic plan, ENP has focused their project development efforts, clearly defined where and with whom they want to work, and detailed their goals for the next 3 years.
They are now working to implement the strategic plan. They are identifying new funders to help support the growth of the organization, expanding their collaborative partners, creating a formal advisory board, and streamlining operations. ENP staff are excited about the new direction and are actively working to have a greater impact on the future of digital news.
As Talia Stroud, Director of the Engaging News Project put it,
As a new entity, we had been doing more of the day-to-day work and hadn’t taken the time to think about the bigger picture of where the Engaging News Project was headed and how to get there. Social Velocity helped us to chart a future direction, hone our messaging, and develop a clear plan for our organization. By working with us to figure out our targets, potential collaborators, and goals, Social Velocity helped us to systematically figure out a strong path forward. I can’t wait to see what we’ll be able to accomplish with these plans in place.
I’m excited to see where the Engaging News Project goes from here and the growing impact they will have on our democracy.
Photo Credit: Engaging News Project
This year on the blog I have been highlighting the Performance Imperative, a detailed definition of a high-performing nonprofit released by the Leap Ambassador community (of which I am a member) in March. Today I continue the ongoing blog series describing each of the 7 Pillars of the Performance Imperative with Pillar 3: Well-Designed and Implemented Programs and Strategies.
You can also read about Pillar 1: Courageous, Adaptive Leadership, and Pillar 2: Disciplined, People-Focused Nonprofit Management.
Pillar 3 describes being crystal clear about what your nonprofit exists to do, how you fit into the external environment, and how you develop and execute smart programs that result in your desired social change. This Pillar is essentially about creating and executing a Theory of Change.
The most important part, in my mind, of Pillar 3 is encouraging nonprofits to define the target population(s) they aim to serve. I have seen too many nonprofit organizations so focused on doing good that they don’t define who they are best positioned to serve and how that relates to who else may be serving them. Nonprofits must get clear about their place amid other services and interventions and, very specifically, who they are hoping to benefit or influence.
As always, you can read a larger description of Pillar 3 in the Performance Imperative (and I strongly encourage you to do so), but, in summary, a nonprofit that exhibits Well-Designed and Implemented Programs and Strategies:
- Is clear on the target population they serve.
- Bases the design of their programs on evidence-informed assumptions about how the organization’s activities can lead to the desired change (a “theory of change”).
- Designs programs with careful attention to the larger ecosystem in which they operate.
- Implements their programs in a consistently high-quality manner and views collecting and using data as part of implementing high-quality programs.
- Guards against the temptation to veer off course in search of numbers that look good in marketing or funder materials.
Because I think case studies are so critical to understanding what high performance really looks like in a nonprofit, I asked Sam Cobbs, CEO of First Place for Youth, to explain how he led his organization to become a national model for helping foster kids to thrive.
Here is his story:
First Place went through an intensive theory of change process in 2008, in which we explored what impact we wanted to make with youth and what type of activities and interactions it would take to achieve that impact. In addition, because the activities and interactions needed to be intensive (and therefore costly), we made the decision to focus our services on the most vulnerable youth. This was measured by how at risk a youth was, using a risk assessment scale that took into account, among other factors:
- number of foster care placements
- years or days of homelessness
- job history
- education level, and
- the number and quality of support systems, including positive adult role models.
Based on these criteria, youth who had a higher risk factor score were given priority over youth with lower scores.
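The weighted-scoring-and-prioritization approach Sam describes could be sketched roughly as follows. To be clear, the field names, weights, and factor list here are illustrative assumptions for the sake of the example; First Place’s actual risk assessment instrument is not public.

```python
# Hypothetical sketch of a risk-prioritization scheme like the one described.
# The weights and field names are illustrative, not First Place's actual scale.

def risk_score(youth):
    """Return a composite risk score; higher means more vulnerable."""
    score = 0
    score += 2 * youth["foster_placements"]   # more placements -> higher risk
    score += 3 * youth["months_homeless"]     # homelessness weighted heavily
    score += 2 if not youth["has_job_history"] else 0
    score += 2 if youth["education_level"] < 12 else 0  # no HS diploma/GED
    score -= youth["support_systems"]         # positive adult supports reduce risk
    return score

def prioritize(applicants):
    """Order applicants so the highest-risk youth are served first."""
    return sorted(applicants, key=risk_score, reverse=True)

applicants = [
    {"name": "A", "foster_placements": 5, "months_homeless": 6,
     "has_job_history": False, "education_level": 11, "support_systems": 0},
    {"name": "B", "foster_placements": 1, "months_homeless": 0,
     "has_job_history": True, "education_level": 12, "support_systems": 4},
]
print([y["name"] for y in prioritize(applicants)])  # highest-risk youth listed first
```

The point of the sketch is simply that a transparent, additive scale makes prioritization decisions auditable: anyone can see which factors drove a given youth to the top of the list.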
After establishing our target population, we began to collect data on what activities and interactions youth were having with the organization and started to analyze these trends. We were looking to understand what our population had in common so that we could understand who we were effective with and who we needed to create better interventions for.
Through this work we determined that we had 8 participant types at baseline and figured out which types we worked better with and what interventions were best used with these sub-populations. We then trained staff to deliver the interventions that were shown to work better with certain sub-populations.
We also began to understand that our sweet spot was kids who had multiple foster care placements, had experienced homelessness at some point, and had a high school diploma or GED. We also learned that we needed to get better with youth who had low risk factor scores because they had an extensive support network, had never experienced homelessness, and were somewhat stable while in foster care. This may go against what we naturally think — that a person with extensive support would do better, but our data showed the opposite. We were also not very good at working with single parents who did not have a high school degree. In the coming year we are going to redo this process using algorithms to see if we get the same results and trends.
If we see that we are not doing well in an area, we research the best practices to deal with that area and direct resources and time to delivering that intervention. For example, because of the data we realized that a portion of our youth had very high trauma scores. Therefore we said we needed to become better at working with youth who have had complex trauma at high rates. We then created an initiative to ensure that everyone in the organization understood trauma and its impact on our youth and the best ways to address it. We will see at the end of this year if this investment in trauma-informed training has paid off by increasing our outcomes and impact with the youth that we serve.
We are consistently looking at the data to understand where we are doing well and where we need to improve. It’s the data, the data, the data.
Photo Credit: First Place for Youth
Note: As I mentioned earlier, I am taking a few weeks away from the blog to relax and reconnect with the world outside of social change. But I am leaving you in the incredibly capable hands of a rockstar set of guest bloggers. Next up is Kelly Born, program officer at the Hewlett Foundation working on their Madison Initiative, which focuses on reducing today’s politically polarized environment. Kelly also writes for the always thoughtful Hewlett Foundation blog. Here is her guest post…
In March of 2014, the William and Flora Hewlett Foundation launched a new initiative focused on US democracy reform, The Madison Initiative. The overarching goal is to “help create the conditions in which Congress and its members can deliberate, negotiate, and compromise in ways that work for more Americans.”
Our mandate is for a 3-year, exploratory initiative to assess whether and how the Foundation might be able to make a difference here. During this period, we are focused on three central questions:
- Are there solutions and approaches that are worth pursuing?
- Is there ample grantee capacity to pursue these ideas (or can we help build it)?
- Are there funding partners we can work with to make it happen?
In exploring this problem of congressional dysfunction we realized early on that, unfortunately, there don’t appear to be any silver bullets that will solve this problem – it’s not as if campaign finance reform, nonpartisan redistricting, or increased voter turnout, taken on their own, would resolve our current democratic ills (even setting aside for the moment how hard it would be to actually achieve these changes!).
Regrettably, there is no clear consensus on what to do to improve the system, much less on how to do it. This may be, in part, why Inside Philanthropy named The Madison Initiative 2014’s Big Foundation Bet Most Likely to Fail. Given this, our view has been that current congressional dysfunction is occurring in a system of systems (and sub-systems) that are interacting in complicated ways.
Early on we decided to develop a systems map rather than a theory of change to guide our work (working in close partnership with the Center for Evaluation Innovation and Kumu, collaborations we’ve written a bit about here). Theories of change typically outline desired (social or environmental) outcomes and then map backwards, linearly, to the activities and inputs necessary to achieve those outcomes. Systems maps are perhaps better suited for more complex, uncertain environments like democracy reform, where cause-and-effect relationships can be entangled and mutually reinforcing, rather than unidirectional.
Version 1.0 of our map includes more than 35 variables we believe are contributing to the problem, distributed across three key domains: Congress, Campaigns and Elections, and Citizens. In light of this complexity, rather than making an initial set of big bets on a few key variables, we have instead spread a series of smaller bets within these systems to see where grantees might gain traction, and what this reveals about the system’s more confounding parts.
The benefits of this approach are many – in fact, I cannot imagine effectively tackling this particular problem any other way. But employing this spread betting approach also involves a few challenges for us at Hewlett, and for our partners and grantees. The trade-offs are worth considering:
- We are acknowledging and respecting complexity, but this can sow seeds of confusion for our partners. Our approach has the essential benefit of taking into account the systemic complexity and interdependency of what we are trying to help change. We are avoiding over-simplifying and thereby misconstruing our reality (a good thing). But we are exploring more than 35 variables (ranging from deteriorating bipartisan relationships to the proliferation of partisan news media), with more than 60 active grantees. This approach can be hard to manage, and harder still to convey to others – especially anyone accustomed to a more linear and readily understandable theory of change.
- Our course correcting helps us learn, but has a real impact on partners. As we diversify our investments to learn more about what works, we will continue to learn more about which efforts are having the most impact on congressional dysfunction, and which are less germane to the problem. As we do, we will necessarily converge (and double down) on a few core interventions, while discontinuing others. This will mean disappointing organizations that we respect and had supported at the outset – an inevitable byproduct of this approach, but unpleasant for all involved.
- Our evidence-based approach risks coming off as overly academic. We are determined to avoid investing in solutions where there is not solid evidence to support their viability vis-à-vis our goals. This helps us avoid squandering funds on interventions that won’t, ultimately, work. But this approach also runs the risk of coming across as standoffish, academic, and idiosyncratic in the eyes of a practitioner-driven field that in some instances may be pursuing work that is harder to (or has yet to be) substantiated by solid research.
We’ve certainly got our work cut out for us. But we deeply believe that the social sector shouldn’t shy away from complex problems. We also believe that the benefits of this approach far outweigh the costs. It enables broad-based learning, and truly forces us to constantly re-think the grants we are making. Building in these tough choices, rather than forging ahead with a pre-defined strategy, requires that we not just learn, but that we act on what we discover. And fast.
In short, while beset by a few real challenges, we’re convinced that an emergent path is the best path forward. Surely we will place some wrong bets along the way. But, as a favorite colleague of mine often says, “it’s not like we’re selling cigarettes to children.” All of our grantees are doing great work – ultimately it will (not so simply) be a question of which of these lines of work is most likely to improve Congress.
In 2017, we will go back to our Board of Directors to discuss whether and how The Madison Initiative’s work will continue. In the meantime, we would love to hear how other funders have approached emergent problems like this – and how nonprofits might advise us to manage these inherent challenges as we progress.
Note: As I mentioned earlier, I am taking a few weeks away from the blog to relax and reconnect with the world outside of social change. But I am leaving you in the incredibly capable hands of a rockstar set of guest bloggers. First up is David Henderson, Director of Analytics for Family Independence Initiative, a national nonprofit which leverages the power of information to illuminate and accelerate the initiative low-income families take to improve their lives. David also writes his own blog, Full Contact Philanthropy, which is amazing. Here is his guest post…
In early June I was invited to be on a data mining panel at the Stanford Social Innovation Review Data on Purpose conference. The conference was full of nonprofit executives interested in tapping the big data revolution for social good. Naturally, the panel moderator asked us panelists to weigh in on whether, and how, data was changing the social sector. Characteristically, I turned a feel-good question into a critique of the state of analytics in the social sector, which I’ve written about elsewhere and will expand on here.
Data is not changing the social sector. I would argue it’s not changing the world either. While it is very likely that data is changing your world, I do not believe data is changing the world.
For all the talk about how data is revolutionizing the world and how software is eating everyone’s lunch, the fact is that for the over two billion people who have no lunch to eat (literally and figuratively), the impact of the data revolution is muted, if not nonexistent altogether. Changing the world indeed.
The corporate data revolution has largely been fueled by data exhaust. Data exhaust comprises the various digital breadcrumbs you and I leave all over the Internet but that we might not think about as data in a traditional sense. For example, companies like Facebook and Amazon don’t simply log data when you click “submit”; they track your every movement around the Internet, logging every click and clack, allowing unprecedented marketing optimization. All these additional metrics are data exhaust, as consumers almost passively generate data that marketers can capture and monetize for next to nothing.
On the social sector data conference circuit, countless data-wonk hopefuls mindlessly espouse all the incredible things nonprofits can do now that data acquisition costs have been driven almost to zero. This is nonsense, as the social sector has no such data exhaust analogue, which is why the social sector doesn’t truly have big data.
Nonprofits often work with populations with a number of barriers, which drives up the cost of data acquisition relative to for-profit counterparts. Just some of the data collection barriers nonprofits grapple with include working with populations with low levels of literacy or limited to no access to technology. How exactly is one going to generate digital exhaust without any digital possessions in the first place, or while working three jobs to support her family?
Obviously, you don’t. The barriers too many people face in this world are exactly why nonprofits are in the business of social change in the first place. But it is also why we are so poorly poised to capitalize on the alleged data ubiquity, as that revolution is not permeating class boundaries to the extent technology evangelists would have us believe.
Another reason why data is not changing the world, or rather, why the social sector is failing to change the world with data, is that by and large we simply are not investing in the necessary capacity to turn data into insights.
While a new “data for the social sector” company with an unfortunate misspelling of a common word seems to pop up every day, there are very few companies actually building the tools the sector needs to put data into action. Meanwhile, our technological overlords in Silicon Valley are depressingly stuck on the assumption that innovation in the social sector means fundraising software. Sigh.
If we want to use data to change the world, we need to think beyond software tools and simple (if colorful) data visualizations. Nonprofits need to invest in building their own analytical capacity, both by hiring analysts and also by investing in the entire staff’s ability to be intelligent consumers of data analysis.
Illusion of Insight
Everyone loves the idea of being data driven, but very few organizations actually want to make the investment. My employer, the Family Independence Initiative (FII), did make that investment. In turn, FII is now able to not only run regressions and build decision tree models, but can continuously learn from its data, augmenting every level of the organization from Chief Executive to line staff.
That investment is not cheap. Worse yet, like any good analyst, I can be a major buzz-kill. Much of my time is spent explaining why a particular regression coefficient doesn’t necessarily mean we are super awesome. In fact, a good analyst can make you less sure of your social impact.
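The point that a regression coefficient alone can flatter is easy to see with a toy example. The sketch below is purely illustrative (invented data and a plain-Python one-variable OLS, not FII’s actual models or data): it fits a line to pure noise, where any apparent “effect” of a hypothetical program dosage is an artifact, and the confidence interval shows how little the point estimate proves.

```python
import math
import random

# Hypothetical illustration: a program "dosage" variable and an outcome
# whose true relationship is zero (the outcome is pure noise).
random.seed(0)
n = 200
dosage = [random.gauss(0, 1) for _ in range(n)]   # e.g., hours of coaching
outcome = [random.gauss(0, 1) for _ in range(n)]  # no real effect of dosage

# Simple one-variable OLS: outcome ~ intercept + coef * dosage
x_bar = sum(dosage) / n
y_bar = sum(outcome) / n
sxx = sum((x - x_bar) ** 2 for x in dosage)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(dosage, outcome))
coef = sxy / sxx
intercept = y_bar - coef * x_bar

# Residual variance and the coefficient's standard error
resid_ss = sum((y - (intercept + coef * x)) ** 2 for x, y in zip(dosage, outcome))
stderr = math.sqrt((resid_ss / (n - 2)) / sxx)
ci = (coef - 1.96 * stderr, coef + 1.96 * stderr)

print(f"dosage coefficient: {coef:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
# A nonzero point estimate whose interval straddles zero is not evidence
# of impact -- which is exactly the buzz-kill conversation analysts have.
```

This is the kind of uncertainty a good analyst surfaces: the coefficient is rarely exactly zero even when the program does nothing.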
But facing the tough reality paves the way to real impact. We cannot collectively do more without exactingly quantifying how little we’ve accomplished. These are tough truths, and most nonprofits would rather assume the hypothesis of their greatness, leaving no room for data’s insights.
The Path Forward
Just because data is not changing the world does not mean data cannot change the world. I believe it can, which is why I do what I do. While by and large nonprofits fail to invest in rigorous analysis, organizations like GiveDirectly are leading by example, showing what is possible when facts take precedence over fundraising.
Ultimately, being data driven is less about statistical techniques and more about a relentless commitment to the truth. The truth is that data is not changing the world. But if we, as a sector, can elevate the truth above all else, then we might just be able to change the world after all.
Tris is Director of Development for New Philanthropy Capital (NPC), a U.K. think tank and consultancy that works with both nonprofits and funders. Tris focuses on both the demand and supply sides of innovation around social impact. His particular interest is putting impact at the heart of the social sector, including shared measurement, open data and systems thinking. He helped initiate, and now coordinates, the Inspiring Impact program which aims to embed impact measurement across the UK charity sector by 2022. He is also a trustee of the Social Impact Analysts Association, a member of the EU GECES subgroup on impact measurement in social enterprise, and a member of the Leap of Reason Ambassadors Community.
Nell: A big focus of your work at NPC is making impact measurement ubiquitous in the UK’s nonprofit sector. How far is there to go and how does the UK compare to the US in impact measurement being a norm?
Tris: There’s undoubtedly been significant progress over the last decade on impact measurement in the UK, and NPC has been at the heart of that. That progress is visible in several ways, including in the sector-level surveys NPC has done to track change. For example, most charities say that they have invested more in impact measurement in the last five years, and as a result we see that it is increasingly the norm for charities to have a defined theory of change, a role within the organisation to lead on impact measurement, and to talk about their impact measurement efforts in their public reporting. Most institutional funders also say that they look for evidence of charities’ impact measurement efforts in their funding decisions. Demand for measurement advice is growing, and the impact measurement industry is growing in response – there are more consultants offering services in this area.
The growth of social (or impact) investing has also driven greater interest in impact measurement. The industry as a whole acknowledges the centrality of impact measurement and the need for social returns to be as well evidenced as financial returns. There have been a number of key developments to move the field forward here, from Big Society Capital’s outcomes matrix to the G8 Social Impact Investment Taskforce and European GECES reports and guidance on impact measurement – all of which NPC has helped to deliver.
What’s not as clear is how much progress there’s been on the use of impact measurement, rather than its mere existence. When NPC repeats our field level state of the sector research in 2016, we’ll be asking a number of questions to tease out whether impact measurement activity is leading to use of impact evidence in decision-making – whether it’s becoming embedded in practice.
My concern is that we don’t see the signs that impact measurement is driving learning, improvement, decision-making or wholesale shifts in allocating resources towards higher impact interventions, programmes and organisations. It feels like impact measurement is something that everyone acknowledges we need to do, but few have worked out how to use. With the result that it’s bolted on to the reality of organisations delivering services and raising funding, but not embedded at the core.
A few examples of what I mean: if impact measurement were driving learning, I’d expect to see lots of organisations sharing their insights on success and failure, and learning from each other. I’d expect to see common measurement frameworks which allow organisations to understand their relative performance. These are still very rare. I’d also expect to see investment by funders and investors in the infrastructure that we know is needed for learning – journals, online forums and repositories and practitioner networks. There are some emerging examples of these, like the What Works Centres, but they’re still mostly just getting off the drawing board.
Most importantly I’d expect to see charities adjusting strategies and programmes in response to their learning. Maybe I’m not looking in the right places, but the examples I do see are the exception, not the norm.
When it comes to comparing the UK and US, it’s really hard. We don’t have comparable field-level studies, and we need to work together more closely on these if we want robust insights. For example, if you compare the findings in NPC’s 2012 paper with a recent US study, it looks like nonprofits are more likely to say the main purpose of impact measurement is learning and improvement. But actually we don’t know if this is the result of the questions we asked and how we asked them.
In both the US and the UK, it’s clear that the rhetoric on impact measurement has advanced over the last decade. What’s not yet clear is how the reality underlying that has shifted.
Nell: While there are many similarities between the US and UK nonprofit sectors there are some fundamental differences, in particular views about how much government (vs. private charity) should do for public welfare. How does the UK’s view of government’s role help or hurt the capacity building efforts of nonprofits?
Tris: The UK government has taken on a leading role in the social investment space, and it’s here that efforts to build capacity are most visible. Investment readiness programmes have been introduced over the past few years to build general capacity to access social investment. More recently, impact readiness programmes have arrived to do the same for impact measurement capacity. NPC has been working within these programmes to help a number of charities, and cohorts of charities, and it’s clear that they can play a major role in helping the sector to improve. But capacity-building in general has felt the effects of austerity just as much as any other area of government funding. Perhaps more so, as limited funds are increasingly focused on service delivery, not on efforts to improve services.
When NPC repeats its survey of the field, I am certain that we’ll find that limited funding to develop impact measurement capacity is still the major barrier cited by charities. It doesn’t look like anything’s going to change that any time soon.
Nell: NPC works at the nexus between nonprofits and funders, helping the two groups to understand and adopt impact measurement. In the US few funders will fund impact measurement systems, even though they want the data. How does NPC work to convince funders of the need for investments in measurement (among other capacity building investments)? What progress have you seen and what’s necessary for similar progress to happen in the US?
Tris: While a proportion of funders have long supported evaluation, the majority still don’t. We’ve worked through programmes like Inspiring Impact (a sector-level collaborative programme to help embed impact measurement) with a group of funders to develop principles, and help them to embed support for impact measurement in their practice. These efforts can help those who already see the benefit of capacity-building to advance their work, but it’s tough to engage those who aren’t already thinking in this way. I think the leap we need to make is to sell impact measurement through its benefits, by showing how organisations improve, and their impact increases, as a result. And because impact measurement isn’t yet typically embedded in organisations, those benefits aren’t as evident as they should be.
What does seem to work well is trying to get funders and charities to work together in a specific outcome area to make progress, rather than making a general case for impact measurement. Cohort capacity-building programmes, learning forums and shared measurement initiatives are all part of this. The key thing here is that then the funder is committed to the outcomes everyone’s working towards, and impact measurement becomes a tool for everyone to achieve those outcomes together.
Nell: You are part of the Leap Ambassador Community that recently released the Performance Imperative. Have you seen similar interest groups forming around these issues in the UK? And what role do you think interest groups like these play in a norm shift for the sector?
Tris: I have been privileged to be part of this amazing community of leaders, and one of a minority initially from outside the US. I’m convinced we need a similar movement here in the UK, and globally, and have been discussing whether and how to approach this with the group from the start. And as co-Chair of Social Value International (a network of those working in the social impact field), I’m part of an effort to do this at the practitioner level too.
The Leap Ambassadors Community brings a human face to what is often seen as a technical subject. After 11 years of working in the social impact field, I am convinced that we cannot sell impact measurement just by increasing the supply of good technical solutions. We need a movement to build the demand for those solutions. We need the right frameworks to measure impact and manage performance. But we need the leaders to demand them, and to harness them to hold themselves accountable, learn and improve, and share what they find.
Photo Credit: NPC
This spring I have been trumpeting the Performance Imperative, a detailed definition of a high-performing nonprofit released by the Leap Ambassadors Community in March. Today I continue the ongoing blog series describing each of the 7 Pillars of the Performance Imperative with Pillar #2: Disciplined, People-Focused Management.
With this second Pillar, the Performance Imperative obviously makes a distinction between “leaders” in Pillar 1, and “managers” in Pillar 2. There is a note in the Performance Imperative that “leaders” and “managers” are typically two separate people in nonprofits with budgets over $1 million. So this distinction, and perhaps this Pillar, applies only to larger nonprofits.
But I think there is actually application to any nonprofit. In any nonprofit there are leadership tasks (creating the vision, being the cheerleader, marshaling resources) and there are management tasks (making sure the trains run on time, putting each resource to its highest and best use). In smaller organizations both sets of tasks fall to the same person, yet they both still need to be performed well. So I think it behooves any size nonprofit to analyze whether they are BOTH leading and managing well.
Effective managers put organization resources to their highest and best use. They recruit, train and retain the right talent, they use data to make good decisions, they manage to performance, and they are accountable.
You can read a larger description of Pillar 2 in the Performance Imperative, but here are some of the characteristics of a nonprofit that exhibits Disciplined, People-Focused Management:
- Managers translate leaders’ drive for excellence into clear workplans and incentives to carry out the work effectively and efficiently.
- Managers…recruit, develop, engage, and retain the talent necessary to deliver on the mission.
- Managers provide opportunities for staff to see…how each person’s work contributes to the desired results.
- Managers establish accountability systems that provide clarity at each level of the organization about the standards for success and yet provide room for staff to be creative about how they achieve these standards.
- Managers acknowledge when staff members are not doing their work well…managers are not afraid to make tough personnel decisions so that the organization can live up to the promises it makes.
The Center for Employment Opportunities (CEO) is an example of how strong management is necessary to create a culture of high performance. CEO employs people entering parole in New York State in transitional jobs at government facilities while helping them access better paying, unsubsidized employment. CEO’s Chief Operating Officer, Brad Dudding, described to me how CEO’s management created, over the past 10 years, a culture and system of high performance.
Here is his story:
In the early years, CEO measured program performance against individual contract milestones, not a set of unified organizational outcomes. The organization was proficient in collecting data and reporting it to funders, but did not use data to track participant progress, make course corrections, or manage to short-term outcomes.
In 2004 the Edna McConnell Clark Foundation provided CEO with a multi-year capital investment to:
- Create a theory of change as a blueprint for program intervention and outcomes measurement.
- Develop a performance measurement system to track progress toward those outcomes.
- Nurture a performance culture that uses data to understand program progress, build knowledge and correct performance gaps.
First, CEO management had to agree on a theory of change and the specific outcomes for which the organization would hold itself accountable. Next, management shared the theory of change with staff and demonstrated how each staff member contributed to its achievement through an all staff event, follow-up trainings and consistent messaging that the organization was entering an exciting period of change. CEO then adopted a new performance measurement system to reinforce the theory of change.
But reorienting the organization was not easy. Not everyone was ready to embrace a new culture of performance accountability and data tracking. CEO management was initially surprised by staff resistance and responded impatiently with compliance measures. Looking back, not enough time was invested in staff training and promoting the value proposition of new changes. At times it was an enormous effort to get front line staff to track and use data everyday to ensure participant goals were being met.
But the tipping point came when CEO promoted early adopters of the data system to management positions. These new managers were comfortable operating in a data-driven environment and holding others accountable to use data to track program participants’ progress. Once there was a group of strong managers in place, CEO’s performance culture started to take hold and program outcomes improved.
By 2010, CEO was managing to annual performance targets and short-term outcomes through staff’s real-time documentation and data analysis.
In 2012, the results of a three-year randomized control trial showed that CEO’s program resulted in a reduction in recidivism of 16-22%. But the evaluation also uncovered a need to improve CEO’s strategies for advancing long-term employment and for connecting individuals to the full-time labor market. In response, CEO created a job retention unit and developed innovative job retention strategies, including training programs and financial incentives for participants.
In 2013, CEO entered the New York State Social Impact Bond, the first state-sponsored transaction of its kind, through which CEO will serve 2,000 high-risk parolees in New York City and Rochester between 2014 and 2018. If CEO hits its benchmarks and reduces the use of prison and jail beds by program participants, investors will be repaid their principal, plus a return of up to 12.5%, by the U.S. Department of Labor and New York State.
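The pay-for-success mechanics behind a deal like this can be sketched in a few lines. Everything in the sketch below is invented for illustration – the 8% threshold, the linear scaling between threshold and cap, and the principal amount are all assumptions, and the actual contract terms are considerably more involved – but it captures the basic shape: investors are repaid only if measured reductions clear a benchmark, with the return capped (at 12.5% in CEO’s case).

```python
def sib_payout(principal, reduction_achieved, threshold=0.08, max_return=0.125):
    """Hypothetical social-impact-bond payout to investors.

    principal          -- amount investors put in up front
    reduction_achieved -- fractional reduction in prison/jail bed use
    threshold          -- assumed minimum reduction before any payment is owed
    max_return         -- cap on investors' return (12.5% in the CEO deal)
    """
    if reduction_achieved < threshold:
        return 0.0  # outcomes not met: investors absorb the loss
    # Assumed rule: return scales linearly above the threshold, up to the cap.
    rate = min(max_return, max_return * (reduction_achieved - threshold) / threshold)
    return principal * (1 + rate)

# Illustrative principal (not the actual deal size):
print(sib_payout(10_000_000, 0.05))  # below threshold: investors get nothing
print(sib_payout(10_000_000, 0.20))  # well above threshold: capped at 12.5%
```

The design choice worth noting is the asymmetry: government pays only for verified outcomes, which is precisely why the performance culture described above is a precondition for entering this kind of transaction.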
The tenets of a performance based culture — supportive leadership, disciplined managers, goal setting, data collection and analysis to track and improve outcomes — are now fully accepted by CEO staff and reinforced by management. CEO now has a highly developed system of tactical performance management, which allows the organization to know on a daily basis if it is delivering on its promise to its participants.
Photo Credit: Australian Paralympic Committee
In today’s Social Velocity interview I’m talking with Mary Kopczynski Winkler, senior research associate with the Center on Nonprofits and Philanthropy at the Urban Institute. Mary is a nationally recognized expert in the field of performance measurement and management. She is a founding member of the Leap of Reason Ambassadors Community, a private community of nonprofit thought leaders and practitioners committed to increasing the expectation and adoption of high performance in the social sector, which released the Performance Imperative earlier this year.
You can read past interviews in the Social Velocity interview series here.
Nell: PerformWell is an effort among Urban Institute, Child Trends and Social Solutions to offer tools and strategies for human services nonprofits to measure their work. How successful has this effort been and what are your plans for continuing to grow the capacity of nonprofits to measure their work?
Mary: PerformWell is a free, interactive, web-based resource designed to help human services nonprofits gain knowledge about performance management, access tools and resources they need to better serve clients and meet outcomes, and obtain strategies for effective, efficient service delivery. Since launching in March 2012, demand for PerformWell has exceeded our expectations: more than 400,000 visitors (from all 50 states and more than 200 countries) have used the site; 25,000 individuals have registered for our webinars; and more than 140,000 assessment tools have been downloaded. Webinar survey results are routinely high, but we are working to put additional systems in place to track how nonprofits are using various aspects of PerformWell and to what end.
In 2013, the PerformWell partners engaged in a business planning process with Root Cause. Market research confirmed our views about a large unmet need for performance measurement knowledge and high interest in the resources offered through PerformWell, but it also showed that additional products and services are desired, such as webinar training series, regional user conferences, and customized engagements with nonprofits. Users also wanted a more interactive web experience.
Our short- to medium-term goals include substantial updates to the website to improve the user experience (we also plan to solicit user feedback during and after these changes are implemented); development of additional products and services better aligned with the feedback obtained from the market research undertaken by Root Cause; and exploration of partnerships and sponsorships with nonprofits, consultants and funders to generate additional revenue and resources to expand the content, reach and use of PerformWell to improve the adoption and application of performance measurement and management practice across the nonprofit sector.
Nell: Some believe that measurement is perhaps more straightforward for human services nonprofits — you can measure change to an individual’s behavior or life circumstances — but measurement is more difficult for arts organizations or advocacy groups. What are your thoughts on that?
Mary: Sometimes I think this argument serves as a convenient excuse for organizations to avoid putting even the most basic systems in place to track progress or otherwise hold themselves accountable to their constituents. In 2007, with support from the Hewlett Foundation, the Urban Institute and the Center for What Works, we published a series of simple frameworks, as part of our Outcome Indicators Project, to help nonprofits in 14 program areas engage in performance measurement. Two of these areas are advocacy and performing arts. The Urban Institute also provided research support to the Performing Arts Research Coalition (PARC) to develop standardized surveys to help performing arts organizations across the country obtain more routine and better data from audience members, subscribers, and the community.
Establishing a causal link between advocacy or arts interventions and impact is, in my view, more challenging than for human service organizations. In the case of advocacy organizations, it can be very difficult to isolate the contributions of a particular campaign or even organization to a policy or legislative outcome.
It is, however, possible to devise strategies for capturing information on earlier stage outcomes, such as increased awareness.
I recently participated in a panel at the annual OPERA America conference on “internal metrics for civic impact.” As much as measurement activities have evolved since the days of the PARC coalition, I observed that most of the metrics and data points were still very internally focused on measures of participation and attendance and fall well short of anything approximating community or civic impact. I encouraged those present to consider stepping away from a focus on an individual opera company’s contribution to civic impact, and recommended instead more of a collective impact approach in collaboration with other arts, civic, and education organizations in a community.
In this case, I even hesitated to use the word “impact,” and suggested the group consider distinguishing between collective contribution toward a modest set of civic outcomes (e.g., performing arts promote understanding of other cultures or are a source of pride for those in the community) and the more traditional causal attribution usually reserved for the term “impact.”
Nell: Caroline Fiennes, among others, has argued that individual nonprofits should actually do less evaluation and rather rely on larger research studies to prove their theories of change. What do you make of that argument and the difference between evaluation and measurement?
Mary: I agree with some of what Caroline puts forth here – particularly her observations about “withholding (unflattering research) and publication bias” – an issue that University of Wisconsin-Madison professor Donald Moynihan has termed “performance perversity.” I also agree both with her suggestion that evaluations be done by a third-party to reduce any tendencies toward subjective reporting or bias and her endorsement of a greater consideration of shared metrics.
I am troubled, however, by the finding that only 7% of UK social-purpose organizations are interested in improving services, and by her somewhat cavalier suggestion that monitoring and evaluation “wastes time and money.” Although she is not alone in this second argument (see for example Bill Schambra’s “take-down” of Charity Navigator’s efforts to encourage greater use of performance metrics in “Charity Navigator 3.0: The Empirical Empire’s Death Star?”), such sweeping generalizations undermine the legitimate and courageous attempts of many nonprofits to use data for program improvement.
I agree with Phil Buchanan in that there is a “moral imperative” to make an honest attempt to understand if resources are being used effectively and certainly to guard against the possibility that programs could be doing more harm than good as organizations like Latin American Youth Center and Harlem Children’s Zone have discovered and since corrected.
I see measurement as a necessary practice for every nonprofit. But measurement is different from evaluation. Nonprofits need to start by developing a measurement infrastructure that makes sense for their organization – one that supports their mission and commitment to serve and improve the lives of their clients or constituents – not one that is reactionary and responsive to funders. It is precisely this kind of infrastructure that can lay the groundwork for a more rigorous evaluation, at a time that is right and appropriate for the organization’s stage in development.
I see measurement and evaluation along a continuum of inquiry that should be designed to support the learning objectives of an organization. Measurement helps organizations to take the day-to-day or month-to-month pulse of various activities and program results – these snapshots in time or scorecards help managers and service providers understand trends and provide an opportunity to correct, modify or otherwise adapt operations.
Evaluation is, by definition, more rigorous, more expensive, and takes considerably more time to see results. Evaluation serves a very important role as organizations make decisions about whether to continue, grow, scale or otherwise expand services, but it needs to occur at the right time – and certainly not as an organization is just getting off the ground.
Nell: It is difficult for most nonprofits to find funding for measurement work. For example, in the most recent Nonprofit Finance Fund State of the Sector survey, 69% of nonprofit respondents said their funders rarely or never cover the costs of measurement. How do we change that, or can we?
Mary: Although I am sympathetic to this argument and argue frequently that foundations have a unique and critical role to play in helping to build the capacity of nonprofits to better engage in measurement and evaluation, I think we need to change the conversation to one that focuses on the shared responsibility between nonprofits and funders for making the necessary investments in measurement and evaluation.
If nonprofits are truly ready to embrace a culture of measurement and high performance, then they need to reorganize operations in ways that embed measurement practice at every level of the organization, and change expectations from front-line workers all the way to the board of directors.
This means things like: defining expectations about data collection in job descriptions; setting aside a small percentage of funding for evaluation as a line-item in every grant request; and using data in meaningful ways in everyday discourse. Likewise, funders need to work more collaboratively with grantees to understand the data needs and capacity of nonprofits, consider funding longer-term grants that build in support for measurement and evaluation, and stop asking for data or reports that aren’t part of the conversation about continuous improvement and learning. Funders, too, can support field-building efforts to develop additional tools and resources in support of the measurement work nonprofits seek to accomplish.
There are a number of exemplary efforts already underway, including Edna McConnell Clark Foundation’s Propel Next and the World Bank Group’s support of Measure4Change and the East of the River Initiative. Each of these efforts features: targeted grants to build measurement and evaluation capacity of participating nonprofits; access to technical assistance resources; and a community of practice to help grantees learn from each other, share successes and failures, and reduce what is all too often a sense of isolation among measurement and evaluation practitioners.
Photo Credit: Urban Institute