In today’s Social Velocity interview, I’m talking with Isaac Castillo, Director of Outcomes, Assessment, and Learning at Venture Philanthropy Partners, where he leads VPP’s approach to data collection, data reporting, and outcome measurement.
Prior to coming to VPP, Isaac served as the Deputy Director for the DC Promise Neighborhood Initiative (DCPNI). At DCPNI, Isaac led efforts to improve outcomes in the Kenilworth-Parkside community in Ward 7 of the District of Columbia through the strategic coordination of programmatic solutions and research-based strategies. Prior to his time at DCPNI, Isaac served as a Senior Research Scientist at Child Trends where he worked with nonprofits throughout the United States on the development and modification of performance management systems and evaluation designs. In addition, Isaac was also the Director of Learning and Evaluation for the Latin American Youth Center (LAYC) where he led the organization’s evaluation and performance management work.
You can read interviews with other social change leaders here.
Nell: You have spent your career using data to improve the performance of the nonprofits for which you worked. Why do you think performance management is so important for nonprofits? Do you think all nonprofits should pursue performance management? When does it make sense and when doesn’t it?
Isaac: I believe that every nonprofit should pursue some form of performance management because they owe it to the clients they serve. Most nonprofits assume that they are making a positive difference in people’s lives, but in the vast majority of cases they are just guessing. Using some form of performance management will allow every nonprofit organization to test this assumption and to identify areas that can and should be improved so that the next cohort of participants can get better services than the last.
Unfortunately, one of the greatest challenges preventing a nonprofit from implementing some form of performance management isn’t a lack of resources, expertise, or time. It is fear. The fear that they will find out that their work isn’t having a positive effect. This fear is what nonprofit leaders need to overcome, not for the benefit of themselves or their organization, but because they owe it to the clients they serve today and the clients they will serve in the future. I believe that every nonprofit should strive to serve tomorrow’s clients better than today’s clients, and one of the only ways to ensure that this happens is the sustained use of performance management.
The type of performance management that each nonprofit should pursue should vary by the size and scope of their work. At a minimum, small nonprofits should be tracking basic demographic and attendance information on their participants, and hopefully at least one meaningful output or outcome. Whether this occurs in a computerized system or in a spiral paper notebook is up to the nonprofit. But it doesn’t have to be costly, and it doesn’t take expertise. It only takes the will and desire to improve as a nonprofit.
Nell: In the nonprofits in which you’ve worked, how have you been able to secure resources to fund performance management? What is the case you and your colleagues have made to funders, and what do you think it will take to get more funders investing in performance management?
Isaac: Raising funding for performance management work usually takes a mix of several different strategies and approaches for potential and existing funders.
First, I strongly encourage nonprofits to include some percentage (1 to 5 percent – possibly more) of funding in each grant submission or proposal dedicated to supporting performance management and outcome measurement work. By placing this small percentage into each proposal, a nonprofit can begin to raise funds for internal evaluation and performance management activities. It may not seem like a lot, but it can add up, and eventually generate enough funds for a half-time or full-time position to support in-house performance management work.
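To make the accumulation concrete, here is a minimal sketch of that arithmetic. The grant amounts and the 3 percent rate below are hypothetical illustrations, not figures from the interview; the point is only that small set-asides across a portfolio of grants add up.

```python
# Illustrative sketch with hypothetical numbers: a small evaluation
# set-aside in each grant proposal accumulates into real capacity.
grants = [250_000, 400_000, 150_000, 600_000, 300_000]  # assumed annual grant awards
evaluation_rate = 0.03  # 3 percent, within the 1-5 percent range suggested above

# Total raised for in-house performance management across the portfolio
evaluation_fund = sum(amount * evaluation_rate for amount in grants)
print(f"Set aside for performance management: ${evaluation_fund:,.0f}")
# With these assumed grants, the set-aside totals $51,000 -- in many
# regions, roughly enough to support a part-time evaluation position.
```

Under these assumptions, $1.7 million in annual grants yields $51,000 a year for performance management work without any dedicated fundraising.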
Second, I also strongly encourage nonprofits to engage in regular ‘funder education’ – where a nonprofit proactively meets with their funders to have ongoing conversations about outcome measurement and evaluation. This allows both the funder and the nonprofit to come to agreement on measurement expectations and to ensure that both groups are focused on the same concepts. I often suggest that the first of these types of meetings focuses on each group’s definitions of three commonly misunderstood terms: outputs, outcomes, and impact.
Finally, I would recommend that the nonprofit and funder have an honest discussion regarding expectations of results and the funding necessary to support the related evaluation work. If a funder is expecting a randomized controlled trial (RCT) to be completed to determine ‘impact,’ then the nonprofit should be willing to push the funder to support a large investment to pay for a high quality evaluation. If the funder is only willing to support a small amount for outcome measurement, then the nonprofit should clearly articulate what is possible.
Nell: Ken Berger and Caroline Fiennes recently argued that we may have gone too far by asking nonprofits to produce research about their own outcomes. What’s your response to that argument?
Isaac: I fully support Ken and Caroline in their argument that most nonprofits should stay away from trying to produce impact research. The desire for ‘impact’ is something that has been (and continues to be) pushed unfairly (and without financial support) by the funding community.
I honestly think a lot of confusion in this space comes from inconsistent use and understanding of the term ‘impact’. The term ‘impact’ has a precise definition among researchers but is often used in a much broader context among funders, nonprofits, and the general public. In the research and evaluation world, impact is used to describe the effectiveness of a program while eliminating as many potential confounding factors as possible. That is why the use of randomized controlled trials (RCTs) is usually the cornerstone of impact research – RCTs are the easiest way to control for and eliminate confounding factors.
When most non-researchers use the term ‘impact’ however, they are usually just asking if the program or organization works and if it is making a difference for its intended service population. That is a much lower bar to set, and yet it is a critical distinction in this discussion. If you are thinking about ‘impact’ as a researcher, you will need a large amount of resources and expertise to determine ‘impact,’ which usually means completing one or more formal evaluations. If you are thinking about ‘impact’ in the more general, less strict sense, then pursuing some form of performance management system will allow a nonprofit to determine if their efforts have been successful.
I do think every nonprofit should pursue some form of performance management to ensure that their work is having a positive effect as a complement to existing research that others have done. Relying only on the use of others’ research does not guarantee that a nonprofit will provide effective services and achieve positive outcomes. This type of research is like a recipe – it shows what has worked in the past and provides a guide for the nonprofit – but a recipe can still be ruined with poor implementation or planning.
Every nonprofit has an obligation to the people they serve (and not to their funders) to ensure that their programming is having a positive effect (or at the very least not causing harm). Without some form of performance management system in place (even one that just uses paper and pencil), a nonprofit will never know if they have strayed too far from the recipe provided by previous research.
I also think there are a growing number of very sophisticated nonprofits that should be using AND producing research on effective programs. Every year, I see more and more nonprofits hiring researchers dedicated to internal evaluation and outcome measurement work. These individuals are just as talented and unbiased as their colleagues working in traditional research and evaluation organizations. They can, and should, produce original research that can help inform the nonprofit field. The real challenge comes in nonprofit organizations finding the resources to support the hiring and retention of these individuals. Not every nonprofit will have the resources or capacity to hire one or more of these individuals – but those that do should absolutely be trying to produce original outcome and impact research to provide ‘recipes’ for effective programming that nonprofits with fewer resources can use in the future.
Nell: Your former organization, DC Promise Neighborhoods, is part of the national Promise Neighborhoods Initiative launched by the US Department of Education in 2010 and modeled after the famous Harlem Children’s Zone. How successful has this national replication of a successful local model been? Have you been able to replicate outcomes? And what hurdles, if any, have you and other replication sites found?
Isaac: I think that there has been some initial success among the Promise Neighborhoods. Part of the challenge that all the Promise Neighborhoods face is that the Harlem Children’s Zone did not achieve their success overnight. They have been working in Harlem for decades, so it would be unrealistic to believe that the Promise Neighborhoods would be able to create large scale change in a matter of a few years.
However, there are signs of progress across all of the Promise Neighborhoods. Each of the Promise Neighborhoods started to address a few outcomes with the initial round of funding, and these outcomes varied. Some focused on math and reading proficiency for students, some focused on obtaining medical homes for young children, and others sought to increase the amount of healthy food consumed by residents. In DC, we focused on improving school attendance.
I do think that most of the 12 Promise Neighborhood Implementation grantees were able to make progress on the outcomes they identified as initial focus areas. However, the very nature of the work (creating community level change) doesn’t lend itself to the rapid accomplishment of multiple outcomes in a short period of time. Each of the Promise Neighborhoods had to prioritize certain outcomes for their respective communities, and only several years later are they able to claim success and begin to identify the next set of outcomes to be addressed. So while certain outcomes haven’t necessarily been replicated across all the Promise Neighborhoods, that is due to the differences in priorities and community conditions rather than any problem with the model itself.
Photo Credit: Venture Philanthropy Partners
Note: I was asked by The Center for Effective Philanthropy to review their latest research report, Sharing What Matters: Perspectives on Foundation Transparency, released in late February, and provide my thoughts about it for their ongoing blog series on the report. Below is my post which originally appeared on the CEP blog.
Sharing What Matters: Perspectives on Foundation Transparency provides some startling data about the state of transparency in the foundation world.
While for the most part, foundation leaders recognize the importance of transparency and are trying to be more transparent, the report shows there is still much work to do.
To me, this question of foundation transparency is part of the larger, ever-present power imbalance in the nonprofit sector between those with money (funders), and those who seek that money (nonprofits). Funders often encourage nonprofits to be transparent about their results and when they have succeeded or failed. But it appears that in these two areas (results and lessons learned), funders are less transparent than either their grantees want them to be, or they would like themselves to be.
This is all critically important because a more transparent philanthropic sector — particularly if foundations were more transparent about how they assess their results and what has worked and what hasn’t — could mean more money flowing to more social change.
CEP’s report delineates two levels of foundation transparency. First is transparency about grantmaking: who leads the foundation, how they have made grants in the past, how they make decisions. The second is transparency about the results foundations themselves achieve: how they assess the performance of their investments, how they share successes and failures.
This second (and I would argue much more interesting) level of transparency is about foundations reporting the very thing they are often asking nonprofits to report: their performance.
In particular, the research uncovers three stark disconnects:
- Foundations Don’t Share How They Assess Their Performance
Of the foundation leaders surveyed, 61 percent said they believe being transparent about how their foundation assesses its performance could increase effectiveness to a significant extent. Yet, only 35 percent of foundations reported actually being very or extremely transparent about it.
- Foundations Aren’t Transparent about Successes and Failures
While 69 percent of foundation leaders think that being transparent about what’s worked in their grantmaking could increase their effectiveness, only 46 percent report being very or extremely transparent about what’s worked. And transparency about what hasn’t worked is even worse: just 30 percent of foundation leaders say their foundations are very or extremely transparent about what does not work, which makes failures the lowest-rated area of foundation transparency. And nonprofits agree that foundation transparency is lowest when it comes to sharing what hasn’t worked.
- Foundations Want to Be More Transparent, But Aren’t
While 94 percent of foundation leaders surveyed say that increased transparency is a medium or high priority at their foundation, 75 percent of foundation leaders say that their current levels of transparency are not sufficient. And shockingly, 24 percent of foundation leaders say that nothing limits their ability to be more transparent. So it’s a big priority, yet it’s not getting done.
The report suggests some reasons why transparency about performance and lessons learned is recognized as important, but still far from ubiquitous in the philanthropic sector:
- Lack of Strategy: Foundations aren’t creating clear enough goals around which they can actually assess their performance.
- Lack of Capacity for Evaluation: Foundations aren’t allocating enough resources to assessing their performance.
- Fear of Diminished Reputation: Foundations are afraid of harming their own or their grantees’ reputations by revealing what has or hasn’t worked.
Surprisingly (or maybe not so surprisingly), these impediments to foundation transparency mimic the hurdles nonprofits find (or place) in their own way. Nonprofits often pour as much money as possible into programs and skimp on investing in organization-building efforts like strategy and evaluation. This bias against organization-building is often encouraged (or demanded) by their funders. And so it appears that funders put these same hurdles in their own way. Perhaps foundations, just like their nonprofit grantees, need to acknowledge that with sufficient investments in smart strategy and performance evaluation, greater results can be achieved.
The third and final impediment to foundation transparency about performance and lessons learned is trickier. Fear of harming the reputations of their grantees by sharing lessons learned is a real issue. Foundations tend to invest in packs. So if a foundation reveals investments that have failed, there is a risk that other foundations will flee.
But if we truly want to move to a place where more resources flow to what works, don’t we have to be more transparent about what worked and what didn’t work? If a foundation investment failed because of the foundation’s shortcomings (the investment didn’t fit with foundation goals, the foundation didn’t invest enough, or it didn’t invest in capacity as well as programs), the foundation (and other foundations learning from these lessons) could learn to become more effective investors. And if the investment didn’t work simply because it was the wrong intervention, then isn’t it better to move investments to interventions that do work? Fear can be a debilitating thing, and for the sake of greater results, I think both foundations and their nonprofit grantees must work to overcome it.
Ultimately, the CEP report is hopeful. It uncovers a desire among both foundation leaders and their grantees to move from a basic level of transparency toward a deeper (and more important) one that reveals performance and lessons learned.
Let’s hope that this stated desire for a change in foundation transparency, and the requisite changes in how foundations invest in strategy and performance assessment and overcome fear, becomes reality.
Photo Credit: The Center for Effective Philanthropy
From an historic blizzard that blanketed the country, to tackling poverty, to the leadership of Black Lives Matter, to technology in the new year, to using social media to stop ISIS, to advice for Charity Navigator, January was an interesting month in the world of social change.
- Winter storm Jonas dumped several feet of snow across the country, but also offered a couple of interesting lessons in social change. First, the sheer amount of snow piled up on east coast urban streets provided a glimpse into better urban design. And after the blizzard hit Washington, DC it seems only female senators were brave enough to come to work. Among them, Senator Lisa Murkowski wondered: “Perhaps it speaks to the hardiness of women…that put on your boots and put your hat on and get out and slog through the mess that’s out there.”
- Writing in the Nonprofit Quarterly, Tom Klaus took issue with those who criticize the Ferguson and Black Lives Matter movements as being “leaderless.” Instead, he argued that they demonstrate a more effective “shared leadership” model: “Shared leadership…means that multiple members of a team or group step up to the responsibility and task of leadership, often as an adaptive response to changing circumstances. Multiple members may emerge to lead at the same time, or it may be serial as multiple leaders emerge over the life of a team or group.” And The Chronicle of Philanthropy profiled three of the leaders of the Black Lives Matter movement.
- One of my favorite bloggers, David Henderson, has made a new year’s resolution to write more often. Let’s hope he keeps it up because he offered us two great posts this month. First, he wrote a scathing critique of the nonprofit and philanthropy sectors for not standing up against presidential candidate Donald Trump’s hate-filled ideology. And then he took it further in a later post arguing that the philanthropic sector must get more political: “It seems a strange consensus that philanthropy and politics do not mix. Yet it is our politics, and more specifically our collective values, that creates the maladies we aim to address. Martin Luther King was a civil rights pioneer not for creating a nonprofit that provided social services to help African Americans live a little better, but by challenging the laws and social values that subjugated a significant portion of our community. Social interventions like homeless shelters, food pantries, and tutoring programs are fundamentally responses to injustice. While these programs are wrapped in apolitical blankets, they are plainly and intuitively critiques of the system we live in.”
- And speaking of critiques, columnist Tom Watson wrote a sharp commentary on American philanthropy arguing that it is going the way of American politics — moving from democracy towards plutocracy: “The disparity between democratic philanthropy and its plutocratic cousin is nowhere more apparent than in the importance placed on the Facebook co-founder’s commitment to giving away much of his vast personal fortune compared with the potential of the largest digital social network in the nation. Mr. Zuckerberg’s billions may create major causes and eventually steer public policy, but many nonprofits will struggle to find in their budgets the money required to purchase desperately needed social-media eyeballs from his advertising department. If there’s a better example of the power gulf in American philanthropy, I’m not sure what it is.”
- And other critiques of philanthropy in January went even further, with some arguing that modern American philanthropy attempting to address growing wealth inequality (illustrated by a new Oxfam infographic “An Economy for the 1%“) is a paradox because philanthropy itself emerged from the wealth excesses of capitalism. A new book by Erica Kohl-Arenas argued that philanthropic interventions to solve poverty have been flawed because they don’t address the structural issues causing the poverty in the first place. And she extended that argument when she wrote about a January 7th public event at the Ford Foundation where Darren Walker (who recently announced a new foundation focus on overcoming poverty) and Rob Reich discussed these issues.
- Caroline Fiennes argued that nonprofits should not try to “prove their impact,” since proof of impact is impossible, but rather use evaluation to gain knowledge that can help “maximize our chances of making a significant impact.” Patrick Lester, writing in the Stanford Social Innovation Review, offered a similar caution about outcomes, but this time to the Obama administration: “A dose of…realism, combined with a greater reliance on evidence and a willingness to learn from the past, could transform the administration’s focus on outcomes into an important step forward. By openly acknowledging the challenges and dangers, recognizing the difference between mere outcomes and true impact, and demonstrating how this time we will do better, the administration could show that what it’s really calling for is not just an outcomes mindset, but an Outcomes Mindset 2.0.”
- Speaking of proving results, Charity Navigator’s new leader, former Microsoft exec Michael Thatcher, and the board that hired him came under attack in January for not moving quickly enough away from rating nonprofits on financials and towards rating them based on results. But Doug White, who created the original data behind Charity Navigator many years ago, took it even further in an opinion piece in The Chronicle of Philanthropy: “Charity Navigator is far worse than nothing. The best that could happen is for the group to sink into oblivion, with no charities, no news outlets, and no donors giving it any thought. Or the group could take serious steps to grow up, humbly taking the time and effort to truly try to understand the charitable world.”
- Wanting to get further into the social change game, Facebook COO Sheryl Sandberg announced a new effort to use Facebook “Likes” to stop ISIS recruitment efforts on social media. It will be interesting to see how effective this slacktivism effort becomes at creating real change.
- Kivi Leroux Miller released her annual Nonprofit Communication Trends Report, including lots of data about how and where nonprofits are marketing. And while she found that YouTube is currently the #3 social network for nonprofits, that may change since YouTube just announced new “donation cards” that allow donors to give while watching a video.
- And finally, in January we lost David Bowie. But Callie Oettinger urged us not to be sad, but rather, inspired: “I [am] comforted in thinking of Bowie…on Mars, mixing it up with other artists…a place where the greats go to keep an eye on the rest of us and send down jolts of inspiration from above.” Yes.
Photo Credit: Northside777
In today’s Social Velocity interview I’m very excited to be talking with the co-founders and editors of the new History of Philanthropy blog: Benjamin Soskis, Stanley Katz, and Maribel Morey.
The HistPhil blog launched this past June and focuses on how history can shed light on current philanthropic issues and practice.
Because how can we hope to create social change without understanding the results of efforts that came before us?
Ben, Stanley, and Maribel are all academics with specialties related to history and philanthropy. Stanley is on faculty at Princeton’s Woodrow Wilson School and has also taught at Harvard, Wisconsin and Chicago. Benjamin is a Fellow at the Center for Nonprofit Management, Philanthropy and Policy at George Mason University and a consultant for the history of philanthropy program of the Open Philanthropy Project. And Maribel is a professor of history at Clemson University and is currently writing a book, From Tuskegee to Myrdal, which describes how and why white Americans in big philanthropy transformed from proponents of segregated education to advocates of racial equality.
Nell: Stanley, you write, in your inaugural post for the HistPhil blog, about the tendency of philanthropy to get swept up in “new” approaches that actually aren’t all that new. Is there really anything new in philanthropy right now? Are there any structural or cultural developments or approaches in philanthropy that are significantly different than in the past?
Stanley: It is hard to separate rhetoric from reality in the current environment of philanthropic hype. From my perspective, the current boasting that all is new in philanthropy (see the recent New York Times “Giving” section), is pretty uninformed (naïve?).
One of the most common claims, repeated frequently in the New York Times piece, is that philanthropists are no longer simply trying to alleviate the “symptoms” of distress, but in fact are aiming to remove the underlying causes of social and physical problems. This attempts to distinguish what the large foundations are doing from what the traditional foundations did in the 20th century (and of course no one is making this claim more loudly than Judith Rodin of the “new” Rockefeller Foundation.)
But the emphasis on the elimination of problems by identifying their root causes was the innovative claim of the founders of the first American foundations, best articulated by Andrew Carnegie and John D. Rockefeller, Sr. So from this point of view there is not much new in the current aims of big philanthropy.
But what is actually new, and there is a lot that is new, is the determined focus on short-term, measurable results — this is the mantra of the genuinely new “strategic” philanthropy. The older foundations of course aimed to be effective, but they defined effectiveness much more loosely and measured it less precisely than current large foundations. This is an enormously important attribute of the current mega-foundations, and all the other foundations that have jumped on the “strategic philanthropy” bandwagon.
The current foundation rhetoric also makes use of a wide range of business metaphors, none more important than the notion that philanthropy is best thought of as “investment” in change, and frequently characterized, using the language of hedge funds, as “bets” on successfully producing change. Much of the current language of philanthropy is drawn from venture capital activity, and the new philanthropy can also be thought of as “venture” philanthropy. This is a new attitude.
The original philanthropists knew they were adapting the then modern techniques of business organization and management to their grantmaking, but they thought of philanthropy as different from business. That distinction seems to have eluded much of the current generation of philanthropists.
But I need to say that I am a little uncomfortable with these large generalizations, since not all current philanthropists speak or act as I have just suggested — nor did the earliest generation of philanthropists. But there is something new in the philanthropic air. The question is whether that air is as salubrious as its current advocates claim.
Nell: Stanley, philanthropy got its modern day start in the missionary work of Europeans and Americans in third world countries. What, if any, parallels do you see in philanthropic work in developing parts of the world today?
Stanley: Here the important fact is that the Rockefellers (John D. Sr. and Jr.) originally intended the Rockefeller Foundation to be a missionary foundation, operating mostly (possibly entirely) in China. For a variety of reasons, in particular the influence of their advisor Frederick T. Gates (a minister who had turned in a secular direction), they abandoned the missionary focus in favor of a secular focus. Their work in China, and especially the founding and support of the Peking Union Medical School, continued to have a missionary flavor, but their work in Africa and other tropical areas was more early medical philanthropy than missionary philanthropy. They turned to the eradication of tropical diseases both because those diseases were well matched to the medical research capacity of the day, and because it was politically safe to engage in medical experimentation abroad — a lesson that Big Pharma learned from them later in the century.
But the emphasis of the large foundations, beginning in the 1960s, with grant-making in the underdeveloped world, was quite different, and unrelated to any neo-missionary instinct. Many of the large American foundations at mid-century thought they could assist the process of decolonization and local self-determination by supporting a wide range of development activities in what was then called the Third World. They later came to be attacked by neo-Marxists for allegedly supporting US and Western imperialism in the developing world, but that is a big subject all in itself.
Ironically, there is now a burgeoning effort by American evangelical business people to invest in private development projects, especially in East Africa, and this is a throw-back of sorts to much earlier notions of philanthropic support of development. But it needs to be contrasted with the massive Gates Foundation public health efforts in Africa and elsewhere — an effort purely “strategic” in its inspiration.
Nell: Ben, historically, philanthropic giving has not grown above 2% of US GDP. Why do you think that is, and do you think there is any hope of changing that?
Ben: The answer to the 2% conundrum is the holy grail of the nonprofit sector, and I don’t pretend to have any certain answer about it myself. It’s worth noting, though, that 2% of GDP is still pretty good relative to other developed countries (in fact, by many measures, it’s one of the best rates). But it’s still confounding why it hasn’t budged for more than four decades. There’s obviously a tangle of causal factors at play, and I’ll just offer a few possibilities that have occurred to me in the course of my research, without making any claims that this is an exhaustive list.
Given the persistence of that rate, it makes sense to look for some equally persistent characteristic of the American nonprofit sector that has also remained unchanged over that long timespan. A recent article in the Chronicle of Philanthropy can give us a clue to a possible candidate. As part of their Philanthropy 400 ranking of the nation’s largest nonprofits, they note how little the list has changed from when it was first tallied in 1991 (especially when compared with the churning of the list of the largest for-profit companies). In part by dint of habit, and in part because of the power of these institutions’ “brands,” Americans have tended to stick with a handful of large charities—through scandals, evolving social needs and changing fads.
As I pointed out to the Chronicle reporter (though my observations got a bit lost in translation; Josephine Shaw Lowell, a founder of the American charity organization movement, wouldn’t have suggested that bigger is better, only that a degree of centralization in charity administration was necessary), we can trace this development back to the turn of the last century, when charity reformers instituted a process of centralized, bureaucratized and professionalized giving. That is, from the late 19th-century scientific charity movement onward, individuals were warned that their disparate giving was too often haphazard, scattered, wasteful, and overlapping, and so were encouraged to hand over the administration of charitable resources to a centralized institution. The community chests and the United Way came out of this impulse; Catholic Charities succumbed to it as well.
It’s very possible that the development toward more centralization and professional administration has bolstered American giving by providing citizens with more confidence and by making decisions about where to give easier. But I think we also have to wonder whether it imposed a sort of cap as well, since it might have removed some of the immediacy, intimacy and individuality from the charitable exchange that could push individuals to give beyond an initial comfort point (which very well might be around 2%).
The Chronicle suggests that we might see more disruption in the list in the coming years, or at least that some of the big names, like the United Way, might be ceding ground. If that is the case, and if some of the space they occupied is filled with smaller upstarts, it’s possible we might see some movement beyond 2%.
Another possible factor worth considering for the persistence of the 2% rate is the declining role of religion in determining charitable allocations. I don’t only mean that the percentage of total giving going to religious institutions has been steadily declining over the last few decades, but also that giving itself has, for many Americans, become an increasingly secular activity.
Again, we can trace this back to the early 20th century, when charity reformers sought to “secularize” giving by stripping it of any sectarian taint and endowing it with a degree of rationality; the indiscriminate giver in their rhetoric was often an easily-duped priest. But it is also possible that the religious impulse to give is more easily able to push past the equilibrium of 2% and to ask individuals to make even deeper financial commitments.
Yet another factor preventing giving from crossing that 2% barrier might be media coverage of nonprofits. As I quipped in an article on the subject in the Chronicle last March, borrowing from Woody Allen, the coverage is generally pretty weak, and the portions are too small. That is, the media grants the sector relatively little attention, and when it does, it seems to suffer from what New York Times reporter David Cay Johnston has called a “Madonna-whore” complex: alternating between feel-good human interest stories and stories focused on nonprofit abuse. Stories that chronicle the difficult and important work many nonprofits are doing on a daily basis just don’t have the journalistic juice to make it into print. As the former nonprofit beat reporter for The New York Times, Stephanie Strom, told me, “A nonprofit just doing good isn’t news because everyone knows nonprofits are supposed to do good.” This might be changing, with a number of important online journalistic ventures out there, but I think there is a deep deficit in public knowledge about what nonprofits are doing, and this deficit could sap the public’s willingness to give more.
You also have to combine this media deficiency with the general conceptual muddle that has emerged from the blurring of private and public lines of funding for social welfare provision over the last half century. Not only do American givers and taxpayers have to contend with a federated system (to say nothing of international structures of governance), in which various jurisdictions take up differing responsibilities for addressing social ills and needs, but we also inhabit what political scientist Jacob Hacker has termed a “divided welfare state,” in which public and private lines of responsibility for social welfare are increasingly blurred. Obviously, there’s opportunity in this blurring. But as scholars such as Lester Salamon have pointed out, it can also represent a sort of existential threat to the nonprofit sector’s distinctive identity and mission, which in turn might be restricting Americans’ willingness to dig in and give more.
Finally, it’s worth pointing out another powerful strain in the American charitable tradition—the devaluation of monetary gifts themselves in favor of the “helping hand.” At the turn of the last century, even while scientific charity reformers were attempting to rationalize giving, they were also trying to preserve traditions of neighborly assistance. The fact that such assistance could not be easily quantified and rationally appraised was regarded as a mark of its worth. And in many senses, it was considered a higher form of giving than monetary contributions. That idea is still with us today; and it’s possible that by focusing too much on the 2% rate, we miss other forms of voluntarism that have had more variability and elasticity over the years.
Nell: Maribel, during the Gilded Age great wealth concentrated among a few brought large philanthropy (Carnegie, Rockefeller, etc.) but also contributed to a subsequent progressive period (as the pendulum swung back against that excessive wealth). Do you see parallels between the Gilded Age and today, and do you think we are heading for a more progressive period? And what role do you think philanthropy will or won’t play in that?
Maribel: Indeed, many late nineteenth- and early-twentieth century Americans looked at Andrew Carnegie’s and John D. Rockefeller’s wealth (and even their philanthropy) with some suspicion.
Reflecting these Americans’ anxieties, for example, the United States Congressional Commission on Industrial Relations called John D. Rockefeller Sr. and his son before it in 1915 to defend the independence of the Rockefeller Foundation. As many scholars have noted, the Rockefellers had established a division of economic research in 1914 within the one-year-old foundation; a few months later, the Ludlow massacre occurred at the Rockefellers’ Colorado Fuel and Iron Company, where women and children died when the state militia assaulted the strikers’ tent camp.
In response, the organization decided to organize a study on industrial relations under this new division and selected a close friend of John D. Rockefeller Jr. (William Lyon Mackenzie King) to direct it. From the perspective of the American public, it was hardly easy to trust that Gilded Age tycoons who had undermined the rights of workers in the process of accumulating their wealth would have the interests of the people in mind when they funded social scientific projects to study the American populace. From this perspective, the Rockefeller Foundation was the playpen of industrialists who had defined interests in society, and their policy-oriented social scientific research would be—far from disinterested—an extension of those interests.
And far from ignorant of Americans’ suspicions about Gilded Age levels of wealth, Andrew Carnegie himself addressed them head-on in The Gospel of Wealth (1889). Aware that Americans might find socialism an attractive alternative to capitalism, for example, he pitched philanthropy as the better form of wealth redistribution.
Today as then, Americans are confronting and discussing the great influence of leading philanthropists in public policymaking and of wealth inequality more broadly. However, I am not convinced that we are necessarily heading for a more progressive period.
I say this because I don’t see contemporary Americans expressing the same level of angst about elite philanthropy or about the broader topic of wealth concentration. Congress isn’t questioning leading philanthropists as it did the Rockefellers in the early twentieth century, nor do leading philanthropists seem threatened by Americans’ potential voting patterns, as Carnegie was.
One key explanation might be that these earlier Americans entertained a vastly different meaning of American democracy than their successors today. For them, American democracy promised economic opportunity (or rather, freedom from class divisions) and an equal voice over public concerns. Today, it seems that the general American public and their representatives in Congress aren’t as convinced of this definition of American democracy. With a narrower understanding of American democracy, it might simply be more difficult for contemporaries to see how wealth inequality and elite philanthropy in public policymaking are democratic threats.
Philanthropies committed to resurrecting a more progressive period might just need to focus on ways to revive this earlier (dare I say, more robust) definition of American democracy and help empower Americans to fight for it.
Photo Credit: HistPhil
I guess I am on a case study kick this week. I do think that actual examples of the paths other nonprofits followed in order to become more effective or more sustainable can be really helpful to other nonprofit leaders in the trenches. So in that spirit, I offer a case study of a small, startup nonprofit ready to grow their impact and their sustainability.
The thing I love about my job the most is that I get to work one-on-one with super smart people who are coming up with innovative solutions to making the world a better place. In particular, lately I’ve been lucky enough to work with some groups in the civic technology space, a really exciting emerging area where innovative technology solutions are used to make government, and ultimately democracy, more effective.
One of these groups, The Engaging News Project (ENP), is a startup nonprofit aimed at helping news organizations better meet their democratic and business goals in a digital age.
While ENP enjoyed success and the support of some key funders over the past two years, they were ready to move from the project phase to an established organization with sustainable funding and a long-term strategy for achieving impact on the digital news industry.
So ENP hired me to lead their strategic planning effort. With my guidance, ENP created an advisory group of staff and key stakeholders. I led the group to analyze the external environment in which ENP operates, develop their theory of change, define the audiences they want to target, and articulate the organization’s goals, objectives, and corresponding financial projections for the next three years. I also helped staff create a year 1 operational plan to help execute and monitor the strategic plan.
The end result was a clear 3-year strategic plan with accompanying financial model and an engaged and excited staff and group of advisors.
Because of their new strategic plan, ENP has focused their project development efforts, clearly defined where and with whom they want to work, and detailed their goals for the next three years.
They are now working to implement the strategic plan. They are identifying new funders to help support the growth of the organization, expanding their collaborative partners, creating a formal advisory board, and streamlining operations. ENP staff are excited about the new direction and are actively working to have a greater impact on the future of digital news.
As Talia Stroud, Director of the Engaging News Project put it,
As a new entity, we had been doing more of the day-to-day work and hadn’t taken the time to think about the bigger picture of where the Engaging News Project was headed and how to get there. Social Velocity helped us to chart a future direction, hone our messaging, and develop a clear plan for our organization. By working with us to figure out our targets, potential collaborators, and goals, Social Velocity helped us to systematically figure out a strong path forward. I can’t wait to see what we’ll be able to accomplish with these plans in place.
I’m excited to see where the Engaging News Project goes from here and the growing impact they will have on our democracy.
Photo Credit: Engaging News Project
This year on the blog I have been highlighting the Performance Imperative, a detailed definition of a high-performing nonprofit released by the Leap Ambassador community (of which I am a member) in March. Today I continue the ongoing blog series describing each of the 7 Pillars of the Performance Imperative with Pillar 3: Well-Designed and Implemented Programs and Strategies.
You can also read about Pillar 1: Courageous, Adaptive Leadership, and Pillar 2: Disciplined, People-Focused Nonprofit Management.
Pillar 3 describes being crystal clear about what your nonprofit exists to do, how you fit into the external environment, and how you develop and execute smart programs that result in your desired social change. This Pillar is essentially about creating and executing a Theory of Change.
The most important part, in my mind, of Pillar 3 is encouraging nonprofits to define the target population(s) they aim to serve. I have seen too many nonprofit organizations so focused on doing good that they don’t define who they are best positioned to serve and how that relates to who else may be serving them. Nonprofits must get clear about their place amid other services and interventions and, very specifically, who they are hoping to benefit or influence.
As always, you can read a larger description of Pillar 3 in the Performance Imperative (and I strongly encourage you to do so), but, in summary, a nonprofit that exhibits Well-Designed and Implemented Programs and Strategies:
- Is clear on the target population they serve.
- Bases the design of their programs on evidence-informed assumptions about how the organization’s activities can lead to the desired change (a “theory of change”).
- Designs programs with careful attention to the larger ecosystem in which they operate.
- Implements their programs in a consistently high-quality manner and views collecting and using data as part of implementing high-quality programs.
- Guards against the temptation to veer off course in search of numbers that look good in marketing or funder materials.
Because I think case studies are so critical to understanding what high performance really looks like in a nonprofit, I asked Sam Cobbs, CEO of First Place for Youth, to explain how he led his organization to become a national model for helping foster kids to thrive.
Here is his story:
First Place went through an intensive theory of change process in 2008 where we explored what impact we wanted to make with youth and what type of activities and interactions it would take to achieve that impact. In addition, because the activities and interactions needed to be intensive (and therefore costly), we made the decision to focus our services on the most vulnerable youth. Risk was measured using a risk assessment scale that took into account, among other factors:
- number of foster care placements
- years or days of homelessness
- job history
- education level, and
- the number and quality of support systems, including positive adult role models.
Based on these criteria, youth who had a higher risk factor score were given priority over youth with lower scores.
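To make the prioritization concrete, here is a minimal sketch of how a risk scale like the one described above could be scored and used to rank applicants. The factors mirror the list, but the weights, field names, and sample records are purely illustrative assumptions, not First Place for Youth’s actual instrument.

```python
# Hypothetical risk-prioritization sketch. Weights and field names are
# illustrative assumptions only; they are not First Place's real scale.

def risk_score(youth: dict) -> float:
    """Higher score = higher risk = higher priority for services."""
    score = 0.0
    score += 2.0 * youth["foster_placements"]           # number of placements
    score += 1.5 * youth["months_homeless"]             # time spent homeless
    score += 2.0 * (0 if youth["has_job_history"] else 1)
    score += 1.0 * max(0, 12 - youth["highest_grade"])  # education gap
    score -= 0.5 * youth["supportive_adults"]           # supports reduce risk
    return score

applicants = [
    {"name": "A", "foster_placements": 6, "months_homeless": 8,
     "has_job_history": False, "highest_grade": 10, "supportive_adults": 0},
    {"name": "B", "foster_placements": 1, "months_homeless": 0,
     "has_job_history": True, "highest_grade": 12, "supportive_adults": 3},
]

# As described above, higher-risk youth are given priority.
prioritized = sorted(applicants, key=risk_score, reverse=True)
print([y["name"] for y in prioritized])  # → ['A', 'B']
```

The design choice worth noting is that the scale turns a multi-dimensional judgment into a single sortable number, which is what makes consistent, transparent prioritization across intake workers possible.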
After establishing our target population, we began to collect data on what activities and interactions youth were having with the organization and started to analyze these trends. We were looking to understand what our population had in common so that we could understand who we were effective with and who we needed to create better interventions for.
Through this work we determined that we had 8 participant types at baseline and figured out which types we worked better with and what interventions were best used with these sub-populations. We then trained staff to deliver the interventions that were shown to work better with certain sub-populations.
We also began to understand that our sweet spot was kids who had multiple foster care placements, had experienced homelessness at some point, and had a high school diploma or GED. We also learned that we needed to get better with youth who had low risk factor scores: those who had extensive support networks, had never experienced homelessness, and were somewhat stable while in foster care. This may go against what we naturally think, that a person with extensive support would do better, but our data showed the opposite. We were also not very good at working with single parents who did not have a high school degree. In the coming year we are going to redo this process using algorithms to see if we get the same results and trends.
If we see that we are not doing well in an area, we research the best practices to deal with that area and direct resources and time to delivering that intervention. For example, because of the data we realized that a portion of our youth had very high trauma scores. Therefore we said we needed to become better at working with youth who have had complex trauma at high rates. We then created an initiative to ensure that everyone in the organization understood trauma and its impact on our youth and the best ways to address it. We will see at the end of this year if this investment in trauma-informed training has paid off by increasing our outcomes and impact with the youth that we serve.
We are consistently looking at the data to understand where we are doing well and where we need to improve. It’s the data, the data, the data.
Photo Credit: First Place for Youth
Note: As I mentioned earlier, I am taking a few weeks away from the blog to relax and reconnect with the world outside of social change. But I am leaving you in the incredibly capable hands of a rockstar set of guest bloggers. Next up is Kelly Born, program officer at the Hewlett Foundation working on their Madison Initiative, which focuses on reducing today’s politically polarized environment. Kelly also writes for the always thoughtful Hewlett Foundation blog. Here is her guest post…
In March of 2014, the William and Flora Hewlett Foundation launched a new initiative focused on US democracy reform, The Madison Initiative. The overarching goal is to “help create the conditions in which Congress and its members can deliberate, negotiate, and compromise in ways that work for more Americans.”
Our mandate is for a 3-year, exploratory initiative to assess whether and how the Foundation might be able to make a difference here. During this period, we are focused on three central questions:
- Are there solutions and approaches that are worth pursuing?
- Is there ample grantee capacity to pursue these ideas (or can we help build it)?
- Are there funding partners we can work with to make it happen?
In exploring this problem of congressional dysfunction, we realized early on that, unfortunately, there don’t appear to be any silver bullets that will solve it – it’s not as if campaign finance reform, nonpartisan redistricting, or increased voter turnout, taken on their own, would resolve our current democratic ills (even setting aside for the moment how hard it would be to actually achieve these changes!).
Regrettably, there is no clear consensus on what to do to improve the system, much less on how to do it. This may be, in part, why Inside Philanthropy named The Madison Initiative 2014’s Big Foundation Bet Most Likely to Fail. Given this, our view has been that current congressional dysfunction is occurring in a system of systems (and sub-systems) that are interacting in complicated ways.
Early on we decided to develop a systems map rather than a theory of change to guide our work (working in close partnership with the Center for Evaluation Innovation and Kumu, collaborations we’ve written a bit about here). Theories of change typically outline desired (social or environmental) outcomes and then map backwards, linearly, to the activities and inputs necessary to achieve those outcomes. Systems maps are perhaps better suited for more complex, uncertain environments like democracy reform, where cause-and-effect relationships can be entangled and mutually reinforcing, rather than unidirectional.
Version 1.0 of our map includes more than 35 variables we believe are contributing to the problem, distributed across three key domains: Congress, Campaigns and Elections, and Citizens. In light of this complexity, rather than making an initial set of big bets on a few key variables, we have instead spread a series of smaller bets within these systems to see where grantees might gain traction, and what this reveals about the system’s more confounding parts.
The benefits of this approach are many – in fact, I cannot imagine effectively tackling this particular problem any other way. But employing this spread betting approach also involves a few challenges for us at Hewlett, and for our partners and grantees. The trade-offs are worth considering:
- We are acknowledging and respecting complexity, but this can sow seeds of confusion for our partners. Our approach has the essential benefit of taking into account the systemic complexity and interdependency of what we are trying to help change. We are avoiding over-simplifying and thereby misconstruing our reality (a good thing). But we are exploring more than 35 variables (ranging from deteriorating bipartisan relationships to the proliferation of partisan news media), with more than 60 active grantees. This approach can be hard to manage, and harder still to convey to others – especially anyone accustomed to a more linear and readily understandable theory of change.
- Our course correcting helps us learn, but has a real impact on partners. As we diversify our investments to learn more about what works, we will continue to learn more about which efforts are having the most impact on congressional dysfunction, and which are less germane to the problem. As we do, we will necessarily converge (and double down) on a few core interventions, while discontinuing others. This will mean disappointing organizations that we respect and had supported at the outset – an inevitable byproduct of this approach, but unpleasant for all involved.
- Our evidence-based approach risks coming off as overly academic. We are determined to avoid investing in solutions where there is not solid evidence to support their viability vis-à-vis our goals. This helps us avoid squandering funds on interventions that won’t, ultimately, work. But this approach also runs the risk of coming across as standoffish, academic, and idiosyncratic in the eyes of a practitioner-driven field that in some instances may be pursuing work that is harder to (or has yet to be) substantiated by solid research.
We’ve certainly got our work cut out for us. But we deeply believe that the social sector shouldn’t shy away from complex problems. We also believe that the benefits of this approach far outweigh the costs. It enables broad-based learning, and truly forces us to constantly re-think the grants we are making. Building in these tough choices, rather than forging ahead with a pre-defined strategy, requires that we not just learn, but that we act on what we discover. And fast.
In short, while beset by a few real challenges, we’re convinced that an emergent path is the best path forward. Surely we will place some wrong bets along the way. But, as a favorite colleague of mine often says, “it’s not like we’re selling cigarettes to children.” All of our grantees are doing great work – ultimately it will (not so simply) be a question of which of these lines of work is most likely to improve Congress.
In 2017, we will go back to our Board of Directors to discuss whether and how The Madison Initiative’s work will continue. In the meantime, we would love to hear how other funders have approached emergent problems like this – and how nonprofits would advise us to manage these inherent challenges as we progress.
Note: As I mentioned earlier, I am taking a few weeks away from the blog to relax and reconnect with the world outside of social change. But I am leaving you in the incredibly capable hands of a rockstar set of guest bloggers. First up is David Henderson, Director of Analytics for Family Independence Initiative, a national nonprofit which leverages the power of information to illuminate and accelerate the initiative low-income families take to improve their lives. David also writes his own blog, Full Contact Philanthropy, which is amazing. Here is his guest post…
In early June I was invited to be on a data mining panel at the Stanford Social Innovation Review Data on Purpose conference. The conference was full of nonprofit executives interested in tapping the big data revolution for social good. Naturally, the panel moderator asked us panelists to weigh in on whether, and how, data was changing the social sector. Characteristically, I turned a feel-good question into a critique of the state of analytics in the social sector, which I’ve written about elsewhere and will expand on here.
Data is not changing the social sector. I would argue it’s not changing the world either. While it is very likely that data is changing your world, I do not believe data is changing the world.
For all the talk about how data is revolutionizing the world and how software is eating everyone’s lunch, the fact is that for the more than two billion people who have no lunch to eat (literally and figuratively), the impact of the data revolution is muted, if not nonexistent altogether. Changing the world indeed.
The corporate data revolution has largely been fueled by data exhaust. Data exhaust is comprised of the various digital breadcrumbs you and I leave all over the Internet but that we might not think about as data in a traditional sense. For example, companies like Facebook and Amazon don’t simply log data when you click “submit”, they track your every movement around the Internet, logging every click and clack, allowing unprecedented marketing optimization. All these additional metrics are data exhaust, as consumers are almost passively generating data marketers can capture and monetize for almost nothing.
On the social sector data conference circuit, countless data-wonk hopefuls mindlessly espouse all the incredible things nonprofits can do now that data acquisition costs have been driven almost to zero. This is nonsense, as the social sector has no such data exhaust analogue, which is why the social sector doesn’t truly have big data.
Nonprofits often work with populations with a number of barriers, which drives up the cost of data acquisition relative to for-profit counterparts. Just some of the data collection barriers nonprofits grapple with include working with populations with low levels of literacy or limited to no access to technology. How exactly is one going to generate digital exhaust without any digital possessions in the first place, or while working three jobs to support her family?
Obviously, you don’t. The barriers too many people face in this world are exactly why nonprofits are in the business of social change in the first place. But it is also why we are so poorly poised to capitalize on the alleged data ubiquity, as that revolution is not permeating class boundaries to the extent technology evangelists would have us believe.
Another reason why data is not changing the world, or rather, why the social sector is failing to change the world with data, is that by and large we simply are not investing in the necessary capacity to turn data into insights.
While a new “data for the social sector” company with an unfortunate misspelling of a common word seems to pop up every day, there are very few companies actually building the tools the sector needs to put data in to action. Meanwhile, our technological overlords in Silicon Valley are depressingly stuck on the assumption that innovation in the social sector means fundraising software. Sigh.
If we want to use data to change the world, we need to think beyond software tools and simple (if colorful) data visualizations. Nonprofits need to invest in building their own analytical capacity, both by hiring analysts and also by investing in the entire staff’s ability to be intelligent consumers of data analysis.
Illusion of Insight
Everyone loves the idea of being data driven, but very few organizations actually want to make the investment. My employer, the Family Independence Initiative (FII), did make that investment. In turn, FII is now able to not only run regressions and build decision tree models, but can continuously learn from its data, augmenting every level of the organization from Chief Executive to line staff.
That investment is not cheap. Worse yet, like any good analyst, I can be a major buzz-kill. Much of my time is spent explaining why a particular regression coefficient doesn’t necessarily mean we are super awesome. In fact, a good analyst can make you less sure of your social impact.
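The point about regression coefficients can be made concrete with a toy example. This is an illustrative sketch with made-up numbers, not FII’s data or models: a program effect can be real (a positive slope that, at this sample size, is statistically distinguishable from zero) while still explaining only a sliver of the variation in outcomes.

```python
import random

random.seed(0)

# Hypothetical data: program hours have a real but small effect on an
# outcome score; most of the variation comes from everything else (noise).
n = 5000
hours = [random.uniform(0, 40) for _ in range(n)]
outcome = [0.2 * h + random.gauss(50, 15) for h in hours]

# Ordinary least squares slope and R^2, computed by hand.
mean_h = sum(hours) / n
mean_y = sum(outcome) / n
cov = sum((h - mean_h) * (y - mean_y) for h, y in zip(hours, outcome))
var_h = sum((h - mean_h) ** 2 for h in hours)
slope = cov / var_h
intercept = mean_y - slope * mean_h

ss_tot = sum((y - mean_y) ** 2 for y in outcome)
ss_res = sum((y - (intercept + slope * h)) ** 2
             for h, y in zip(hours, outcome))
r_squared = 1 - ss_res / ss_tot

print(f"slope: {slope:.3f}")      # positive: the program "works"
print(f"R^2:   {r_squared:.4f}")  # but it explains only a few percent
                                  # of the variance in outcomes
```

A board member sees the positive, significant slope and hears “we are super awesome”; the analyst sees the R² and has to explain that the program accounts for a small share of what happens in participants’ lives. That gap is the buzz-kill.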
But facing the tough reality paves the way to real impact. We cannot collectively do more without exactingly quantifying how little we’ve accomplished. These are tough truths, and most nonprofits would rather assume the hypothesis of their greatness, leaving no room for data’s insights.
The Path Forward
Just because data is not changing the world does not mean data cannot change the world. I believe it can, which is why I do what I do. While by and large nonprofits fail to invest in rigorous analysis, organizations like GiveDirectly are leading by example, showing what is possible when facts take precedence over fundraising.
Ultimately, being data driven is less about statistical techniques and more about a relentless commitment to the truth. The truth is that data is not changing the world. But if we, as a sector, can elevate the truth above all else, then we might just be able to change the world after all.