The Stanford Social Innovation Review has a must-read article posted on their site. It looks at the issue of "accountability" and how funders are increasingly demanding reams of data to prove that the money they give to non-profits is well-spent. (ALERT--Little of this will surprise you, but most of it will depress you).
The article notes that while the current "data-driven" approach is grounded in good intentions, it comes with a number of problems:
- Every funder wants different data, presented on their forms and analyzed according to their requirements. For organizations that are working with a number of different funders, this can make data collection and analysis a full-time job. According to the article:
"Because funders have their own home-brewed definitions, methods, and
measures, many nonprofits spend a lot of time tailoring their reporting
to each funder’s tastes. 'Every single grantor we have has a different
evaluation tool or format or criteria they want us to use, and we
measure all of them,' says the executive director of a nonprofit in
Northern California who wanted to remain anonymous. 'Once you get it
down and can fill it out quickly and easily, it changes and they want
different information than they were asking for before. It takes time
away from what we actually get to do with people.'” (Sound familiar?)
- Funders generally don't want to fund the evaluation component. In most grants, the costs of collecting and analyzing data will either have to come from the administrative allotment (generally no more than 10% of the grant total and increasingly only 5%) or from some other source. At the same time, they want more data and more analysis, making the process even more expensive.
- Nobody can agree on the best methods and options for evaluation. Should you be measuring impact or gathering data to improve the process? Should you gather qualitative or quantitative data? Should you evaluate individual programs or the entire organization? And since you get what you measure, this confusion often shapes the performance you actually end up with. We've previously discussed the fact that when you focus on outcomes, you often don't get the quality process you want. But if you focus only on process, how do you know whether you achieved impact?
- "Summative" or impact evaluation is expensive and complex. The only way to TRULY measure whether or not your program or organization "made a difference" with your clientele is to have a control group that didn't receive your services and then track various data points for both groups. This is time-consuming, costly and most staff wouldn't have a clue how to set up such a process. It also raises ethical issues as many organizations would be very uncomfortable refusing services to a population just to have them in the control group.
- Social change works differently from profit. In the business world, profit is the easiest and quickest measure of "success." There is no analogous measure in the non-profit world; no single metric cuts across all programs to give us a good idea of how well an organization is operating. Further, as the article points out, social change works at "the speed of molasses. A program that looks like a failure at year 2 or 3 might be a raging success 20 years from now." But of course we often don't get a chance to find that out.
Probably the most disheartening take-away from the article is the experience of Rubicon Programs, which invested in a sophisticated data collection and management system so that it could provide funders with good data, cut any way they wanted.
Before Rubicon installed its data system, the organization faced a struggle familiar to many nonprofits, says Rick Aubry, the organization’s executive director. Each funder demanded different data, reported on different forms. The result was that many staff spent large chunks of their time generating the one-off data, while several directors spent large chunks of their time filling in the one-off forms. Despite all of the time and money that Rubicon invested in creating these reports, they contributed little to improving the program’s effectiveness. Funders seldom asked Rubicon to explore ways that it could improve its services. Instead, they often wanted to know only how Rubicon spent their money. These reports “added zero value to our decision making, and did not help us improve our services,” says Aubry.
With its powerful new evaluation system in place, Rubicon can now deliver data to its myriad funders in all kinds of permutations, with time and resources left over to collect the numbers that it wants for itself. Ironically, the system has uncovered a new problem: Most funders don’t actually care about the data.
“Everyone says they want to be data-driven in their decision making. But now we have all of this robust data, and it doesn’t seem to have any effect on funders’ decisions,” says Aubry. “From the viewpoint of financial sustainability, we are no better off than before.”
What really seems to drive funding--surprise!--is the whim of funders:
"Carolyn Roby has reached a similar conclusion from her perch as
the vice president of the Wells Fargo Foundation Minnesota: “Big
changes in funding strategy are not the result of unhappiness about the
impact of previous grantmaking. It’s just that someone gets a whim.
“Ten years ago in the Twin Cities, for example, employment was the
big issue,” says Roby. “Now, the sexy new thing is
ready-for-kindergarten programs. It’s not like our employment problems
have been solved, or that our employment programs were bad. It’s just
that pre-K is hot, hot, hot. It drives me nuts.”
The converse is also true – there are plenty of programs that are
not proven effective, but that still bask in the warm glow of federal
funding. DARE, which places police officers in classrooms to teach kids
about the hazards of drug use, and abstinence-only interventions for
teenage pregnancy have yet to show that they are better than – or even,
in some cases, as good as – other programs.8, 9 Yet DARE has been continuously funded for several decades, and abstinence-only programs show no signs of falling out of favor."
The article suggests that, if possible, most organizations should ditch summative evaluation. It's too difficult and complex to manage, and it discourages innovation and change:
“Innovations by their nature are adapting solutions to conditions of uncertainty,” he notes. “And so when you want people to be innovative, what you need is a process that allows you to try out things, alter things, change things, get feedback, and adapt.” Summative evaluations require the opposite of innovation: deliver the same program over and over again, regardless of whether conditions change.
The article also suggests that funders should partner with other funders to consolidate evaluation requirements, so that NPOs don't have to satisfy hundreds of different funder requests. They should also partner with NPOs to turn evaluation into an opportunity for learning rather than an occasion for judgment. The overall goal should be to create a culture of inquiry in which data is used on the front end to find successful models of change and design good programs, rather than on the back end to judge success or failure by means that are, at best, dubious.
For more information, you can download the article. You may also want to explore The Center For What Works, which is developing common evaluation frameworks for a wide variety of programs, as well as providing Benchmarking Toolkits to teach NPOs more effective evaluation processes.