Evaluation as Collaborative Inquiry

Craig Dykstra, Vice President for Religion

This article comes from Initiatives in Religion, a publication of the Lilly Endowment, Inc. Each year the Endowment makes millions of dollars in grants to religion. In this article, the head of the Religion Division, Craig Dykstra, explains a more productive understanding of the notion of evaluation. He introduces the article with an explanation of his immediate interests and then proceeds to explain this idea that evaluation can be collaborative inquiry.


Before any grant proposal is sent by Religion Division staff to the Officers and Board with a request for approval, potential grantees are asked the following question: "What are your plans for evaluation in this project?"

Usually, little advance thought has been given to this matter. Frequently, our question prompts only a cursory answer, such as: "The results of this research will be published as a book and be evaluated by the scholarly community through reviews," or "We will hand out questionnaires at the conclusion of the conference to solicit the reactions of participants." To be honest about the matter, we program officers too often let such answers slip by and do little more to encourage further thought about the evaluation issue. That's a mistake on our part, because we miss important opportunities. The more attention we can all pay to evaluation, and to planning for it at the outset, the more likely it is both that the various projects we fund will bear their potential good fruits, and that the various projects will be related to one another in mutually supportive and stimulating ways.

Grades and scores

The word "evaluation" has bad connotations. It is an anxiety-producing term that often lands us back in a geometry classroom or conjures up memories of the day we received our SAT results. "Evaluation" unavoidably connotes, at some primordial level, grades and scores. We are put on a scale and compared to others. And whether we received good grades or poor ones or somewhere in between, we all want to avoid as much of that kind of thing as we can. It is not just anxiety about how we measure up that makes us reluctant to pursue evaluation, however. Also involved is something to do with the reduction of our sense of ourselves and the meaning of our efforts and productions that is intrinsic to measurement according to some scale. "C", or "B+" or even "A" simply cannot capture the significance of a paper I have invested myself in crafting. We all at least implicitly demand a fuller, more human response, even if we never ask for it out loud or cannot seem to get one when we do.

Why don't we ask? Perhaps because we know that the very idea of evaluation on a measured scale is done for other purposes than our own development. The purpose of that kind of evaluation is not deepening self-knowledge in the midst of one's activities.

Its purpose is classification, sorting. The agent of such evaluation is not the self, but others who will use the results for their own purposes. We don't ask for a full response, then, because we know that the kind of response we need and want is not available in such circumstances. We recognize that we are not (and are not supposed to be) active agents in this kind of evaluation, but rather only objects of it.

If this is what "evaluation" means to us (if this is the only kind of experience the word "evaluation" connotes for us), our resistance to it is no surprise. Nor is it any wonder that we would have difficulty answering the question, "What are your plans for evaluation in this project?" If we do not see ourselves as the responsible agents in the evaluation of our own work, but only objects of it, then planning such evaluation must be someone else's responsibility to do. (The foundation's, perhaps?)

Coaches and editors

I enjoy playing golf from time to time. A golfer-friend suggested I read Harvey Penick's Little Red Book. Subtitled "Lessons and Teachings from a Lifetime in Golf," the book is a published version of the little red notebook that one of the finest teachers of the game kept over the span of about 60 years. Its contents are time-tested hints about how to do things right on a golf course, how to practice the game so you can improve, and some observations on the joys and satisfactions of playing. It's a fine and enjoyable book. You can tell from reading it that Harvey Penick is a great teacher of golf. In fact, you can tell from reading it what great teaching consists of. And you can see why some of this era's best golfers (Tom Kite, Kathy Whitworth, Betsy Rawls, and Ben Crenshaw) would return to Austin, Texas to consult with Penick over and over again whenever they wanted some really good evaluation of their games.

Superior athletes constantly evaluate their own performance and search for ways to improve it. That is why they easily recognize their need for good coaches/teachers/evaluators: other people who can help them see and feel what they are doing, people who can help them understand what's going on and figure out how to do it better. The interesting thing about the really good athletes is that they regularly seek such help. They go get it. They ask for it. They even pay for it. (No wonder! They can afford it, right? But the stakes are high for them, so they also know they cannot afford not to.)

Artists (musicians, writers, filmmakers, dancers) also seek such evaluation. Whether the evaluators are called teachers, editors, or coaches, they make it possible for artistic creators to pull out of themselves the highest craft and skill they can muster. Somehow, it seems, the art of evaluation is an essential ingredient in the human activity of creation.

Religion Division evaluation program

Maybe we at the Endowment would generate a better response if we would change our question about evaluation plans to something like, "What plans have you made for building in self-reflection on what you have learned and for getting good coaching as you conduct your project?"

Getting good coaching/evaluation has been important to the Endowment itself for a long time. D. Susan Wisely has ably directed the Endowment's evaluation efforts for 20 years. All that time, she has been helping the staff and many of our grantees to be active agents in garnering the best possible help they can find to reflect on their work, so as to improve its quality and impact. Three years ago, the Religion Division launched a still more systematic evaluation program through a grant to Christian Theological Seminary. The program's project director, Carol Johnston, an ordained Presbyterian minister and a theologian who did her doctoral work at Claremont, is providing additional much-needed help to Religion Division staff and a wide variety of grantees as they plan and implement several kinds of evaluation.

Some evaluations currently underway are programmatic evaluations. In these, the Endowment supports the efforts of coordinated teams of thoughtful people who explore a cluster of grants within a program area; work with project directors of those grants to identify key learnings; relate them to one another; assess the cumulative whole in terms of its significance and impact; and ask what clues may be found in these projects (and in what's missing) that can prompt future work. Such evaluation teams are coaches primarily to the Religion Division. Their help is directed to assessing the Division's past and current grantmaking aims, policies and activities. Through such evaluations, we try to learn from what we and our grantees together have accomplished and to see better how to build on that through future funding.

Other evaluations now going on have a more limited focus. Concerned with specific projects, many of these provide grantees with mid-course guidance. In most cases, such evaluations are planned from the beginning of a project. (It is usually too late to plan and implement concurrent, or formative, evaluations after a grant is made and a two- or three-year project is underway.) Grantees in most such cases request funds for evaluation as part of their proposals, identify evaluator/coaches early on, and build mutually agreed upon evaluation designs. From the beginning, they engage in systematic collection of data that evaluators need, so that at the appropriate times they are ready to engage in the evaluation activities they desire (observations, meetings, consultations, reports, etc.).

Still other evaluations are done at the conclusion of a specific project. In these cases, the grantee and/or the Endowment are eager to synthesize carefully what has been learned in a particular effort, so that the insights generated can be of benefit to work already going on elsewhere or that is being considered for the future. While such evaluation work is usually harder to do (because everything has to be done at once and some information that would have been helpful is by then irretrievable), it is sometimes worthwhile. Important questions emerge at the end of some projects that could not have been anticipated; evaluations designed to assess impact are in some cases best planned and carried out when a project is near completion or even later.

Evaluation as purposeful inquiry

As you can see from these examples, evaluation (to our way of thinking, in any case) has little to do with grading, scoring, and classifying and everything to do with gaining better insight into what one is doing and with finding ways to improve it or extend it in worthwhile directions. Each of us can (and should) do a lot of this sort of evaluation ourselves as a regular part of everyday self-critical reflection in our own work. Reflective practitioners of all kinds constantly build self-evaluation into the very warp and woof of their endeavors. But there are limits to what we by ourselves can see and figure out. Like the best athletes and artists, we all need coaches and teachers who give us honest assessments (including those that make us uncomfortable) and helpful suggestions.

The best way to get help of this kind is to take initiative in garnering it. To be active agents in (rather than passive objects of) evaluation involves

(1) building into your work regularly-scheduled time for reflection on what you are learning;

(2) discerning when you want and need the skillful help of others;

(3) thinking through as best you can what kind of help you need;

(4) finding people you trust to give you what you need in a way that you can use it; and

(5) making yourself available and open to receive the help you've asked for.

Evaluation understood this way is a form of collaborative inquiry. It is purposeful inquiry into the structure, processes, and substance of one's own work. Real pleasures can be had in such inquiry. There is the pleasure that comes from more fully finding one's way. There is the pleasure that comes from discovering how to do things better. And there is the pleasure that comes from broadening the company of people who know and care about what you are doing. (Indeed, such inquiry often surfaces and generates new partners in the work you are doing.)

The Religion Division has gained enormously from evaluations we have commissioned. We want to support you in your own such inquiries as well.