Question 3: Not surprisingly, some methodological choices prompt much discussion and debate among evaluators. For example, Mark and Henry (2006) argue that policymakers are most interested in and most likely to use evaluation results that establish the causal connections between a program or policy and an outcome. Others argue that policymakers don’t really use such results, that they’re too busy with political pressures and have their own opinions about what works. These evaluators argue that evaluation is most likely to be used at the program level where program managers and staff are often more interested in descriptive information. Which type of designs (causal or descriptive) do you think are more likely to be used? By whom? Why?
Before getting to the causal versus descriptive design question, I want to address the level at which evaluations are most likely to be used. The question states: “evaluation is most likely to be used at the program level….” I believe that change is most readily made at the local level. My parents, especially my mother, are extremely involved in their local community (Corvallis, OR). They are critical players in various organizations there, including the Corvallis-Uzzhorod Sister City Association and the Benton County Historical Society. A couple of past stake presidents (local leaders in the Mormon community) have told my mom she should run for mayor (she’s declined, saying she knows better than they do what being mayor entails, and doesn’t want to take that on!). Because of the influence my mom has had locally, I think that she is the most powerful woman I know. Perhaps Michelle Obama is affecting more people, but my mother’s influence is direct and vital. From my parents, I’ve learned that you can make a difference in the world, but you will be most successful in making that difference if you work within your local community.
With my preference for change at the local level, do I necessarily think descriptive designs are best? I don’t think the one implies the other (although they are connected in the book’s question). Nevertheless, I do think descriptive designs are genuinely helpful, especially in formative evaluations. I can see why managers of the projects themselves, operating at the ground or local level, would want description, so that they can learn even more about their program.
Yet it also makes sense why others would be interested in causal designs. These do seem to be the designs policymakers care about. They don’t have the time (or the interest) to read detailed descriptions of specific projects. But if they can learn about causes, they may be able to set policies that affect those causal interactions.
Even so, my pragmatic side agrees that “policymakers…[are] too busy with political pressures and have their own opinions about what works.” I really would like to know: how frequently do politicians and policymakers change their opinions because of causal relationships identified by an evaluation? And how often do they simply go with what their constituencies or their own preconceptions dictate?
Fitzpatrick, J., Sanders, J., & Worthen, B. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). New Jersey: Pearson Education.