Action research and evaluation on line
Session 11: The Snyder process
This is Session 11 of areol, action research and evaluation on line, a 14-week public course offered as a public service by Southern Cross University and the Southern Cross Institute of Action Research (SCIAR).
...in which an overview of the Snyder evaluation process is given, incorporating the three phases of process, outcome and short-cycle evaluation
Take a moment to consider -- how good is the feedback you receive on your performance? What sort of feedback would you find most useful? How might it be provided so that you would use it?... and then consider further -- how much do you know about what people think of you and what you do? What would you like to know about this? In what form would you prefer to receive it?
In this session
- A systems model
- An overview of the process
- In partial summary
- Archived resources
- Other reading
This, the second of the sessions on evaluation, takes up a particular evaluation process: the Snyder process. [1] I begin by describing the systems model which underlies parts of the process. I then give an overview of the three-phase process itself.
I should mention that this is by no means the only approach to evaluation. Various other approaches, especially those that are both qualitative and participative, are likely to resemble action research. Many action research processes can be pressed into service for evaluation purposes. In addition, there are many approaches beyond those that resemble action research.
I choose this approach because I use it often. Also, it lends itself to participative approaches, which I prefer. It includes a variety of approaches within itself: this can be a benefit. It can be integrated easily and effectively with strategic planning, and may even rescue that often-misused process from its less intelligent applications. Most of this will become more apparent as we proceed.
Above all, it's an approach for practitioners. I've had good results using it participatively to bring about change. The group being evaluated usually begin to change their behaviour by the third or fourth step into the first phase.
I invite you to notice the way it builds understanding as it proceeds. This, I think, is an important reason for its effectiveness. As people come to understand how they achieve their objectives, they become more effective at achieving them.
As a bonus, you'll find it easy enough to learn if you don't get lost in the detail.
A systems model
The Snyder process uses systems concepts. It assumes that the organisation or unit or project or program being evaluated can be viewed as a "system". Systems models treat a program (or whatever) as something which transforms inputs into outputs. Resources into achievements.
By monitoring the achievements you provide yourself with feedback. The feedback allows you to make better choices: about inputs to use; about activities; about outputs to pursue.
The Snyder process uses three levels of output. They take place over different time spans. The five elements are:

resources --> activities --> effects --> targets --> ideals
|         |              |                                 |
|  inputs |  processes   |            outputs              |
In more detail:
resources These are the inputs. They consist of anything (time, money, materials, etc.) consumed by the activities. (They also include things, like skills, which are not "consumed" but are required and not plentiful.)
activities These are the processes. They are the activities and operations carried out as part of the program (or whatever it is that is being evaluated). They include what people do, day by day and moment by moment.
effects (An abbreviation for "immediate effects".) These are the outcomes which result as the activities are carried out: the immediate results of the activities. There are intended and unintended effects.
targets (Sometimes called objectives.) These are the identified outcomes which the program pursues. (In the corporate literature they go by a variety of names: goals, objectives, and the like.)
ideals (Often called "vision".) This is the "better world" to which the activities are presumed to contribute in the long run. The ideals are future, and general. They are something to aim for rather than something that will actually be achieved.
For convenience, you can think of targets as being tied to the planning cycle of the program. This is often a year. For some purposes it may be more or less than this.
In comparison the immediate effects are short term. The ideals are very long term, and probably unachievable: the vision which guides and motivates the program.
The diagram provides a partial summary.
Why don't you try it out? Choose some important activity you engage in. Identify the other components, from resources through to ideals.
Or choose your present activity. You're reading this, perhaps on a computer screen. What are the resources and immediate effects? What is your best guess about the targets I think you might have achieved, and the vision of a better world which encourages me to provide this material?
This systems model is a categorisation. It helps to manage the data collected. It also provides a framework. It underlies each stage of the process, which is now described.
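To make the categorisation concrete, here is a minimal sketch of the content model as a plain Python structure. This is my own illustration, not part of the course: the element names come from the session text, but the structure and function are assumptions for the sake of the example.

```python
# The five elements of the Snyder content model, in order.
# (Illustrative only; the names come from the session text.)
SNYDER_ELEMENTS = ["resources", "activities", "effects", "targets", "ideals"]

# The systems view groups the chain into inputs, processes and outputs;
# note that there are three levels of output.
SYSTEMS_VIEW = {
    "inputs": ["resources"],
    "processes": ["activities"],
    "outputs": ["effects", "targets", "ideals"],
}

def classify(element):
    """Return the systems-view category for a given element."""
    for category, members in SYSTEMS_VIEW.items():
        if element in members:
            return category
    raise ValueError(f"unknown element: {element}")
```

For example, `classify("effects")`, `classify("targets")` and `classify("ideals")` all return `"outputs"`, reflecting the three levels of output in the diagram above.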
The Snyder process sets up feedback loops. These allow stakeholders to better understand and manage their activities and resource use. The stakeholders are thus better able to achieve worthwhile and desired goals.
An overview of the process
The Snyder process is designed as three phases. Each addresses a separate form of evaluation. Each builds on the preceding phase or phases.
Please note: In this and subsequent sessions, I address some important issues only very superficially. Entry, contracting, identifying stakeholders, and building relationships are given only a cursory treatment. However, these are at least as important as in other action research processes. Given the scepticism about evaluation in many quarters, they may well be more important. I assume you will fill in the gaps from earlier sessions.
In addition, the process as described below doesn't pay explicit attention to what the program or unit environment is doing. You may therefore want to include, as part of the "ideals", an examination of the changes going on in the relevant environment. (Many vision-seeking exercises include this, and may be pressed into service.)
In the following descriptions, I assume a participative approach to evaluation. The "evaluator" acts more as a process consultant, guiding the process used for evaluation. The actual evaluation is done by the participants.
This is done on the grounds that it is usually participants who have to turn the evaluation into change. Their understanding and commitment matter more than those of the "evaluator".
On occasion, you may well have reason to do an evaluation non-participatively. If so, the same overall process can be used.
On yet other occasions you may be working with participants who are, or can become, sufficiently skilled in the processes. You can then usefully involve them as co-researchers.
Each of the three phases of the Snyder process is a different form of evaluation seeking to answer different questions. The phases are described below.
A. The process evaluation component [2]
The goal is for the stakeholders to understand the process by which they achieve what they achieve. Unintended achievements, good or bad, are important.
The questions it seeks to answer are to do with the way the program or unit operates. What resources are consumed by what activities? Are these the activities which appear to contribute most to targets and vision? What unintended effects do they have?
The process evaluation tries to answer these questions by addressing the links between the elements. It operates by trying to identify:
- which activities consume which resources;
- which activities produce which immediate effects, intended and unintended;
- which immediate effects contribute to which future targets; which immediate effects hinder achieving those targets;
- which targets are likely to contribute to which ideals.
It does this, in general, by identifying and comparing adjacent elements -- for example, ideals and targets. It then analyses which targets and ideals are associated. Mismatches between targets and ideals then become the catalyst for changing targets, ideals, or both.
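The pairwise comparison just described can be pictured in a small sketch. This is a hypothetical illustration of mine, not the course's procedure; the function names and the idea of representing associations as a set of pairs are assumptions made for the example.

```python
def adjacent_pairs(elements):
    """Process evaluation examines the links between adjacent elements:
    which resources feed which activities, which activities produce
    which effects, and so on up the chain."""
    return list(zip(elements, elements[1:]))

def find_mismatches(pairs, linked):
    """Return the adjacent pairs for which no association was found.
    `linked` is the set of (earlier, later) pairs the stakeholders judge
    to be connected; any mismatch becomes a catalyst for change."""
    return [pair for pair in pairs if pair not in linked]
```

With the five elements, `adjacent_pairs` yields the four links in the chain; if, say, no target can be associated with the ideals, `find_mismatches` surfaces the `("targets", "ideals")` link for attention.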
It is possible to do only a process evaluation, and then switch to action planning. However, I think the Snyder process is most valuable when all three phases are included.
In short, the process evaluation phase seeks to understand how the unit or program operates. It does this by examining the links between the elements:

resources --> activities --> effects --> targets --> ideals
          ^             ^           ^            ^

B. The outcome evaluation component
The goal of this phase is to develop performance indicators. These allow the achievements of the program or unit to be monitored.
Outcome evaluation often seeks to answer the question: Is this program achieving its goals? Or, Is this program better than program X?
My personal view is that these questions are seldom answerable. Therefore, in the Snyder process, this phase seeks instead to determine how performance can be monitored. The performance indicators developed in this phase can then be used to set up feedback loops.
In general, the Snyder process does this by finding present indicators of future targets and ideals. Ideals are not evaluable in any real sense. Targets can be evaluated when you get there; but that may be too late. Indicators are here in the present.
The process for identifying indicators is simple enough. You start with the ideals, and follow them back until you find something present that you can evaluate.
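That back-tracking can be sketched as a short loop. Again this is my own hypothetical illustration, not the course's method; the function name and the `observable_now` test are assumptions for the example.

```python
CHAIN = ["resources", "activities", "effects", "targets", "ideals"]

def first_evaluable(chain, observable_now):
    """Walk back from the ideals toward the resources, returning the
    first element that can be observed in the present -- a candidate
    place to look for performance indicators."""
    for element in reversed(chain):
        if observable_now(element):
            return element
    return None
```

Since ideals and targets lie in the future, the walk typically stops at the immediate effects -- which is why the indicators sample effects and resource use rather than targets.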
To anticipate the topic of a following session... The criteria I will be suggesting for indicators are that they be an adequate sample of:
- the ideals,
- resource use,
- intended immediate effects, and
- unintended immediate effects.
As far as possible, I suggest you don't use performance indicators of activities. That assumes that there is one right way to do something. It may well be that different people can achieve outcomes in different ways.
Activities are often easily measured. Unfortunately, the activities which work best for some people may not be the best way for others to achieve the outcomes. Activity-based indicators may constrain people, and ignore individual differences.
In short, the outcome evaluation phase builds on the understanding achieved from the process evaluation phase. It uses this understanding to identify performance indicators which can serve as proxies for future targets and ideals:
resources <-- activities <-- effects <-- targets <-- ideals
C. The short-cycle evaluation component
The goal of this component is to create a self-improving system. By helping stakeholders set up feedback loops which indicate ideals, it makes it easier for them to monitor their performance and steer the program or unit towards the ideals.
Short-cycle evaluation seeks to answer the questions, on an ongoing basis: How are we doing? And what could we be doing differently?
It does this by setting up feedback loops which provide regular and relevant information on performance. This feedback is given directly to those whose performance is being indicated. They control it, and are encouraged to modify it if it doesn't work.
In summary, the short-cycle component builds on both process and outcomes components. It sets up feedback loops using performance indicators that can be used to guide activities.
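One way to picture the short-cycle component is as a repeating loop: act, read the indicators, and feed the reading back so that activities can be adjusted. This is an illustrative sketch of mine, not the course's procedure; all names are assumptions for the example.

```python
def short_cycle(act, read_indicators, adjust, cycles):
    """Run a simple feedback loop: carry out the activities, read the
    performance indicators, and return the reading to those whose
    performance it indicates so they can adjust what they do."""
    readings = []
    for _ in range(cycles):
        act()
        reading = read_indicators()
        readings.append(reading)
        adjust(reading)  # feedback goes to the people doing the work
    return readings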
I said there were three phases. And there are. Each addresses a different form of evaluation, threading them together to produce better understanding and better possibilities for monitoring and improvement.
There is, in addition, an important fourth component.
D. The meta-evaluation component
It isn't a fourth phase -- rather, it accompanies and informs all three phases. Usually, it is called meta-evaluation. It is provided by carrying out the whole process within an action research framework of:

intend --> act --> review
In other words, each step is preceded (the "intend"), accompanied (the "act") and followed (the "review") by critical reflection.
In addition, the indicators include some which monitor the ongoing short-cycle evaluation itself. From time to time after the evaluation is completed, the program team or unit is encouraged to review the extent to which:
- their ideals and targets are still relevant;
- their indicators are working effectively; and
- they are making effective use of the feedback they receive.
In partial summary
This session has presented an overview of the content model which guides the Snyder process, and the three phases of the Snyder process. The content model identifies five elements which are threaded together:

resources --> activities --> effects --> targets --> ideals
The three phases are:
- process evaluation, which seeks to understand the links between these elements;
- outcome evaluation, which uses this understanding to identify performance indicators;
- short-cycle evaluation, which uses these indicators to set up feedback to monitor ongoing performance.
All of this is accompanied, before and after the evaluation, by regular critical review in the style of action research.
1. I learned the essentials of this process from a former colleague, Wes Snyder, hence its name. It has been much modified over the years, but I think he still approves of its essential nature.

2. There are difficulties with terminology. Some writers equate process evaluation with formative evaluation. Others make a distinction. In any event, "formative evaluation" is the term most often used. I persist with the term "process evaluation" because to my mind, that is what it is. It seeks to help stakeholders understand the process by which they achieve (or don't achieve) what they set out to achieve.

Similarly, some writers equate outcome evaluation and summative evaluation. Some don't. I use the term outcome evaluation because it seeks to monitor outcomes, to the extent that that's possible.
The archived file qualeval has already been mentioned. It uses the Snyder process to illustrate some issues related to qualitative research and evaluation.
Two files provide descriptions of the Snyder process: snyder is a fairly detailed description, with a brief rationale given for each of the major steps. The file snyder-b is a briefer description of the same process.
The URLs are:

http://www.uq.net.au/action_research/arp/snyder.html
ftp://ftp.scu.edu.au/www/arr/snyder.txt
http://www.uq.net.au/action_research/arp/snyder-b.html
ftp://ftp.scu.edu.au/www/arr/snyder-b.txt
Check the bibliographies mentioned in the previous session. The entries in the file biblio are mostly annotated, helping you choose something relevant to your interests.
Apart from the archived files mentioned above, there are no easily-accessed works specifically on the Snyder process. Useful and readable overviews of qualitative evaluation generally are provided by Michael Patton's work. His best known work is probably the following:

Patton, M.Q. (1997) Utilisation-focussed evaluation, 3rd edition. Beverly Hills, Ca.: Sage.
There is a review article which provides an interesting account of the changes in evaluation practice to meet the demands of a complex world:

Cook, T.D. and Shadish, W.R. (1986) Program evaluation: the worldly science. Annual Review of Psychology, 37, 193-232.
There are reflections in the Snyder process of a number of other aspects of current organisational practice. There are similarities to strategic planning:

Kaufman, R., and Herman, J. (1991) Strategic planning in education: rethinking, restructuring, revitalising. Lancaster, Pa.: Technomic.
Kaufman uses a content model which, despite its different labels, is very similar to the five-element Snyder model. He uses the labels inputs (raw materials), processes ("how-to-do-its"), products (en-route results), outputs (end-product deliverables) and outcomes (the effects of the outputs).
Short-cycle evaluation allows a program or unit to become a self-improving system. This has similarities to the notion of continuous improvement (in Japanese, "kaizen") in Total Quality Management:

Imai, M. (1986) Kaizen: the key to Japan's competitive success. New York: McGraw-Hill.
and to the notion of a learning organisation:

Senge, P. (1990) The fifth discipline: the art and practice of the learning organisation. New York: Doubleday.
(One is led to suspect that the best organisations and programs have often behaved in certain ways. These ways are re-identified and re-labelled from time to time to create a new fashion.)
Activities

A thought experiment
On the archive (and emailed separately if you are doing this as a one-semester program) is a file darts. It invites you to imagine a game of darts. That's the suggested thought experiment for this session -- to imagine playing darts blindfolded. Then compare your own work to that experience.
The "darts" document can be found athttp://www.uq.net.au/action_research/arp/darts.html and ftp://ftp.scu.edu.au/www/arr/darts.txt
An individual activity
If you memorise two aspects of the Snyder process, you give yourself a tool you can use to understand how and why you do what you do. First, the content model:

resources --> activities --> effects --> targets --> ideals
Second, the three-phase overview of the process:
- process evaluation: understanding the links between the elements
- outcome evaluation: identifying performance indicators which sample resource use, and intended and unintended immediate effects
- short-cycle evaluation: developing ways of providing feedback on those indicators.
I suggest you compile a diary at the end of a typical day, listing all of the activities you took part in. Then use the Snyder content model and process to think about those activities, their costs, and their effects.
For your learning group
Over the next several sessions you will find your learning group activities more helpful, I think, if you have something you can use as an evaluation project.
In these sessions, I'll invite you to imagine carrying out the relevant parts of a Snyder evaluation with the stakeholders in that project. This will enable you to check that you can turn the step-by-step "recipes" into actual behaviour.
I suggest you use a group session to help each other choose some program or unit you know well enough to evaluate it. This can be a vehicle for these imaginary evaluations.
Let's practise action research on areol. What ideas do you have for improving this session? What didn't you understand? What examples and resources can you provide from your own experience? What else? If you are subscribed to the email version, send your comments to the discussion list. Otherwise, send them to Bob Dick.
This session has presented an overview of the Snyder process. The next session will develop it further by examining some pieces of it in a little more detail. See you then. -- Bob
Copyright © Bob Dick 2002. May be copied provided it is not included in material sold at a profit, and this and the following notice are shown.
This document may be cited as follows:
Dick, B. (2002) The Snyder evaluation process. Session 11 of Areol - action research and evaluation on line.
Maintained by Bob Dick; this version 12.02w; last revised 20020712
version of this file is available at