Action research and evaluation on line

Session 7: Data collection



This is Session 7 of areol, action research and evaluation on line, a 14-week public course offered as a public service by Southern Cross University and the Southern Cross Institute of Action Research (SCIAR).  In this session three styles of process are described, and delphi is used as a process to illustrate some principles of combined data collection and interpretation.


Disagreements may lead the way to a deeper understanding.  But if people respond to disagreement with defensiveness, it leads instead to a distortion of information.  As you read this session, I invite you to consider what needs to be done so that disagreement does lead to a pursuit of the truth, not a pursuit of victory.

(yes, I realise the concept of "truth" is problematic)

(for that matter, so is the concept of "victory")


In this session

This is the first of a number of sessions which examine data collection and interpretation.  It follows on from the previous session, which focused on action research as research.  To this, it adds some concern for action research as action.

In this session I'll describe some ways in which you can collect and interpret data within an action research cycle.  I'll also begin to address what you can do to ensure the quality of your data and the appropriateness of your interpretations.

This session will analyse delphi as a data-collection process. Following sessions will illustrate the approach by describing some specific methods you can use to collect and interpret data.

And let me say once again:  this is not the way to do action research;  it is one way.


(Three contextual comments:

(First, I've chosen delphi for this analysis because I think it illustrates some of the key points well.  I'm not offering it as the preferred method for data collection and interpretation, though it does this well for some purposes.

(Second, throughout areol I suggest integrating data collection and interpretation within each cycle, including within the small cycles within larger cycles.  This may be more a practitioner view than an academic one, though it can serve academic purposes, especially by increasing rigour.  As elsewhere, I assume you won't take this as "truth" but will make up your own mind.

(Third, one of the strengths of action research is that it allows you to bring rigour to highly participative processes.  While I have not given much attention in this session to issues of participation, you are invited to think about it for yourself.  For instance, how could you use delphi in a highly participative way?)

Think back to earlier sessions.  You will recall I suggested that within each cycle you seek out multiple data sources.  I also suggested combining data collection and interpretation.

Whenever you have two sources of data or interpretation you can use a dialectic process to refine your understanding.


Styles of process

First, some brief definitions:

By adversarial processes I mean processes in which it is assumed that one person's gain is another person's loss.  Such processes are commonly called "win/lose".

A debate is an example of an adversarial process.  Each debater tries to argue for a point of view.  It is usual for one debater to be chosen as the "best" or the "winner".

I treat compromise (as usually practised) as a subset of this: partial win / partial win.  "I'll let you win this piece if you let me win that piece."

By consensual processes I mean processes which first identify agreement, and then build on that agreement.  They are win/win processes which can give simple and effective decisions if the "wins" are easily enough identified.

Some visioning or ideal-seeking exercises, where people are asked to develop a shared vision, are consensual processes.  If the vision is set far enough in the future there is usually quite high agreement, especially if the vision isn't too specific.  People are then willing to devote effort to achieving some of that vision.

By dialectical processes I mean processes which craft agreement out of disagreement.

Dialectical processes are win/win processes.  But the wins are achieved only after the disagreements have been identified and resolved.  The disagreements often play an important role in identifying misunderstandings.

The goals of dialectical processes are information exchange and understanding.  People improve their understanding when they engage vigorously with the issues.  People educate each other within a process of cooperative enquiry.

In other words, dialectical processes are processes of mutual education.  They are more easily described than achieved.  A climate of mutual education is achieved only when people are willing to respect each other, and try to understand each other.  Some of the alternative dispute resolution processes for conflict management are dialectic.

Let me add that I don't really believe it is as black and white as these brief definitions suggest.  Most processes are at least a bit of everything.  The boundaries are fuzzy.  But it's a useful set of labels for talking about data collection and interpretation.  The type of process you use can effect both change and understanding.

Now, in more detail ... 


Adversarial processes

It is difficult for adversarial processes to serve either action or research.

They tend to hinder research.  If adversarial processes are used, the aim is to win.  People are likely to tell selective truths, or perhaps even plausible lies.  In the absence of accurate and complete information it is harder to gain a good understanding.  It is therefore also harder to make effective decisions.

Adversarial processes may also hinder action.  The action is probably based on biased information.  There are losers as well as winners.  The losers are not likely to be highly committed to the decisions taken.

(Losers may go along with the decisions, especially in a culture like mine which depends heavily on adversarial processes.  In this culture we assume that being allowed to decide binds us to the decision.  But as losers we won't exactly be distressed if the plans don't work. Some of us may even throw a spanner into the works while no-one is looking.)


Consensual processes

Consensual processes work best when there is already agreement, especially when that agreement is not fully recognised.  It is then that its emergence has pleasant surprise value.

These processes can be used to identify and record the unexpected agreement (unexpected by the participants, that is).  The agreement can then become the foundation for further planning or decision-making.

When this prior agreement exists but is unrecognised, consensual processes may be an efficient way of surfacing it. Their most common application is to define some future vision or ideals.

The vision then gives people a common purpose.  It also serves as a criterion which can help in choosing between detailed options.  Especially when previously unrecognised, it can act as a spur to collective action.

When consensus works, it is an easy and efficient way to generate decisions.  If agreement with the decisions is high, so is commitment likely to be.  Consensus helps action; it may also help research.

(If consensus is superficial, though, the information and decisions are likely to be superficial too.)  If consensus is highly valued but there are hidden agendas, a superficial or false consensus is quite likely.

Consensual processes are most effective when there is at least tacit agreement about those issues which are most salient. 

Under some circumstances, consensual processes can be counter-productive.  If there are disagreements which are important, people may nevertheless be unwilling to raise them for fear of undermining the consensus, even though it is really a false consensus. [1]  This is a particular risk when consensus is highly valued, relationships are close, and conformity is high.


Dialectic processes

Dialectic processes generate agreement from disagreement.  They do this by pursuing three goals:

  • honest information, directly communicated
  • striving to understand what others say
  • using disagreement to identify where more information is needed.

In short, dialectic processes combine some of the features of the other two types of process.  As with adversarial methods, disagreement is likely to be evident.  As with consensual methods, the intention is to reach a mutually agreeable outcome.

At this point, an examination of a conventional mail delphi will illustrate dialectic and its main features.  We can then return to a summary of the potential of dialectic for action and research.



Delphi

Delphi is most often used as a forecasting technique. [2]  It can be used to create shared judgment and understanding among a panel of experts.

An effective and typical mail delphi might proceed through steps something like these:

  1. A researcher decides on a research question which cannot easily be answered because the relevant information is widely scattered.
    For example: "When will conversational voice recognition be built into most personal computers?" (Voice recognition has improved in recent years.  I doubt that it can yet truly be called "conversational", though it is getting close.) [3]

  2. The researcher assembles a panel of experts from the fields which are relevant to the research question.
    A panel for our illustrative research question would probably include, among others: computer hardware experts, computer software experts, artificial intelligence researchers, linguists, phoneticians, and so on.
  3. The panel members are briefed on the purpose of the exercise.  They are asked to prepare their answer to the question, and forward it to the researcher.
    (Delphi often uses numerical estimates.  That makes it easier for the researcher to collate and communicate the results, especially in the early rounds.)
  4. The researcher collates the results, and mails them out to panel members.
    For example, on this first round it would be enough for the researcher to report some measure of the average (probably the median in this instance), and some measure of the spread (probably the range or interquartile range). [4]

  5. On the second round, the panel members are asked to adjust their estimate towards the average, or provide reasons to support their estimate.  Again, they send this material to the researcher.
    This material is distributed, usually anonymously, to all panel members.
  6. This can continue until agreement is reached.  More often, a pre-set number of rounds are held, usually three.  (I think it makes more sense to continue until agreement is reached.  But it is easier to write a funding proposal for a set number of rounds.)
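To make the researcher's collation task in step 4 concrete, here is a minimal sketch in Python.  The nine-member panel and its estimates (in "years from now") are invented purely for illustration:

```python
from statistics import median, quantiles

# Hypothetical first-round estimates from a nine-member panel.
# The numbers are made up for illustration.
estimates = [3, 4, 5, 5, 6, 8, 10, 15, 25]

q1, q2, q3 = quantiles(estimates, n=4)   # quartiles of the estimates

print("median (average):", median(estimates))
print("interquartile range (spread):", q3 - q1)
print("range (spread):", max(estimates) - min(estimates))
```

Reporting the median and interquartile range rather than the mean and standard deviation follows the reasoning in note 4 below.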

Imagine yourself taking part as a member of such a panel of experts. You've been chosen because you know your own field very well.  This field is relevant to the research question.  However, it's unlikely that you have a deep familiarity with all of the relevant fields.

Your initial estimate of course is based on the information you have. That is, it will weight highly the evidence you know best.  Other panel members have different expertise.  It's natural, then, that they will reach different estimates.

For example, if you are a computer person who knows little of linguistics, you will probably give an optimistic estimate. Hardware and software are developing very rapidly.  Computers today can manage easily tasks that would have been beyond much larger computers only a decade ago.  With some training for the software to recognise your voice, current software already does quite well.

If you are a linguist you know how problematic speech recognition is.  For instance, you understand that the differences from person to person are greater than non-experts recognise.  You know the extent to which people take context into account in deciphering speech. This is something people do far better than computers.  So your estimate is more pessimistic.

When you discover that many of the panel members have come up with very different estimates, you may be motivated to:

  • present the evidence you have which supports your estimate
  • find out why other panel members gave such different estimates.

The result is that you educate the other panelists.  You identify, from your field of expertise, the most relevant information.  This is sent to them.

In other words, you provide selective information.  As it is the information which supports your position, it may well be information which others do not know.  At the same time, you receive from them a lot of information.  Some of it is most probably new for you.  They educate you.

The usual outcome is that the estimates converge over time: they move towards agreement from round to round as more information is shared. [5]  In this way delphi provides mutual education.
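The convergence itself can be mimicked with a toy model.  This is purely illustrative -- real panelists respond to evidence and argument, not to a mechanical update rule -- but it shows how repeated adjustment towards the panel's average narrows the spread.  All numbers are invented:

```python
from statistics import median

# Toy model: each round, every panelist moves half-way towards the
# panel median (a crude stand-in for "adjust your estimate or defend it").
estimates = [3.0, 4.0, 6.0, 10.0, 25.0]

for round_no in range(1, 4):
    m = median(estimates)
    estimates = [e + 0.5 * (m - e) for e in estimates]
    spread = max(estimates) - min(estimates)
    print(f"round {round_no}: spread = {spread:.2f}")
```

The spread shrinks each round while the median stays put, mirroring the convergence described above.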

Now think of this in terms of data and interpretation.

Delphi begins by asking panelists to provide interpretations rather than data.  It then uses differences in these interpretations to identify relevant data.  After the exchange of data, the interpretations are revised.

Over time, the data tend to become more specific.  The interpretations tend to become more inclusive of all the information.  Neat, I think. 


Delphi as action research

Done by mail, delphi is a process very different to normal conversation or debate.  The panelists don't meet.  Usually they are not even identified.  And often they have little say in determining the research question.  However, none of these are necessary features.

As research, delphi offers many advantages.  If the panel members are well chosen, their later decisions are based on much better information than their early decisions.

I would also expect that, if the panelists were interested in taking action on their decisions, they would be well motivated to do so.

A word, too, about relationships.  Delphi is mostly done by mail, and often anonymously.  The task of managing the interactions is thus much easier.  If you run delphi in some other way, such as face-to-face, you need a more carefully-managed process and substantially better facilitation skills.

This was intended primarily as an example.  Notice, though, the issues faced by the researcher...  How are the panelists to be chosen? What question will they be asked? How will the information be collated and distributed?

These are questions of relevance to the action researcher.

Notice, too: the cyclic nature; the convergence towards agreement; the interaction of data and interpretation; the use of disagreement to lead participants deeper into data and interpretations.



Notes

  1. The concept of false consensus was popularised under the title of "groupthink" by Irving Janis.  See Janis, I.  (1972) Victims of groupthink: a psychological study of foreign policy decisions and fiascos.  Boston: Houghton-Mifflin.
  2. I should explain here that delphi was popular in the 1960s, and then fell into disuse in the late 1970s.  The reason was probably a vigorous critique: Sackman, H.  (1975) Delphi critique: expert opinion, forecasting and group process.  Lexington, Mass.: Heath.  There were rebuttals, for example by P. Goldschmidt (1975) 'Scientific inquiry or political critique?  Remarks on Delphi critique, expert opinion, forecasting and group process by H. Sackman', Technological Forecasting and Social Change, 7, 195-213.  Sackman's criticism was probably justified by the careless use of delphi rather than by delphi itself.  But it had the unfortunate effect that the technique itself fell into disrepute, in my opinion undeserved.  It's pleasing to see that interest in delphi appears to be rekindling.
  3. I have a copy of ViaVoice, an IBM program now available for the Macintosh, which I use.  I'm pleasantly surprised at how much of my dictation it gets right (I've studied enough psychophonology to have some understanding of the difficulty of the task).  I wouldn't yet describe it as fully conversational.
  4. It isn't necessary to have a good understanding of what a median or a range is.  The important point is that you want to give panel members some idea of the average estimate, and some idea of how much spread there is.  (For the curious: median and range -- or, better, interquartile range -- are chosen instead of mean and standard deviation for good reason.  The distribution of responses is likely to be skewed rather than symmetrical.  Under these circumstances median and interquartile range are likely to give people a better understanding of the central tendency and the spread.)
  5. On occasions, panel estimates converge towards two points rather than one.  My guess is that when this occurs, the estimates are influenced by both information and values.  The value differences, I expect, explain the lack of agreement.
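The reasoning in note 4 is easy to see numerically.  With a hypothetical, skewed set of estimates (made up for illustration), a single outlier drags the mean well away from what most panelists said, while the median stays with the bulk of the panel:

```python
from statistics import mean, median

# Made-up, positively skewed panel estimates: most cluster around
# 5-7 years, one panelist gives a far higher figure.
estimates = [4, 5, 5, 6, 6, 7, 40]

print("mean:", round(mean(estimates), 1))   # dragged up by the outlier
print("median:", median(estimates))         # stays with the bulk of the panel
```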


Archived resources

A brief description of adversarial, consensual and dialectical processes can be found in the archived file dialectic.

A description of a face-to-face version of delphi is available in the archived file delphi.

Other archived resources will be listed in the following sessions which describe specific methods. 


Other reading

To provide some background to the issues of applied qualitative research, there is an interesting collection of papers contributed by some well-known researchers in:

Lawler, E.E., Mohrman, A.M., Mohrman, S.A., Ledford, G.E., and Cummings, T.G., eds.  (1985) Doing research that is useful for theory and practice.  San Francisco: Jossey-Bass.

In cycling between data and interpretation, the methods of grounded theory are quite similar to what I've described.  Grounded theory is theory grounded in experience.  The theory is developed to make sense of the data.

A useful description of the best known form of grounded theory is given in:

Strauss, A.L.  and Corbin, J.  (1990) Basics of qualitative research: grounded theory procedures and techniques.  Newbury Park: Sage.

I should warn you, however, that Barney Glaser doesn't regard this as proper grounded theory.  For his strongly-argued view see:  Glaser, B. (1992) Basics of grounded theory analysis: emergence vs forcing.  Mill Valley, Ca.: Sociology Press. My own view is that Glaser's approach is more like action research than Strauss's.

If you want to access the earlier literature ...

Glaser, B.G.  and Strauss, A.L.  (1967) The discovery of grounded theory: strategies for qualitative research.  Chicago: Aldine.

As an example of applications of grounded theory (to nursing), try:

Chenitz, W.C.  and Swanson, J.M.  (1986) From practice to grounded theory: qualitative research in nursing.  Englewood Cliffs, NJ: Addison-Wesley.

This is a simple and practical account which I think you will find useful, whether or not you are researching nursing practice.

For a simple but detailed description of a standard version of delphi, read

Delbecq, A.L., Van de Ven, A.H.  and Gustafson, D.H.  (1986) Group techniques for program planning.  Middleton, Wis.: Greenbriar.

This also describes nominal group technique, another data collection process.  For more detailed descriptions and analyses, check out some of the papers in

Adler, M.  and Ziglio, E., eds.  (1996) Gazing into the oracle: the delphi method and its application to social policy and public health.  London: Jessica Kingsley Publishers.



A thought experiment

Think back to some recent times when you heard or read something that you disagreed with.  Make a list of a number of these, if you can. (Or, if you can't, begin to assemble a list over the next week.)

Which of these responses is most common for you:

"That's wrong!"

"Hmm, that's an interesting position.  Perhaps it's correct."

"I believe differently.  How can I explain how this person and I came to such different conclusions, and find out what the reality is?"

In other words, note your own use of adversarial, consensual and dialectical processes when you encounter disagreement.

An individual activity

Before you read the archived file on delphi, design your own face-to-face (or email) delphi process.

  • Take the description of a mail delphi, above, as your starting point
  • Choose a research question that is relevant to your action research interests, and which has both action and research components
  • Work out a process for running a delphi-like process face to face

You will probably want to give attention to: choosing panelists; briefing them; deciding how to collect and collate the information; your own communication style; and so on.

When you've designed a process, check how well it is likely to function as action research.  How well does it generate accurate information? How likely is it to lead to committed action?

For your learning group

Do the individual activity, above, as a group activity.  Choose a suitable example from one of your learning group members.  Help that person design a process and critique it.  Then help each other decide how you can make use, in your own action research, of what you have learned.

Part way through the activity, pause to critique your own interaction. How well are you achieving the goals suggested above: honest information, directly communicated; striving to understand what others say; using disagreement to identify where more information is needed?

In other words, how successful are you in creating a climate of mutual enquiry for mutual education?



In summary...

Using delphi as a vehicle, this session has explored a number of styles of process for information collection.  In particular, dialectic processes which assist both action and research have been addressed.

The next session explores, in some detail, an interview technique which uses a form of dialectic process.  See you then -- Bob

Let's practise action research on areol.  What ideas do you have for improving this session?  What didn't you understand?  What examples and resources can you provide from your own experience?  What else?  If you are subscribed to the email version, send your comments to the discussion list.  Otherwise, send them to Bob Dick.



Copyright © Bob Dick 2002.  May be copied provided it is not included in material sold at a profit, and this and the following notice are shown.

This document may be cited as follows:

Dick, B.  (2002) Dialectical processes.  Session 7 of Areol - action research and evaluation on line.





Maintained by Bob Dick; this version 11.04w; last revised 20020712

A text version of this file is available at