Resource papers in action research

A naive philosophy of action research


This is a resource file which supports the regular public program "areol" (action research and evaluation on line), offered twice a year beginning in mid-February and mid-July.  For details, email Bob Dick.

...  in which I discuss my personal philosophy as it applies to action research (and imply that perhaps research doesn't have to wait for philosophy, but can press ahead regardless)



In response to some requests, a quick look at some philosophy.

Why not in the areol sessions?  After all, that's what some people have asked for in the past.

Each areol program is evaluated.  And, as it happens, in one evaluation some disappointment was expressed at a lack of depth.  In particular two participants said they would have liked more material on the philosophy underlying action research.

Well, I try to take evaluation comments into account.  Anything less wouldn't be consistent with what areol is trying to do.  It makes sense, too, to preserve the desired features and change the features which are disliked.

I haven't responded to this criticism by incorporating philosophical material directly in the areol sessions.  All else being equal, I can see some virtue in doing so.  However, there are other considerations.

In part, I think the space is better used for other purposes.  By far the commonest complaint in the early programs was the sheer volume and frequency of material.  In response, I've tried to keep the number of sessions to 14 or 15, and (when my travels and the sometimes temperamental listserver permit) to space them a week apart.  I think there is other material which is more important.

In any event, I don't feel qualified to include much on philosophy.  More to the point, I'm not persuaded it's all that important.

However, I'd like to respond to the comments in some way.  Here, briefly, is my own position on philosophy, and something of my reasons for deciding that in any event, philosophy is not all that crucial a consideration...

The naive philosophy of a naive philosopher

The common view

It is commonly held that quantitative research -- or at least experimental and quasi-experimental research -- follows a positivist philosophy.  On the other hand, qualitative research is often said to be constructivist.  Some writers even suggest that on these grounds we should be wary about combining the two.

(Actually, I'm not persuaded that this is all that important a distinction, at least where research is concerned.  I think the choice between reductionist and systemic is more relevant.  More of that later.)

By positivism is meant (among other things) that the world is directly knowable.  What you see is what you get.

Science may therefore improve its knowledge of reality by an incremental process.  The methods of science allow us to reach agreement on the nature of the world, and to have some confidence in our shared perceptions when they are developed through science.

Constructivism takes a different view.  It holds that the world we experience is our own construction.

We do not experience the world directly, constructivists say.  We filter it through our senses, and through the frameworks and assumptions we use to make sense of it.  And some constructivists appear to be making stronger claims than that.

Each of us necessarily experiences a different world.  We have no way of knowing if the world we perceive is the objective world.  There are no grounds (or so some would appear to believe) for choosing any person's world over any other person's world.

For that matter, there are no certain grounds on which we can even be assured that there is an objective world.


My own view is that it is possible to develop scientific procedures which may be judged for their rigour within either philosophy.  Well designed and explained action research can be recognised as effective research by open-minded positivists and open-minded constructivists alike.

It may have to be explained in terms that make clear the situation and the research objectives.  Personally, I think that's a reasonable request of any researcher, of any flavour. 

In any event, I presume that the philosophers will take some time to reach agreement on a defensible philosophy.  I doubt that we can afford to wait for them. [1]

It may be that this optimism of mine depends upon my own philosophy.  So perhaps I should define it.  As will become apparent, I am no philosopher.  So it may be a very naive philosophy.

A naive philosophy

At the outset, let me agree that we know the world primarily through our senses. [2]  I agree that even then it is further coloured by our experience and assumptions and language, among other factors.  I have no way of knowing how closely my experience matches yours.

In this, I think the constructivists and I do not greatly differ. 

Yet it seems to me that, were I to throw a brick at a constructivist, she would avoid it if she could.  I don't believe this depends on our reaching prior agreement about the nature of the brick.

I imagine this is because our senses have evolved to allow us to "make sense" of the world in ways which accord adequately with reality.  Our physical survival, throughout the long history of our species, depends upon this.  Dodging bricks is a healthy activity, by and large.

In short, I assume that there is some correspondence between the perceived world and the objective world.

When it comes to judging motives and feelings we can be less confident of our perceptions.  It may be that our psychological perceptiveness has been honed by much less accumulated experience.  It can certainly be argued that errors in psychological perception do not necessarily prevent us from mothering or fathering offspring.


In theory I accept that I can not be certain, beyond doubt, that there is a world "out there".  I presume that there is, and that it can in part be experienced.

(In fact, as an article of faith I am very nearly certain that there is.  Rationality discourages me from making such a claim too loudly.  But if I'm wrong, and there isn't a world out there, I don't know that any great harm is done.  If there is a "real world" it seems hazardous to ignore it.)

I presume that my experienced world, though a construction, bears some relationship to the world out there -- enough that I have managed to navigate my way more or less effectively through it for over sixty years.

There may well be features of the objective world that I know nothing about.  My senses may not be equipped to perceive them.  In fact, to the extent that I can know anything, I know this is so.  Instruments have revealed aspects of the world to which we would otherwise be blind.

However, I would venture that such features have not been particularly relevant in the past to our physical survival.

I might summarise my view as follows.  The world that I perceive is in my mind.  It is a construction:  "in here" rather than "out there".  At the same time, the sensory data which help to shape it are themselves shaped by the world out there.

The resulting perception is child of two parents.  One of them, as the constructivists insist, is made of my assumptions and expectations -- my history.  The other is the objective world itself.

That this is less than entirely certain sometimes stirs my curiosity.  Otherwise it does not much bother me.

I believe that this is an adequate philosophy to use as a basis for a rigorous action research.  I can plan my iterations in the world "out there" on the basis of the world in my head.  For that matter, I don't have a choice -- the world in my head is the only world I have.

Mostly this appears to serve well enough in objective reality.  If it doesn't, I can find out.  I imagine this places me with some of the pragmatists.  If so, I'm comfortable with that.

In action research, the task is this:  to behave in such a way that I maximise the chance of finding out when my model of the world doesn't work.  In this context, "work" means predict actions which will achieve desired outcomes.

Action research and naive philosophy

There are occasions when my actions do not work as well as my plans assume.  I can then find some other way of acting.  Each action, to the extent that it is based on planning, offers some test of those plans.

I can increase the stringency of the test in a number of ways.  I can make the plans explicit.  I can do the same for the assumptions supporting those plans.  I can compare my plans and assumptions to those of others.  I can more vigorously seek out evidence of their failure than of their success.  When there are loose ends, I can try to remember and to acknowledge that this is so.

When my plans do not work, I can revise them in the light of my experience.

At the end of this, all being well, I can say:  I assumed that if I carried out behaviour B, in situation S, outcome O would result.  I have done B in situation S.  And O has resulted.  Now I am ready for the next step.

I am not claiming that B necessarily contributed to O in situation S.  I think I have built a circumstantial case for it.  But that's not the issue.  I have two interests.  One is achieving O.  The other is having some confidence that I can do so again should I wish to.  These are more important to me than proving that B caused O.

I may find myself from time to time in situations which closely resemble S, as far as I can tell.  It may happen that my doing B in these situations is consistently followed by O.  I can therefore, with some caution, assume that in the future B will again lead to O in such situations.

And if it does not, I will have learned something.

In short, I am saying that my aim is more to achieve the outcome than to prove anything.  It is useful to know how reliably B leads to O in situation S.  So I pursue action and understanding.  But if I can act effectively in the world, I can manage without certainty of understanding.

If only it were so simple.

Often, for whatever reasons, the world is less certain than this.  I may have a belief that B is often followed by O, but not reliably.  It is then useful to have some other actions, C and D and E perhaps, that are also often followed by O.

It is then likely that, if I do B and C and D and E, O is a very likely outcome.  I have no way of knowing exactly which behaviour, if any, was successful.  I have the desired outcomes, and some grounds for believing that I can achieve these outcomes again.
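To make the arithmetic behind that claim concrete, here is a minimal sketch.  The 60% success rates and the assumption that the actions succeed or fail independently are purely illustrative, introduced here and not part of the argument above:

```python
# Illustrative sketch only: the 0.6 rates and the independence
# assumption are hypothetical, chosen just to show the arithmetic.

def chance_of_outcome(success_rates):
    """Probability that at least one of several independent
    actions is followed by outcome O."""
    failure = 1.0
    for p in success_rates:
        failure *= (1.0 - p)  # chance that this action, too, fails
    return 1.0 - failure

# Doing B and C and D and E, each followed by O only 60% of the time:
print(round(chance_of_outcome([0.6, 0.6, 0.6, 0.6]), 4))  # 0.9744
```

Even with modest individual rates the combined chance is high, which is one way to read the claim that O becomes "a very likely outcome".  In the messy situations described here, of course, the actions are rarely independent, so the sketch overstates the neatness of the real case.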

In fact, I may attempt to combine B and C and D and E in situations which don't much resemble S.  It is my experience that this often works quite well.

Some would no doubt regard this as providing a shallow understanding.  I agree.  I continually pursue deeper understanding.  Meanwhile, it is deep enough for practical purposes.

As I say, I would like to take it deeper.  I believe this can be done;  again, I think it is not entirely dependent on the philosophy I hold.

Taking understanding deeper

Here is the way I can try to do that...

When I attempt action, I can learn.  My later plans can be built on that learning.  The success or failure of those plans casts further light on that learning.

As well, I can make my assumptions more testable by making them explicit.  What is it about the expected situation that leads me to desire outcome O?  Why do I think behaviour B will get it for me?  How will I know when it isn't working? 

I can particularise this issue by examining in more detail the role of critical reflection in action research.  For example it can be guided before the event by these six questions: [4]

1a:  What do I think are the salient features of the situation that I face?

1b:  Why do I think those are the salient features?  What evidence do I have for this belief?

2a:  If I am correct about the situation, what outcomes do I believe are desirable?

2b:  Why do I think those outcomes are desirable in that situation?

3a:  If I am correct about the situation and the desirability of the outcomes, what actions do I think will give me the outcomes?

3b:  Why do I think those actions will deliver those outcomes in that situation?

In other words, I think that critical reflection underpins both action and understanding.  Planning is an important part of this.  It prepares, to some extent, for the action to follow.

In a sense, the "a" questions (the "what" questions) pursue action.  The "b" questions (the "why" questions) pursue understanding.  In another sense, this is a somewhat artificial distinction.  Action and understanding develop together as part of the action research cycle.

Important, too, is the critical reflection which follows action and therefore precedes planning.  It identifies what I have learned from prior actions.  It can reinforce that understanding by finding ways of applying it in following cycles.

Beginning with outcomes:

Did I get the outcomes that I wanted?  Or, more realistically, what were the outcomes that I got, and how well do they accord with those I sought?

To the extent that I got them, do I still want them?  Why, or why not?

To the extent that I didn't get them, why not?

And this final question then returns in more detail to the earlier planning questions:

1a In what ways was I mistaken about the situation?

1b Which of my assumptions about the situation misled me?

1c What have I learned?  What different conclusions will I reach about similar situations in future?

2a In what ways was I mistaken about the desirability of the pursued outcomes?

2b Which of my reasons for favouring those outcomes misled me?

2c What have I learned?  What outcomes will I try to pursue when next I'm in such a situation?

And notice that 3a takes a somewhat different tack:

3a Did I succeed in carrying out the planned actions?  If not, what prevented or discouraged me?  What have I learned about myself, my skills, my attitudes, and so on?

3b If I did carry out the actions, in what ways was I mistaken about the effect they would have?  Which of my assumptions about the actions misled me?

3c What have I learned?  What actions will I try next time I am pursuing similar outcomes in a similar situation?

Now, to return to philosophy...

Systems thinking

As I said earlier, I'm not persuaded that the distinction between positivism and constructivism has all that much practical importance.  Why not?

Most researchers I know don't seem to have a very strong position on philosophy.  Ethics, yes.  Understanding research design, yes.  The best of them, in my judgment, think about what they do.  Yet philosophy of science does not seem to be something which occupies all that much of their attention.

This is most evident in their day-to-day lives.  Whatever philosophy they claim, they seem to go about living in much the same way.  And I don't think they let it affect the practice of their research very much, either.

If they are applied researchers, there is an element of philosophy (I think it's philosophy) which influences their research.  It's to do with their assumptions about causal explanations.

I can illustrate the point with my own experience.  My undergraduate studies were in experimental psychology.  My masters thesis was a laboratory experimental study in the area of psychophonology.  The only research paradigm I had been taught was that of experimental and quasi-experimental research.

Then, partly by chance, I became an applied psychologist.  It did not take me long to realise that all but the best of quasi-experimentation is weak indeed.  It violates many of the assumptions it depends on.  It manages poorly many of the threats to the validity of its conclusions.  It does a very poor job of eliminating alternative conclusions.

I can go into detail, but this is probably not the place.  Suffice it to say that I was drawn by the demands of my work towards a different research paradigm.

Truly experimental designs were seldom possible.  Quasi-experimentation clearly wasn't flexible enough, and in my view wasn't rigorous enough.  Something else was needed.  And it was in qualitative research, and especially action research, that I found it. 

I think this recapitulates the development of evaluation as a discipline.  In its earlier days it was known as "evaluation research", and the methods were mostly quasi-experimental.  Now, as Cook and Shadish [5] document, it is a much more "worldly science".

Early in this process, I was working with organisations to help them improve the enjoyment (I called it "job satisfaction") that people derived from their work.  I was drawn to an important belief about causality.

Then and now, it seemed to me that, when almost everything affects almost everything else, causal models are not all that useful.  They leave too much out.  Even then they are often too complex.

Causality is still there in my assumptions.  But it is a very different form of causality.  I assume that if I do B and C and D and E in situation S, then there is a reasonable likelihood that outcome O will result.


In other words, the models I use most in practice have a particular causal form.  They seldom try to describe the precise causal relationship between narrowly defined variables.  (I have nothing against such models.  However, by themselves they don't help me much with my practice.)

The models I find most useful assume, as a working hypothesis, that certain combinations of actions increase the likelihood of certain outcomes in certain situations.  Each time I apply them in practice, I learn more about the actions and the outcomes and the situations.  And about myself.


  1. See the section in this paper on "Systems thinking".
  2. "Primarily" because some perceptions or misperceptions may be "prewired" rather than sensed.  And who knows what other intangible influences there may be.
  3. For present purposes "she" includes "he".
  4. An early version of these questions was developed in conversation with Stephanie Chee, Alan Davies, Goh Moh Heng, Richard Kwok, Leong Chun Chong, John Man, and Shankar Sankaran.
  5. Cook, T.D. and Shadish, W.R. (1986), Program evaluation: the worldly science.  Annual Review of Psychology, 37, 193-232.



Copyright (c) Bob Dick 1997-2000.  This document may be copied if it is not included in documents sold at a profit, and this and the following notice are included.

This document can be cited as follows:

Dick, B.  (1997) The naive philosophy of a naive philosopher [On line].  Available at




Maintained by
Bob Dick;  this version 2.05w last revised 20000103

A text version is also available at URL