Bloor’s strong programme
David Bloor has argued that the sociology of scientific knowledge
(SSK) should not limit itself to the study of overtly social elements of
science. Bloor thinks that studying only science’s “institutional
framework and external factors relating to its rate of growth or direction”
prevents sociologists from learning about the nature of scientific
knowledge. An important obstacle to the practice of sociology of
scientific knowledge as Bloor would like to see it is the conviction that
some beliefs do not require a causal explanation. Bloor points out
that this feeling is especially strong when the beliefs in question have
the status of scientific beliefs, which are supposed to be rational, objective,
and true.
Bloor writes, “When we behave rationally or logically it is tempting
to say that our actions are governed by the requirements of reasonableness
or logic.” We think that drawing inferences according to the
rules of logic makes logic itself the cause of the belief formed.
If science proceeds according to rational rules, then truth itself is the
best explanation for scientific beliefs. On the other hand, false
beliefs are not caused by logic but by the interference of external factors.
Bloor finds this idea that true beliefs are caused by their truth while
false beliefs must be caused by some kind of interference unacceptable.
It is a violation of the requirements of his strong programme.
Bloor’s strong programme for the sociology of scientific knowledge
has four requirements. Sociology must be causal, meaning it must
be concerned with the conditions which bring about belief. It must
be impartial with respect to truth and falsity, meaning that both are in
need of explanation. It must be symmetrical; the same type of cause
should explain true and false beliefs. Finally, it must be reflexive,
meaning that its patterns of explanation must be applicable to sociology
itself. Bloor’s motivation for the application of these requirements
to a sociology of scientific knowledge is the success that sociological
analyses of the knowledge of other cultures have had. Sociologists
studying “primitive” or pre-scientific cultures do not apply value judgments
to certain types of belief, and this allows them to accurately describe
patterns of belief-formation. Bloor thinks that sociological analysis
can be just as profitably applied to knowledge in our culture, but we must
give up the assumptions that come with the epistemic value we grant science.
We cannot learn about the process of belief-formation in science if we
are working with the assumption that scientific beliefs are caused solely
by their truth. This would lead to a sociology of error, rather than
a sociology of science and would prevent sociology from learning about
accepted and practiced science, which is surely more important to science
as an enterprise than bad science.
The general spirit of Bloor’s strong programme is very useful
to the study of science. Three of the requirements are essential
to sociological analyses of science. We cannot assume that scientific
beliefs--which have the status of being the most true beliefs in our culture--are
uncaused because of their truth. If I form a belief to the effect
that horoscopes always accurately predict the events of the day because
all my friends believe this and tell me that it’s true, then it’s easy
to point to the cause of my belief: it is the fact that a social group
of which I am a member shares the belief. But if I form the true
belief that horoscopes do not always accurately predict the events of the
day after checking at the end of each day whether my horoscope came true,
then is this true belief uncaused? No. It is caused by my rational
comparison of the actual events of the day with those predicted in the
horoscope. It is a true and rational belief which fits with experience
and is also caused.
Similarly, true scientific beliefs must be caused. If we
say they are caused by the careful and objective checking of hypothesis
against facts, they are still caused. Certainly there is a social
element to this causation, since the goals of objectivity and carefulness
exist within scientific practice by virtue of their inherence in individuals
and their reinforcement through the particular character of the social
practices of those individuals within the relevant community. The
claim that scientific beliefs have social causes has traditionally been
thought to be a criticism of science, depending on the strength of the
formulation. But it is important to see how this can go both ways,
as I showed with the example of the acceptance of the Darwinian theory
of natural selection. The social climate of Darwin’s time facilitated
the acceptance of the good theory of natural selection, but a different
social climate might frustrate the acceptance of that same good science.
Additionally, the actual social climate of competitive capitalism may have
hindered other good scientific theories from being developed or accepted.
Bloor’s causal requirement does not, by itself, cause problems for a credit-giving
view of science. Causation--and specifically social causation--can
be either good or bad for science. We can recognize the fact of social
causation in science and still give science credit epistemologically.
The requirement that a sociology of scientific knowledge be “impartial
with respect to truth or falsity, rationality or irrationality, success
or failure” is also a necessary and beneficial one to the study of
science. Imagine an anthropologist studying the building methods
of a particular tribe. When she first begins observing, she decides
(on whatever personal basis she might have) that a certain method is the
best one (quickest, safest, and so on), and the huts built with this method
are the best huts (sturdiest, prettiest, and so on). She forms the
conviction that this method exists in the life of the tribe solely because
of how good it is for building huts, and on the basis of this conviction
she decides that she need not study certain aspects of this method, for
example, when and how it was invented. She assumes that its effectiveness
is the only explanation for its use. This prevents our anthropologist
from learning the real details of the source of the method. She has
a prior conviction as to the superiority of the method which prevents her
from evaluating the actual performance of the method. Why should
she limit herself in any way from studying what might--precisely because
of its apparent superiority--be the most important building method to the
tribe?
This point obviously extends to science as well. Giving
science superior and unimpeachable epistemic status may interfere with
the objective study of the activity, if it means that philosophers or sociologists
of science assume that the superior epistemic status of the activity is
all that is needed to account for the particular quality of scientific
beliefs. Just as a physicist would not want to know less about a
functioning laser than one that doesn’t work, so should the study of science
not limit its range of applicability because of the (real or perceived)
epistemic value of science. It is not enough to say that Lavoisier’s
theory of oxygen prevailed over Priestley’s phlogiston theory because it
was true. There are interesting and important things to say about
the creation and acceptance of the oxygen theory that do not have to do
with the epistemic status of that theory. The impartiality requirement,
building upon the causality requirement, should be seen as part of Bloor’s
strategy to keep the sociology of scientific knowledge from becoming a
sociology of error. This is a worthwhile goal and so far, there is
still no problem for credit-giving views of science. We can agree
that scientific theories are caused and that we need to explain the causes
of both true and false scientific theories without having to stop giving
science credit in a traditional way.
The sociology of science, inasmuch as it purports to be a generalizable
science itself, must obviously be reflexive. This requirement is,
at the very least, logically necessary. If the conclusions drawn
by the sociology of science are meant to be generalizable to all of science,
and if the sociology of science is itself a science, its conclusions must
apply to it as much as any other science. But more importantly, this
requirement points to Bloor’s desire to limit how much his account should
be seen as critical of science. He gives the account he endorses
the same status as the science it describes. What is that status?
Bloor thinks there is no final, privileged state of absolute truth.
He believes that scientific knowledge is possible, but it must be conjectural
and relative. In the last pages of Knowledge and Social Imagery,
he compares the progress of science--which he states is “real enough”--to
biological evolution. There is no goal, no final end-state, though
there is development over time. Bloor happily accuses himself of
“scientism,” which he describes as “an over-optimistic belief in the power
and progress of science.” Bloor’s emphasis on ‘power and progress’
reveals him to be an instrumentalist. He believes in the existence
of an external, material world and even thinks that knowledge is connected
to the material world. But he thinks that scientific knowledge is
of relative truths determined by social context rather than one absolute
truth which reveals the real inner workings of reality.
The reflexivity requirement is not threatening to Bloor’s instrumentalism,
nor is it threatening to a credit-giving view of science. Critics
coming from a credit-giving standpoint often argue that positions like
Bloor’s are inconsistent because of reflexivity. They say that if
the sociology of scientific knowledge reveals science to be socially constructed
rather than about the real world, and if this revelation applies to SSK
itself, then the claim that science is socially constructed is not a revelation
at all but is itself a social construction. Since it is only a social
construction, it does not actually tell us anything about science.
But because Bloor does not think that science needs to reveal the truth
about the real world to be good science, this argument holds no force for
him. Turning the claim that scientific theories are social constructions
back on the theories of the sociology of scientific knowledge simply puts
them on a par with the rest of science. Bloor thinks we should evaluate
them on their utility and internal coherence, just as we should with other
scientific theories, and give up the pretense that science tells us about
an independent, real world. Calling a theory a social construction
is not, for Bloor, a criticism or a debunking claim.
However, on a view of science which ascribes truth to good scientific
theories, the objection will have force, because a question of fit to reality
does matter for scientific theories on such a view. If the sociology
of scientific knowledge indicates that all science is socially constructed
and therefore not representative of the structure of reality, then this
claim applies to the sociology of scientific knowledge itself. This
means that the claim about the status of science does not have to do with
the fact of the matter about whether science is socially constructed or
not, so we do not have to take the theory into account when forming beliefs
about the epistemological status of science, and SSK has argued itself
out of significance. Note that the preceding argument holds whether
SSK is right or not (in terms of fit to an independent reality).
If it is right, then we can’t know that it is right through the theory,
because the theory is only a social construction the accuracy of which
will only be accidental. On the other hand, if it is wrong, we can
stop taking it seriously before the logical argument even gets under way,
because wrong scientific theories do not tell us about reality. What
the proponent of a credit-giving view of science needs from SSK is that
it indicate that the content of true scientific theories does have something
to do with an independent reality. If science does tell us about
an independent reality, and the sociology of scientific knowledge is good
science, then it must reveal that--and how--science tells us about reality.
This means that--since proponents of SSK tend to argue that it tells us
that science is not about an independent real world--for credit-giving
views of science to be right, the SSK theorists must be wrong. The
reflexivity requirement itself is not a source of conflict between social
constructivism and credit-giving views of science, although it does show
how each side has an interest in the outcome of the undertaking of the
sociology of scientific knowledge.
The symmetry requirement is where a naturalized and credit-giving view
of science and Bloor’s strong programme are in significant conflict.
The requirement is that “the same types of cause would explain, say, true
and false beliefs.” What does this mean within Bloor’s theory?
He sees it as a natural step to take after the first two requirements.
We’ve agreed that beliefs are caused and that both true and false beliefs
require explanation, and it follows (for Bloor) that both true and false
beliefs require the same kind of explanation. This amounts to assuming
that both true and false beliefs have the same kind of cause, and so the
same kind of explanation will adequately deal with both. But how
much does the ‘same’ of “the same kind” constrain the possibilities for
causes of belief? Keep in mind that Bloor thinks that theories are
“conventional instruments for coping with and adapting to our environment.”
He thinks that beliefs are constrained by the observable material world,
and then are shaped by utility and social convention. In line with
this, I think it is safe to say that Bloor thinks that beliefs are caused
by sensation of the material world, judgments of utility, and fit with
convention.
It will be helpful to go through some examples to see how this
symmetrical causation works out for Bloor’s instrumentalist theory.
We can then examine if the same symmetry requirement will cause problems
for a credit-giving view of science. First, for Bloor, both true
and false beliefs can be caused by sensation of the material world.
A false belief in spontaneous generation is caused by observing mold emerging
on food without the food being inseminated or touched in any way by any
existing mold. A true belief that cases of apparent spontaneous generation
are caused by microscopic, airborne spores can also be caused by observing
the lack of mold growth on food kept in an airtight container and by observing
mold spores through a microscope. So we can see how corresponding
true and false beliefs can both be caused by observing the material world.
Utility in scientific theories means predictive and manipulative
power. Theories which have the same predictive and manipulative power
will be epistemologically equivalent on Bloor’s instrumentalist view.
However, if the theories differ in levels of internal coherence, there
will be grounds to prefer one over the other. For the instrumentalist,
theories which are coherent within themselves, with other scientific theories,
and which have the minimal match to the observable world are more useful
(Bloor says it’s alright to substitute ‘true’ for ‘useful’). A judgment
of utility can cause a true belief. Imagine simply increasing the
accuracy of a particular measurement, such as specifying the speed
of sound in air to five more decimal places. This will
increase predictive and manipulative power and will not threaten the coherence
of physics. Thus a judgment of utility and theory coherence leads
us to accept a true belief. But Priestley surely thought that his
phlogiston theory would increase the predictive and manipulative power
of chemistry and would be coherent with established chemistry. He
turned out to be wrong. Lavoisier’s oxygen theory is much more coherent
with the rest of chemistry. This is a case where a judgment of utility
apparently lead to a false belief.
Finally, we have fit with convention. The scientists following
Lysenko, assuming that they genuinely believed their theories, formed false
beliefs by making them fit with the convention of the scientific climate
at the time and place. These theories are, for Bloor, false because
they did not survive when put to the tests of utility and internal coherence.
On the other hand, the categorization of life forms into animal and vegetable,
a conventional distinction for many people, lines up for the most part
with true, scientific categorizations in those two kingdoms when information
from evolutionary theory and genetics is taken into account.
These are my own examples which I offer to try to show how Bloor’s
idea that true and false beliefs have the same kind of cause might work
out for his view. In each case, the same kind of cause is shown to
lead to both true and false beliefs. But are the causes all really
the same? In the example of sensation of the material world, it was
a limited and relevantly careless observation--one without a microscope--that
led to the belief that the mold was coming from nothing. In the
example of theory coherence, the judgments of utility are not the same
in every respect. Priestley was mistaken in his judgment that the
phlogiston theory would be more coherent with the rest of chemistry.
In the Lysenko episode, the scientists who form false beliefs on the basis
of convention are forming false beliefs on the basis of the wrong convention:
Lysenko’s rather than that of the longer-lasting international scientific
community. In each case, the causes are of the same type--they fit
into the same broad category of belief-forming process--but there are significant
differences in the causes which lead to the differences in the truth or
falsehood of the resultant beliefs. This is true even on Bloor’s
instrumentalist view.
The symmetry requirement, if it is to fit in with anything like
a correspondence theory of truth, must be weak enough to mean that
true and false beliefs are both brought about by the same type of cause
only when the types of cause involved are categories as broad as sense
perception and judgments of utility. Differences in the manner of
causation within these categories (like the difference between seeing no
spores with the naked eye and seeing spores through a microscope) then
become relevant to the truth or falsehood of the belief being caused.
But Bloor is not arguing for a view that is anything like a correspondence
theory of truth. Instead, he suggests a consensus theory of truth
in line with his instrumentalism.
In defending himself against an accusation that the symmetry
principle makes him an idealist because it does not allow the causal influences
of the subject matter of belief, Bloor says that facts (states of the objects
in the world) “will in general impinge equally on those who have true and
those who have false beliefs about them.” Bloor says that both
Priestley and Lavoisier saw the same objects in the world, which were the
causes of both scientists’ beliefs. But the fact in the world and
the experience of it, argues Bloor, are not enough to account for the explanation
that either scientist gives. This is surely the case, if we deal
with only the initial situation. Lavoisier did not see oxygen any
more than Priestley saw phlogiston. Both scientists made up their
theories. However, Bloor may not have picked the best example.
The formation of water droplets inside the jar in Priestley’s experiment
could not be accounted for by his phlogiston theory, whereas seeing the
substances involved as oxygen and hydrogen, which were later learned to
be the components of water molecules, makes the formation of water droplets
unproblematic for the oxygen theory. Observation aided in the differentiation
of and choice between the two theories. So we see that both true
and false beliefs are generated in the general category of “causation by
observation of experiment.” But it turns out that Priestley’s false
beliefs came about when he did not notice a relevant aspect of the results
of the experiment. So within the category, we need to make room for
failing to observe or appreciate the importance of certain phenomena, and
it is this process, along with other faulty belief-forming processes, that
leads to false beliefs.
Generally, the symmetry requirement is a good one when it is
construed so broadly as to differentiate only between very general categories
in belief-forming processes. When it is construed to mean that the
causes can be similar in all respects or can even be the same cause, it
leads to problems with a correspondence view of truth. Sense perception
generates both true and false beliefs, but accurate and sufficiently careful
sense perception should yield true beliefs, while mistaken or careless
sense perception will yield false beliefs. Bloor’s treatment
of examples like that of Priestley and Lavoisier mentioned above indicates
that he means his symmetry requirement to be the stricter version.
This will force him to describe the truth or falsehood of beliefs as having
nothing whatsoever to do with the circumstances of their formation, making
science an endless context of (social or pragmatic) justification.
This means that assessments of truth--assuming fit to the observable material
world is achieved--will be dependent on historically contingent judgments
of utility or fit to social convention. It means that truth is relative
and dependent--beyond a minimal fit to the observable world--on the social
characteristics of the society in question. This is exactly what
Bloor wants the symmetry principle to do, and it is exactly what a view
which gives credit to science in a traditional way does not want a theory
of scientific knowledge to do.
A credit-giving view of science, which we have already seen can
accommodate the causality, impartiality, and reflexivity requirements of
the strong programme, can also accommodate the more broadly construed version
of the symmetry principle. The same kinds of processes can cause
true and false beliefs, and likely candidates for the list of processes
which cause beliefs in science are sense perception, theory-based interpretation
of experimental results, and more abstract theory or model construction.
All of these processes can generate either true or false beliefs, but that
is not the whole story. Within one of these categories, true and
false beliefs must be generated by significantly different instances of
the category. I have shown how this would go in sense perception
with the spontaneous-generation example. The sense perception involved
in endorsing the theory of spontaneous generation is incomplete in a relevant
way, that is, it is based on observation with the naked eye rather than
making use of the necessary microscope. The phlogiston/oxygen episode
is an example of theory-based interpretation of experimental results going
both ways. The difference there was the failure to notice all the
relevant results of the experiment.
The task in abstract theory-construction is to create a theory
that fits with the accepted facts and accumulated observations. Theory-construction
which does not take into account certain relevant facts or observations
for any reason will fail, while theory-construction which successfully
accommodates relevant facts and observations will not. In general,
while the same process may generate both true and false beliefs, successful
applications of it will yield true beliefs while unsuccessful applications
will yield false beliefs. What counts as a successful application
of a belief-forming process will figure in my positive account of the ability
of science to discover truths about an external and independent reality.
Briefly, the idea is that applications of belief-forming processes which
allow the actual fact of the matter in the world to be a significant causal
factor in the character of the belief will be successful and will yield
true beliefs.
Microsociological case studies of laboratory science
In the mid to late 1970s, several case studies of science were undertaken
by people sympathetic to the strong programme to various degrees.
After the articulation of the macrosociological strong programme, many
people became interested in the corresponding microsociological aspects
of science. Microsociology lends itself well to the use of a case-study
method, and people interested in SSK rightly reasoned that empirical data
from the practice of science should be collected to test the accuracy of
sociological theories. Bloor’s strong programme especially seems
to call for the empirical study of science, with its emphasis on causes
and its insistence on the possibility of identifying the causes of scientific
belief. On a microsociological scale, studies were done in which
people went into laboratories as participant observers to gather data
about the day-to-day practice of science. Although these studies
have a difference in scale from Bloor’s work, the theoretical commitments
of social constructivist case studies can be seen as following directly
from the strong programme. Specifically, the denial of the social/evidential
distinction, a concept which is central to the present project, can be
seen as an extension of the broad version of the symmetry requirement.
I will come back to this connection and discuss it in more detail later.
The people behind these studies can variously be identified as
philosophers, historians, sociologists, and anthropologists. The
studies of the first wave, although all undertaken at about the same time,
were apparently each initiated without the scholar being aware
of the others. The first to be published, and still one of the most
frequently cited, was Laboratory Life, by Bruno Latour and Steve Woolgar,
originally published in 1979. Karin Knorr-Cetina’s The Manufacture
of Knowledge is also very influential, and there is a host of other studies
out there, from which I will deal with Michael Lynch’s Art and Artifact
in Laboratory Science.
It will be instructive to take a look at these studies and note
some aspects of their arguments and conclusions. I will examine Laboratory
Life in the most detail because it seems to be the most influential and
because it is a strong and direct endorsement of the view that I am calling
into question in this project: social constructivism. I will examine
The Manufacture of Knowledge as an example of the same class of which Laboratory
Life is a part in the hopes of checking possible idiosyncrasies of one
approach against the other. Art and Artifact in Laboratory Science
is an example of a case study which does not make social constructivist
claims about science. It is also a source of arguments against aspects
of the methodological stance of the social constructivist studies which
call their conclusions into question. In what directly follows I
briefly present the arguments in these books, withholding criticism for
later.
Laboratory Life: The Construction of Scientific Facts argues
for a social constructivist view of science by appeal to evidence gathered
in a microsociological case study of a biology laboratory done by a participant
observer. The main argument of the book is that scientific facts
are constructed rather than discovered. The authors attempt
to prove this claim by collecting evidence about the day-to-day activities
of scientists and by breaking down a series of dichotomies throughout the
book. In Chapter 1, Latour and Woolgar avoid using the distinction
between social and technical factors in science. They say that this
dichotomy is problematic because it prohibits the sociologist from studying
anything that is not obviously social. Latour and Woolgar want to
be able to study the actual process of science, which includes elements
not normally classified as social. They call attention to the social
character of everything that goes on in the laboratory. They also
argue that the social/intellectual distinction is used as a resource
in scientific work, and so it would be problematic to unquestioningly accept
it in a study of that very scientific work.
In Chapter 2, Latour and Woolgar call into question the distinction
between facts and artifacts. They argue that facts only achieve their
status as such through a process of literary inscription, where different
formulations and usage in conversations and publications are what give
a statement its “facticity.” The motivation behind the destruction
of this dichotomy is the authors’ idea that a fact is created in the agonistic
behavior of scientists. Scientists have certain rhetorical or political
strategies which are meant specifically to increase the facticity of the
statements they make. If statements do acquire fact status in this
way, then the separation between facts as having some referent to reality
independent of human agonistic activity and artifacts as things created
by people is problematic. For Latour and Woolgar, facts turn out
to be created by people just as much as artifacts, rather than being determined
by the character of an external reality.
Chapter 3 is devoted to the process by which the structure of
thyrotropin releasing factor (TRF) became known. In this chapter,
the authors challenge the distinction between internal and external factors
in the elaboration of a scientific fact. They argue that the difference
between internal and external factors exists only as a consequence of the
establishment of a fact. This is the material on which Latour and Woolgar's
most radical conclusions are based. The general process is described
more fully in Chapter 4, “The microprocessing of facts.” The authors
argue that when a fact begins to stabilize, it splits. There is then
the fact as a set of words which is a statement about an object and there
is the fact which constitutes the object itself through its facticity.
The fact inverts itself so that the object becomes seen as the source of
the statement, rather than the other way around. It is because of this
inversion that a scientist thinks that the particular character of something
she knows is due to the nature of reality. Latour and Woolgar say
that the character of reality is derived from this process of fact construction.
This is the process by which ("external") facts are created, and this account
is the main support for Latour and Woolgar's contention that reality is
created by scientific activity. The claim about the construction
of reality by scientists is the most striking and radical conclusion of
the book. It is explicitly stated over and over in Chapters 4 and
6. Just a few examples are: “It is a small wonder that the statements
appear to match external entities so exactly: they are the same thing.”
“‘Reality’ cannot be used to explain why a statement becomes a fact, since
it is only after it has become a fact that the effect of reality is obtained.”
“Nature is a usable concept only as a by-product of agonistic activity.”
The picture of science that Latour and Woolgar paint is not one
of scientists discovering facts about an independent world; it is one of
scientists imposing order on disorder. They contend that the world
and data collected from it are so disordered that any account which is
ordered is entirely constructed. Order is created by scientists;
it is a fiction they impose on the world. The analogy that Latour
and Woolgar leave us with at the end of the book is one comparing scientific
activity to the strategy game Go. At the beginning of the game,
nearly any move is possible, but as the game progresses, the agonistic
activity of the players limits the moves that can be made. This illustrates
the move from contingency to necessity. It shows that Latour and
Woolgar don't think that just any fact can be science, but that the limiting
factors are social--agonistic--rather than environmental.
The anthropological approach: assuming strangeness
Common to the case studies being discussed is an anthropological approach.
That is, the method used to study the activity of scientists was explicitly
modeled on the method of anthropologists studying “primitive” societies.
The idea is that we can learn about science by observing the day-to-day
activities of laboratory scientists. The studies I will discuss have
differing stances on the question of how much we can learn about science
from such microsociological case studies. Laboratory Life and The
Manufacture of Knowledge both make broad conclusions about the status of
science as a source of knowledge, while the idea that a single case study
cannot effectively be generalized from is central to Lynch’s treatment.
There are other significant methodological and interpretive differences
between Lynch and the others that will come into play as well.
Latour and Woolgar claim that the majority of the material on
which their discussion is based “was gathered from in situ monitoring of
scientists’ activity in one setting.” They call their project
“anthropology of science,” and one of the reasons they mention for their
use of ‘anthropology’ is that it
denotes the importance of bracketing our familiarity with the object
of our study. By this we mean that we regard it as instructive to
apprehend as strange those aspects of scientific activity which are readily
taken for granted.
This position is connected to Bloor’s impartiality principle.
Part of the familiarity with science that Latour and Woolgar are bracketing
is its image as the paradigmatic producer of truth in Western culture.
They refuse to allow the accepted epistemic value of science to affect
their discussion of the activity. This might seem to be a device for
keeping established notions about science from interfering with the
collection and analysis of data from the observation of science.
To someone who thinks that an important part of the scientific method is
guaranteeing objectivity, and that this kind of device would ensure an
objective appraisal of the facts as they are discovered, this may sound
like good methodology.
But Latour and Woolgar don’t think that an objective appraisal of independent
facts is what this guarantees, since they don’t believe in independent
facts at all. They think that a scientific account--indeed, any account
--is constructed, and its virtues stem not from its connection to independent
facts, but from its ability to convince--essentially, its potential for
popularity. The idea that the theory that convinces the most people
is the best one and the desire to create an alternative to the mainstream
view of science seem to be in conflict, since the theory that convinces
the most people is the mainstream view of science, but both notions are
present in Laboratory Life nevertheless. This is one of many apparent
conflicts in the book. Of these, the most significant to me are the
problem with reflexivity that is common to relativistic arguments in general
and the fact that the evidence presented has little bearing on the character
of the conclusion, which seems to come almost entirely from prior convictions.
My purpose is not to offer a general criticism of Laboratory
Life, however. I want to focus on the elements of its argument that
are common to other case studies of laboratory science and are detrimental
to the success of that enterprise in systematic ways. The first of
these problematic elements is the attitude of strangeness mentioned above.
This is a specific aspect of the anthropological approach that is apparent
throughout Latour and Woolgar’s discussion. The narration of the
observer’s experience in Chapter 2 is the story of someone entering a strange
and confusing world and trying to make sense of what he sees. The
observer purposely pretends not to know what the workers in the laboratory
are doing--even at so general a level as knowing that they are doing biological
research and publishing their findings. But at the same time, the
observer really doesn’t know much about natural science, and he knows even
less about the field of neuroendocrinology which is the laboratory’s specialty.
The authors think this is a good thing, since it fits in with their concern
about the “dangers of ‘going native.’”
Latour and Woolgar describe scientific instruments as inscription
devices, that is, machines that take in a substance and produce a number
or graph on a sheet of paper. To them, the instrument is a
black box which creates inscriptions. The connection of the
inscription produced to the original substance is assumed by the scientist,
but is something that Latour and Woolgar are skeptical of. This is
where the anthropological approach becomes problematic as a method of studying
science. An ignorant observer who walks into a laboratory and sees
technicians putting a substance into a machine and getting a graph out
assumes that the machine transformed the substance into the graph.
She knows nothing about what went on in the machine to produce the graph,
and so the connection of the original substance to the graph is unknown.
The anthropological approach, with its assumption of strangeness and emphasis
on ignorance, prevents the observer from evaluating the place of instruments
in the science under study. While someone familiar with the physics
involved can recognize a mass spectrometer and understand how it produces
the reading that it does, the ignorant observer only sees a complicated
machine drawing a series of lines on a piece of paper.
One of the ways that Latour and Woolgar try to show that scientific
facts are constructions is by calling attention to the situation of people
putting substances into machines and accepting the numbers or graphs that
come out as equivalent to (although easier to read than) the original substance.
These inscriptions are taken to be representative of the substance measured,
but Latour and Woolgar contend that they are only “fictions” which have
an indeterminate relation to the original substance. In this case,
an emphasis on ignorantly observing science as it is practiced in its original
context is a hindrance to evaluating its epistemological status (in terms
of whether scientific beliefs are determined by chains of evidence or mere
convention). Indeed, the observer’s position of ignorance about the
science under study completely prevents her or him from being able to judge
whether the use of an instrument of sufficient complexity involves gathering
significant evidence or is just some kind of convention of “seeing what
comes out of the machine.”
This problem reveals the anthropological approach to be limited
in an important and systematic way. Studying science through case
studies of particular laboratories certainly cannot be the only way of
learning about science. The importance of past work cannot be ignored.
Knowing about the history of a certain instrument, and being able to assess
its scientific validity (how it does what it does and how well), is relevant
in the study of the use of that instrument. Latour and Woolgar’s
suggestion that instruments don’t tell us what we think they do seems plausible
only in the context of their methodological embrace of ignorance.
Their account does not threaten the power of instruments--from simple balances
to mass spectrometers--to measure what they are meant to, nor does it call
into question the possibility of using such instruments in the present
practice of science. In the case of the balance, the way that the
measurement happens is quite apparent. The measurement consists of
a comparison of the mass of the substance to objects whose mass is known.
It is clear that the balance is even when the masses on either side are
equal. The mass spectrometer, however, is much more complicated,
and it is easy to see how one might look like a black box to a non-physicist.
But it, too, can be explained. The theory behind it and the
technology used to create it are both built up from simpler and more obvious
elements, elements more like the spring scale.
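The contrast between the transparent balance and the seemingly opaque mass spectrometer can be made concrete. The following sketch is my own illustration, not drawn from any of the case studies: it shows how the reading of a simple magnetic-sector mass spectrometer reduces to two elementary, independently checkable physical relations, so that the instrument’s output is derivable rather than being a black box.

```python
# Illustrative sketch only: the relations below are standard physics,
# but the function and the example parameters are my own choices, not
# anything taken from Latour and Woolgar's or Lynch's accounts.
#
# In a magnetic-sector mass spectrometer, an ion of mass m and charge q
# accelerated through a voltage V satisfies q*V = (1/2)*m*v**2, and a
# magnetic field B bends its path into a circle of radius r = m*v/(q*B).
# Eliminating the velocity v gives  m/q = B**2 * r**2 / (2*V),
# so the mass-to-charge ratio follows from three measurable quantities.

def mass_to_charge(V, B, r):
    """Mass-to-charge ratio (kg/C) of an ion accelerated through V volts
    and bent into a circular path of radius r meters by a field of B tesla."""
    return (B ** 2) * (r ** 2) / (2.0 * V)

if __name__ == "__main__":
    # A singly charged ion (q = 1.602e-19 C) accelerated through 1000 V,
    # following a 0.05 m radius path in a 0.5 T field:
    q = 1.602e-19
    m_over_q = mass_to_charge(1000.0, 0.5, 0.05)
    print(m_over_q)      # ratio in kg/C
    print(m_over_q * q)  # the ion's mass in kg
```

Each relation in the derivation can be verified separately with simpler apparatus, which is the sense in which the spectrometer is built up from simpler and more obvious elements rather than remaining opaque.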
Knorr-Cetina espouses a similar view of science in The Manufacture of
Knowledge: An Essay on the Constructivist and Contextual Nature of Science.
She emphasizes the importance of ethnographic method, which, as she understands
it, is a “sensitive” methodology. It depends on participating in
the culture being studied and describing it with its own concepts, letting
it speak. This methodological standpoint, as opposed to one
which emphasizes neutrality, is meant to offer a more intersubjective account
of a culture, rather than a detached one which depends on the observer’s
own categories.
The majority of The Manufacture of Knowledge is devoted to discussing
and presenting evidence for the contextual character of science.
By ‘contextuality,’ Knorr-Cetina means that decisions in the laboratory
are made in a specific time and place as responses to a specific situation.
She writes, “the products of scientific research are fabricated and negotiated
by particular agents at a particular time and place . . . these products
are carried by the particular interests of these agents, and by local rather
than universally valid interpretations.” As for the constructivist
nature of science that is also promised in the subtitle, Knorr-Cetina suggests
a mechanism that is supposed to make the operations of science constructive
rather than descriptive. That mechanism is selectivity.
Selectivity, as Knorr-Cetina uses it, is the process by which scientists
make successive selections--decisions about how to proceed in the laboratory.
Each selection is affected by those before it in that previous selections
determine the options available for the decisions made after them.
This phenomenon gives rise to several orders of selectivity, with higher
orders partially dependent on lower orders. This picture corresponds
quite well with Latour and Woolgar’s image of the move from contingency
toward necessity in the game of go. The point of this account as
Knorr-Cetina describes it is that it allows for a constructive model of
scientific inquiry. It provides for a mechanism by which science
progresses which does not depend on the idea of an increasingly accurate
picture of the external world, but rather a series of decisions by scientists
which affect each other but need make no reference to external reality.
It is important to note, however, that the presence of the mechanism
of selectivity cannot by itself mean that scientific activity is constructed.
It would take a further claim about the character of that selectivity as
constructive to show that. Two stories about science--one portraying
it as constructive and the other as descriptive--can both be told in a
way that exhibits the selectivity Knorr-Cetina refers to. Scientists
trying to come up with the chemical structure of a substance are pushed
by their administrative leaders toward a certain result because that result
will mean more funding for the group. A decision by scientists to
do what they’re told will constrain the possibilities at a later stage
for their account of the structure in a way consistent with the mechanism
of selectivity and the idea that the fact about the substance’s structure,
once it is elaborated, is constructed. On the other hand, imagine
scientists trying to isolate the chemical structure of a substance and
gradually eliminating more and more possible structures through the comparison
of the known properties of those structures and the properties of the substance
in question. In this story, a decision at one stage to eliminate
a particular structure or class of structures made on the basis of good
evidential reasoning will constrain later decisions about what structures
should be considered as candidates in a way consistent with Knorr-Cetina’s
mechanism of selectivity and a picture of scientific inquiry as descriptive.
As I mentioned above, Knorr-Cetina does not focus on the constructivist
claim in the bulk of the text, and most of her research as a participant-observer
is meant to support her picture of science as contextual. Because
of this alternative focus, she does not present an explicit, in-depth account
of how selectivity actually goes in the laboratory (her discussion of the
contextual nature of science is meant as support for and elaboration of
the concept). To examine selectivity more closely and in terms of
the present discussion, think about what it would take for this selectivity
to show scientific inquiry to be constructive rather than descriptive.
It would require, in any given case of an individual making a decision, being
able to conclude that the decision was made not because of straightforwardly
evidential concerns but because previous selections had reduced the viable
choices--where those previous selections can also be shown to be determined
by something other than evidential concerns.
Although selectivity should not be taken to mean that the character of
any current choice is completely determined by previous choices, it seems
to me that the character of previous choices may facilitate or make likely
similar characteristics in current choices. If the field of optics
was developed in accord with evidential concerns--at each stage decisions
were made in response to evidence about the phenomena rather than being
constructed solely or primarily through non-evidential social negotiation--and
microscopes are one product of the activity in the field of optics, then
the use of a microscope in an evidential way is facilitated by the evidential
character of the activity that comes before that use. If, on the
other hand, optics developed through responses to fashion (for example,
if eyeglasses and microscopes were designed primarily in accord with the
style of the day rather than the way light behaves) then basing a decision
on the use of a (fashionable, but not functional) microscope will not facilitate
evidential reasoning. It will instead make it likely that the outcome
of the decision is constructed in a way similar to the way the field of
fashion-optics was. Of course, it may not be possible to characterize
past activity as entirely evidential or non-evidential in any given field,
and there are different ways in which the two might combine. But the
point is that recognizing that selectivity itself is at work in a field
does not indicate anything about whether the activity in that field is
descriptive or constructive.
Let’s get back to my question as to why someone would think that
the presence of selectivity means that science is constructive. We
have seen that selectivity allows for it. If we identify the presence
of selectivity in the practice of science, one of the possibilities for
the way that selectivity works is through a process of social construction.
But it would certainly look like social construction if an observer
evaluating one instance of decision-making could not judge the degree of
evidential concern with which the previous decisions leading up to it were
made. Though she does not emphasize an attitude of ignorance
in the anthropological approach explicitly, whether Knorr-Cetina’s notion
of selectivity entails constructive decision-making in scientific practice--in
the absence of other empirical evidence supporting the hypothesis--depends
on ignorance about the character of previous selections. As with
Latour and Woolgar, we can imagine the ignorant observer watching scientific
activity and recognizing that present activity depends on previous decisions.
Knowing nothing about those previous decisions, she cannot assume that
they are evidential. If the outcomes of past decisions are constructed
and present decisions are based on those outcomes, then the outcomes of
present decisions must also be constructed, she reasons. However,
if previous selections can be shown to be based on evidence rather than
being constructed through a process of non-evidential social negotiation,
then any current selection’s dependence on those previous selections is
not evidence that the decision made is constructive in nature; in fact
that dependence will facilitate a present decision based on evidence.
Knorr-Cetina uses selectivity as a mechanism which allows for the possibility
that scientific inquiry is constructive. She does not, however, prove
that the kind of cumulative decision-making that selectivity refers to
actually is constructive. The question cannot be decided by a study
using the anthropological approach as described by Latour and Woolgar because
of the inability of an ignorant observer to evaluate previous selections
(established theory and technology, for instance). This is not to
say that the question cannot be decided, or even that the question cannot
be decided in the sociology of science. I only maintain that it cannot
be decided by microsociological case studies which depend on the attitude
of anthropological strangeness. Latour and Woolgar explicitly take
an attitude of strangeness in order to prevent themselves from being socialized
into the science “cult” they are studying, and it has the effect of black-boxing
much of the activity of their subjects to the authors. Knorr-Cetina
does not explicitly take an attitude of strangeness, but her treatment
of selectivity shows that she must assume one or something very similar
for selectivity to indicate that decision-making is constructive.
It is not obvious, when looking at scientific practice, that--for instance--reading
a measurement from a mass spectrometer is constructive or arbitrary rather
than evidence-based and meaningful. In fact, if we take the way the
members of the society under study conceptualize such an activity into
account--a methodological recommendation of actual anthropology that Knorr-Cetina
herself mentions--it seems that we have reason to believe that such an
activity is evidence-based. The people who are engaged in the practice
of the activity under study think that reading a measurement from a mass
spectrometer is an evidential process. They know more than the intentionally
ignorant anthropologist by definition, and they are quite likely to know
more than other anthropologists as well.
Michael Lynch addresses this issue in Art and Artifact in Laboratory
Science. The book presents a case study of science that proceeds
by applying a sociological analysis of “shop work” to the practice of laboratory
science. It also examines the production of agreement in conversational
exchanges in the laboratory through an analysis of “shop talk.” The
theme of the book is that laboratory science is a craft performed by common-sense
reasoners using unformulated rules of thumb. In regard to the
issue of taking an attitude of “anthropological strangeness,” Lynch insists
that an observer of science must be familiar with the field under study
to effectively learn about the practice of it. He claims to have
a basic knowledge of neurobiology, the subject of his study; he devotes
a section of his book to explaining important elements of the field for
the reader’s benefit; and he expresses frustration that he does not know
more. Specifically, he argues that the observer must “‘know what’s
going on’ in the setting prior-to and simultaneous-with the analysis of
social or interactional actions in that setting.” To put the
point in terms I have been using, it is integral to Lynch’s account that
being ignorant of the science under study is detrimental to the production
of a useful account. He takes a position directly opposite Latour
and Woolgar in saying that knowledge about the special science that is
being observed is a resource in the process of observation and the elaboration
of an account.
One of the ways that Lynch’s position on the attitude of strangeness
factors into his account is that he is able to evaluate the use of complicated
instruments. His use of the notion of ‘artifact’ depends on this.
Lynch follows the traditional story about artifacts in science; he maintains
that artifacts are the result of mistakes in method or external factors
affecting the execution of an experiment. A bubble in a microscope
slide is an example of an artifact. The slide is being used to find
out about microorganisms, but the bubble--which is the result of, say,
too much haste in slide preparation--does not indicate anything about the
microorganisms the use of the slide is aimed at finding out about.
It is rather the result of an error in the performance of an experiment.
For Lynch, the science (looking at the slide to find out about microorganisms)
is still descriptive while the artifact (the bubble) can be seen as constructed.
Being able to isolate artifacts and either explain them away or actually
get rid of them by doing the experiment again with an eye to avoiding the
problem is important to the possibility of science being descriptive.
In the example of the slide with a bubble, we can say that the other things
that show up on the slide are not bubbles or other kinds of artifacts,
and so actually tell us something about what we are trying to find out
about. Or we can make another slide more slowly and make sure there
are no bubbles on it, because the presence of an artifact on a slide does
not condemn the process of learning about the world through microscope
slides.
This use of ‘artifact’ is clearly opposed to Latour and Woolgar’s
primary use of the term. They insist that all facts are artifacts,
that any one element of science is just as constructed as another.
This difference between the two views corresponds to the difference I have
pointed out in their positions on ignorance. Imagine Latour and Woolgar
studying someone using a microscope. They would have no understanding
of the way a microscope works or what it does (and they would, a fortiori,
have no way of assessing how well it does what it is supposed to do).
They would have to say, because of their insistence on ignorance, that
the pattern of little blobs a technician or scientist sees when looking
through a microscope is completely disordered and any belief that is based
on it is entirely constructed. But Lynch can look at the same situation
and say that the microscope works in such-and-such a way because of findings
and developments in the field of optics, and that certain microorganisms
and other things that might appear on a slide can be identified and discriminated
with practice and familiarity. On this account, because of knowledge
and experience, a technician can look at a slide and recognize which of
the little blobs tell her about the part of the world she is trying to
find out about, and which are artifacts. And because of his insistence
on knowing about the science under study, Lynch can recognize that that
is what she is doing.
It is significant that Lynch’s position on this issue shows
the field of case studies of science to be less homogeneous than it might
look at first. Although studies with social constructivist conclusions
certainly dominate, the fact that someone who has done equally close work
with the members of a scientific laboratory does not jump to social constructivist
conclusions casts some doubt on the inevitability with which many seem
to think that close, empirical study of the practice of science will support
constructivism. What conclusion does Lynch draw about the cognitive
status of science generally? Exploring the answer leads us to the
second significant difference between him and the other case studies that
I will deal with in this project: their stance on generalizability.
The possibility of generalization from situated case studies
Can information gathered in microsociological case studies tell us
about science in general? Latour and Woolgar defend the generalizability
of their study by pointing out that one of the members of their laboratory
won the Nobel Prize in medicine shortly after Latour and Woolgar began
preparing the manuscript for Laboratory Life. This forestalls
the possible objection that they just happened to choose an intellectually
poor laboratory, one overly dominated by social factors, which would not
be a fair example upon which to base judgments about science in general.
In terms of their debunking objective, this works out well for Latour and
Woolgar. They have shown not just any laboratory to be dominated
by social construction, but an exemplary laboratory which won what is arguably
the most esteemed prize in science for the work that Latour and Woolgar
describe. Other than this defense of their laboratory as exemplary
science, Latour and Woolgar do not address the question of how much the
study of one specific laboratory in one specific special science can tell
us about all of science. It is clear from their contention that their
sweeping epistemological claims about science are justified by the data
gathered in observation that they think generalizing upon one case study
is acceptable. I suggest that it is far from obvious that this kind
of generalization is possible, and that the use of any such generalization
in an argument needs an independent defense of its validity.
Knorr-Cetina offers little more defense of her generalization
from one study than do Latour and Woolgar. She writes, “The well-disposed
reader may want to remember that these observations have been conducted
with a handful of scientists in one problem area at one research laboratory
(the ill-disposed readers will recall this on their own).”
The ill-disposed readers had better be able to recall this on their own,
because there is no other mention of it in the book. Knorr-Cetina’s
concluding chapter lists the major theses of the book, all of which are
entirely general in scope and are surely meant to refer to all of science.
There is no qualification of these conclusions, no reminder that they are
based on a case study of one laboratory group. This causes some conflict
within her account--when the criterion of reflexivity is applied--in that
she calls a great deal of attention to the local, situated character of
science. She argues that scientific inquiry proceeds according to
local, contextual concerns, implying a problem with the putatively universal
character of scientific results. If she is right that science is
hopelessly local, she should recognize that her own study is as well.
Unlike Latour and Woolgar and Knorr-Cetina, who do not directly
argue for the generalizability of their study but do not hesitate to make
broad conclusions about the epistemological status of science, Lynch explicitly
discusses the possibility of generalizing from a case study. He thinks
that it cannot be done. He argues that “it would do violence to the
situated competence involved in the practical use of artifact accounts
for this writer to adopt or invent a principled position on the general
problem of science’s relation to its subject matter.” This
statement is oriented to Lynch’s specific discussion of the way artifacts
are dealt with in laboratory science and depends on his idea that practical
competence in a certain special science allows scientists to explain away
artifacts as not part of the phenomena they are studying. Lynch thinks
that examples can be found which support all of the usual positions, including
credit-giving views and social-constructivist views.
It is far from obvious that meaningful generalization from one
case study is possible. In accounts which do generalize, an argument
as to the validity of the generalization is needed. Neither Latour
and Woolgar nor Knorr-Cetina offers such an argument. Lynch’s
concerns, however specific they may be to his account, are an indication
that one should proceed with caution when generalizing. It seems
to me that the question could go either way, at least in regard to the
simplest version of it, which I see as a question of typicality.
If the “anthropologist” is lucky enough to happen to have chosen a typical
laboratory in that the things that go on there are the same as or significantly
similar to the things that go on in all or most of the other scientific
laboratories, then generalizing upon the one, typical laboratory would
be useful. There would, of course, be no way of knowing whether the
laboratory was typical without having looked at any others.
The methodological maxim in this situation seems to be: do many microsociological
case studies of many laboratories. With data from many settings,
trends would be likely to show up, giving an idea of what is typical and
giving the sociologist something to generalize on more comfortably.
This approach might turn out to be helpful, but it would require
microsociological case studies of science to become far more common for
it actually to happen. Even if it did, the simple notion of typicality
would prove problematic. Because case studies focus on the practice
of science, the physical actions of the people being observed are relevant.
These actions are, of course, notably different across scientific subfields.
Which is more “typical” of scientific activity: using a particle accelerator
or studying the behavior of butterflies in the wild? The anthropological
approach, with its emphasis on the practice of scientific inquiry, would
be likely to have trouble comparing across the special sciences.
On the other hand, formal rules may reveal themselves when activities
across fields are compared. Either way, a large number of studies
is needed to give us some reference as to what is typical, however ‘typical’
is construed, or whether it is even possible to identify. This is a weakness
of single studies which generalize upon their own data as Laboratory Life
does, but it is not necessarily a weakness of the anthropological approach
in general. If Latour and Woolgar had many other studies with which to
compare their results and upon which to generalize, their account might
look more justified.
To their credit, a collection of many case studies has to start somewhere.
Although it is unclear when, in the course of the accumulation of case
studies, we can begin to generalize--and just how much of science we can
generalize to--we must surely have more than just one case study before
generalization is possible.
Ian Hacking has suggested that the subject of Laboratory Life,
the discovery of the structure of TRF, was especially easy to tell a constructivist
story about because the process involved synthesizing a substance and checking
if it acted like the natural substance. The identity of the chemical
structures of the synthetic and natural substances had to be inferred because
there was just not enough of the natural substance available for extensive
testing. The extraction of TRF from animals is difficult and expensive
and each animal yields only a minuscule amount. This made finding
out about natural TRF in a direct way impossible, hence the dependence
on similarity with synthesized substances. The fact that the work
they studied lends itself to a constructivist story more readily than most
casts doubt on the justification of Latour and Woolgar’s generalizing from
their study, because the idiosyncratic character of the specific work they
studied helped support their constructivist conclusion.
The problem with the notion of typicality sketched above has
a specific effect on social constructivist case studies because of a nasty
combination of the two problems that I have been discussing, ignorance
and generalization. The assumption of strangeness prevents an observer
from evaluating the evidential justification involved in the use of a complicated
instrument like a mass spectrometer. Without understanding the physics
that a mass spectrometer is based on, an observer will only see a complicated
machine that spits out graphs. The observer will have no reason to
think that the graphs have any relation to the substance being measured,
and might come to the conclusion that facts based on the results of the
graph are constructed. Then, because of the assumption that generalization
is unproblematic, the observer will come to the belief that all uses of
complicated instruments involve constructive belief formation. The
ignorant stance of “anthropological strangeness” and the assumption of
the possibility of generalization combine to make the observer think that,
because one instrument is a black box to her, all instruments are black
boxes.
A further problem with the generalization in our social constructivist
case studies is that it extends to all of science. Latour and Woolgar
make no qualifications to their claims that science is a process of constructive
belief formation. This seems to indicate that it is not just the
use of complicated instruments that makes science constructive, but something
inherent to scientific activity. This is a problematic stance in
that different instances of scientific activity have different likelihoods
of looking like black boxes to observers. It is their appearance
as black boxes that makes instruments seem to be a source of constructed
beliefs to Latour and Woolgar. A mass spectrometer will certainly
be a black box to anyone not knowledgeable about physics. But
a geneticist counting the number of pea plants that descended from plants
with certain characteristics is not using a complicated measuring device,
she is simply counting. For Latour and Woolgar’s claim that all of
science is constructive to hold, this simple interaction with observable objects
must be constructive as well. It seems to me that counting observables
like pea plants is an activity that can unproblematically be characterized
as evidential and descriptive. This means that Latour and Woolgar
need to back away from their claims about all of science being constructive
and retreat to a claim that the use of complicated instruments which are
amenable to being viewed as black boxes is constructive. However,
if the use of instruments can be seen as justified through being based
on theory which comes from experiments which are ultimately justified by
interaction with observables, then even this weakened position will be
problematic for Latour and Woolgar. The idea--and I will discuss
it in greater detail later--is that simple experiments with observables,
say in physics, lead to simple theories and some technology based on that
theory. The technology allows for more and more complicated experiments
and technology until we get mass spectrometers. The use of complicated
instruments in forming beliefs is then justified through its lineage which
can be traced back to beliefs formed about observables. There is
a connection between interaction with observables and interaction with
unobservables that casts doubt on the possibility of using that distinction
as the dividing line between descriptive and constructed science.
The social/evidential distinction
The methods and conclusions of Laboratory Life are both greatly colored
by the authors’ explicit denial of the distinction between the technical,
intellectual aspect of science (which is traditionally thought to lead
to true beliefs) and the social aspect of science (which is traditionally
thought to lead to false beliefs). This is, of course, how Latour
and Woolgar want it. They think their denial of the social/evidential
distinction is a powerful tool and that it gives their account much of
its force. They devote their first chapter to explicitly defending
the denial and frequently refer to it throughout the book. They offer
four arguments for their denial of the social/evidential distinction.
Their first argument is that, if the practice of science can
be divided into social and evidential activity, the sociology of science
is limited to studying only the parts of science that get classified on
the social side. Latour and Woolgar think that sociology should not
be so limited. They say that accepting the distinction means that
“there is no point in doing sociology of science unless one can clearly
identify the presence of some politician breathing down the necks of working
scientists.” The authors are obviously implying that this is
wrong; there is some point to doing sociology of science even when social
factors cannot clearly be identified. On the face of it, it is simply
not clear why this should be the case. If sociology defines itself
as the study of social things, and no social things can be found in an
area, why should sociology lament its inability to study that area?
Of course, this idea fits in with the general character of Latour and Woolgar’s
account as revealing social factors at work where a traditional view does
not find them. It should be entirely unremarkable, however, that
an account of science which assumes at the outset that there is no viable
distinction between the social and evidential aspects of science finds
that all of science is affected by social factors.
Insofar as this argument is intended as a call for the in-depth
study of the practice of science--which may include some things not traditionally
considered social--it does hold some force. The putatively evidential
parts of science should not be considered entirely off-limits to sociology,
as it may take some sociological analysis to find out whether or not there
are non-evidential social forces at work. It should be noted, however,
that Lynch argues that it is knowledge of the special science under study,
not of sociology, that will help the observer distinguish the social from
the scientific. Studying both sides of the social/evidential
distinction may turn out to be very useful for understanding scientific inquiry.
To do this kind of sociology of science does not, however, require a radical
breakdown of the social/evidential distinction. We can notice the
influence of social factors on primarily evidential parts of science (almost
all evidential practices have a social dimension in that they were learned
from other people). It must be recognized, however, that finding a
social dimension to epistemic processes does not immediately destroy their
evidential character. We can notice interaction between the two sides
of the social/evidential dichotomy without having to destroy it.
Latour and Woolgar’s second argument that the sociology of science
should not be limited to the social side of a social/evidential distinction
is that this would make the sociology of science a sociology of error.
They write, “emphasis on ‘social’ in contradistinction to ‘technical’ can
lead to the disproportionate selection of events for analysis which appear
to exemplify ‘mistaken’ or ‘wrong’ science.” This argument
comes directly out of Bloor’s arguments for his strong programme.
I will discuss in detail the connections between the strong programme and
the denial of the social/evidential distinction later. For now, let
me say that studying both good and bad science is a good idea, but to do
so does not require the destruction of the social/evidential dichotomy.
As I mentioned above, the evidential side may need to be investigated to
find out if there are hidden social aspects, but this can be done with
the distinction in place--indeed, without a notion of social as separate
from evidential, it is hard to imagine how a sociologist could find hidden
social aspects. Latour and Woolgar’s conviction that the destruction
of the social/evidential distinction gives sociology license to examine
all of science clearly shows that they are trying to break down the distinction
in favor of the social. They also take the resultant picture of science
as social to mean that it is not evidential, which is far from obvious.
The authors’ third argument, which seems to be an extension of
the first, is that the acceptance of the social/evidential distinction
has led to an imbalance in the study of science--too much attention is
being paid to one side of the distinction. This argument is susceptible,
like the first, to the recognition of the existence of a real limit to
the scope of sociology. However, as I said in my comments about the
first argument, I do think that the desire to study all of science in depth
is a good one. I just don’t think that a radical breakdown of the
social/evidential distinction is necessary to do that.
Latour and Woolgar’s fourth argument is that the social/evidential
distinction causes people studying science to ignore the parallels between
social and cognitive developments in science. They offer the example
of social processes like the “emergence of social leaders” happening at
the same time as intellectual processes like the shift from “defining a
position” to “doing studies.” The authors should not necessarily
be faulted for offering a less-than-illuminating example here because this
passage, which is riddled with citations, is clearly based on others’ discussions.
They argue that accepting a distinction leads to problems with recognizing
which way any possible causal relation might go between the social and
the evidential. As with Latour and Woolgar’s other arguments presented
in this section, this issue can be addressed without a radical breakdown
of the social/evidential distinction.
In addition to these conceptual arguments, Latour and Woolgar
also offer a methodological reason not to accept the social/evidential
distinction, which is based on the fact that it is used as a resource in
scientific practice and the idea that it is problematic to uncritically
accept a concept used by the culture under study. By this they mean
that scientists can cast their own activity or that of others as on one
side of the distinction or the other, thereby validating or calling into
question the results. For example, suppose two laboratory groups
are evaluating the health risks associated with smoking cigarettes.
One group finds that smoking does not endanger a person’s health.
The other group, which has obtained a contrary result, then points to the
massive funding of the first by tobacco companies in order to call their
result into question. The second group, assuming this convinces the
scientific community that their result is more believable than the first
group’s, has used the social/evidential distinction to its own scientific
advantage. The first group might come back, saying that the large
amount of money they got from tobacco companies allowed them to purchase
more precise instruments and better trained technicians than the second
group, thereby making the first group’s results look better. Then
they have taken an aspect of their situation and placed it on the evidential
side of the distinction, hoping to help their result gain credibility within
the scientific community.
Latour and Woolgar argue that the social/evidential distinction
and its use in the practice of science are phenomena to be explained.
This may be true, but why not use the traditional explanation for the social/evidential
distinction? The explanation for its use in scientific practice lies
in the important and recognizable differences between the things that it
separates. Latour and Woolgar also maintain that their explanation
of scientific activity “should not depend in any significant way on the
uncritical use of the very concepts and terminology which feature as part
of that activity.” The authors mean this as a way of preventing
“going native” as was dealt with above in the discussion about the anthropological
approach and the assumption of strangeness. To accept uncritically
a concept used in a culture may harm the observer’s ability to explain
and evaluate the use of that concept and its importance to the culture.
This is also connected to Bloor’s impartiality principle, which I have
said is unproblematic to a credit-giving view. Avoiding an uncritical
acceptance of a distinction used in scientific activity is a good idea.
But a dogmatic denial of the distinction should also be avoided.
I can avoid taking the social/evidential distinction for granted
but notice in observing the practice of science that some activities are
primarily about interacting with people and some activities are primarily
aimed at interacting with the world and finding out about it. If
my observations do reveal activities easily categorized in this way, I
can then accept the social/evidential distinction myself and explain its
existence in scientific practice by arguing that the scientists notice
it as easily as I did. However, Latour and Woolgar’s appraisal of
the use of the social/evidential distinction in the practice of science
is flawed. They say that scientists use the distinction in
agonistic activity to make results look better or worse depending on which
side of the distinction they can be placed. This claim is based on
a consistent error in their account; they equate “social” with “non-evidential.”
This error is clear from the beginning, when Latour and Woolgar use the
example that the assertion that “X observed the first optical pulsar” can
be undermined by formulating the situation in this way: “X thought he had
seen the first optical pulsar, having stayed awake three nights in a row
and being in a state of extreme exhaustion.” The authors claim
that, in the second version, the logic of science has “been disrupted by
the intrusion of social factors.” But that is plainly wrong.
Lack of sleep is not a social factor; it is a physiological one.
What Latour and Woolgar are trying to point to is the fact that what went
wrong in the episode was non-evidential, which it was. But they do
this by saying that it was social, which it wasn’t. So any scientist
using the second formulation to cast doubt on X’s results is using a non-evidential/evidential
distinction to do so, not a social/evidential one. Latour and Woolgar
are wrong to assert that scientific practice depends on the social/evidential
distinction and so their methodological reason for denying the distinction
disappears. I think Latour and Woolgar fall into this trap because
of the way the traditional view of science seems to equate social with
non-evidential as well. But a more sophisticated view which gives
science credit in a way faithful to the spirit of the traditional view
can be formulated that does not use the social/evidential distinction in
a problematic way. Recognizing that “social” is not equivalent to
“non-evidential” is the first step to formulating such a view. I
will pursue the issue further later.
Knorr-Cetina also denies the social/evidential distinction at
the outset of her study. Her defense of this denial overlaps with
Latour and Woolgar’s quite a bit. She offers four reasons for challenging
the distinction. The first is that we must recognize that “scientific
or cognitive strategies are also political strategies.” The
idea is that scientists choose strategies about methods and places of publication
to maximize their “scientific profits,” which are things like credibility
within the scientific community and can be used to negotiate for, say,
more grants. This picture is formed by applying an economic analysis
to scientific practices, but does not, by itself, indicate anything about
the epistemic character of science. Economic models are often used
to describe the workings of organisms or ecosystems--we may say that an
animal will avoid using resources in a certain activity because it is saving
them for another, more profitable one--but this does not indicate that
the behavior being described is not directed toward, or more or less successful
with regard to, survival. Similarly, applying an economic model to scientific
activity does not indicate that science is solely a political or economic
activity. It does not prevent a model in which the best way to acquire
scientific profits is through legitimate scientific merit--that is, being
good at finding out about the structure of the world. Surely political
forces are at work in every human activity, but things still get done.
There are political forces at work in the music industry, but that doesn’t
prevent good music from being made, just as political forces at work in
scientific inquiry do not prevent science from describing the world.
Either way, the recognition that there may be a political or economic dimension
to what would traditionally be thought of as evidential activity does not
necessitate the destruction of the social/evidential distinction; the interaction
of the two can be studied with the dichotomy in place.
Knorr-Cetina’s second argument is a kind of combination of Bloor’s
desire to avoid a sociology of scientific error and Latour and Woolgar’s
claim about the distinction’s use as a resource in scientific practice.
I agree that we should avoid a sociology of error in the study of science,
but I don’t think that this necessitates a breakdown of the social/evidential
dichotomy. Just where I place myself on the issue of agreeing with
the strong programme but not the related dichotomy destruction has to do
with my discussion of the symmetry principle above and will be discussed
in detail below.
Third, Knorr-Cetina points out the problem of separating the
social and evidential factors in a situation “such as the policy field
where many areas have been ‘scientized’ (verwissenschaftlicht) by the hegemony
of science. Before the mutual influence of social and cognitive variables
can be determined, they must first be conceived of and measured independently.”
The “scientization” of a field is not necessarily a problem to a credit-giving
view of science, of course. If science is good at finding out about
the world, its influence in other areas might be beneficial. A question
of circularity in assuming science is not descriptive in order to prove
that science is not descriptive seems to be looming here. I am willing
to give social constructivists the benefit of the doubt in that they cannot
be faulted for trying to avoid an unquestioned assumption about the value
of science in their investigation of that value; but they always seem to
swing so far the other way toward an unquestioned assumption of the non-descriptive
nature of science. Regardless, it looks to me as if the social/evidential
distinction would help sort out when science’s hegemony is justified and
when it is not. In fact, the last sentence of the above quote from
Knorr-Cetina seems to explicitly invoke the distinction as a way of addressing
the issue. How are social and cognitive (evidential) factors to be
“conceived of and measured independently” without an appeal to the social/evidential
distinction? Knorr-Cetina’s final argument is based on Latour and
Woolgar’s arguments to the effect that the distinction limits the scope
of the sociology of science. As such, it is susceptible to the same
criticism. If sociology is limited in such a way, then so be it.
It is worth mentioning again that Lynch does not invoke a breakdown
of the social/evidential distinction. He argues that familiarity
with the science under study eliminates the desire to destroy the distinction
by making it clear which parts of science go on either side of the dichotomy.
This casts doubt on the validity of Knorr-Cetina’s contention that the
close observation of specifically cognitive aspects of science entails
the abandonment of the distinction.
The social/evidential distinction and the strong programme
Microsociological case studies of science and the strong programme
in macrosociology of science obviously share some of the same commitments.
They are both concerned with revealing the socialness (or the impossibility
of discriminating the social from the rest) of science. They both
endorse using the methods of the social sciences to study science, an activity
traditionally thought not to need such an analysis because of its superior
epistemic status. The strong programme’s commitment to recognizing
the similarities between the generation of true beliefs and the generation
of false beliefs is undoubtedly a conceptual source for the microsociological
constructivist’s contention that the distinction between the social and
evidential aspects of science is problematic. Bloor argues that we
must look at true and false beliefs in the same way. This indicates
that there is not as much of a difference as we have traditionally thought
between true and false beliefs. Aspects of their causation, specifically,
are not as different as we have traditionally thought. This leads
to the idea that the parts of science that are usually thought to lead
to either true or false beliefs might not be as different from each other
as previously thought. If, in line with Bloor’s picture of the traditional
“sociology of error,” the things that lead to true and false beliefs are
evidential and social processes, respectively, then the idea that
these are not as different as traditionally thought leads to the denial
of the social/evidential dichotomy.
Both Latour and Woolgar’s and Knorr-Cetina’s defenses of the
destruction of the social/evidential distinction offer quotes from Bloor
as support. Latour and Woolgar say they are particularly interested
in the impartiality requirement in that it lends support to their idea
that the observer should not adopt a position as to the truth or falsity
of the beliefs being studied. I have dealt with the problems of the
assumption of strangeness above. I now want to move on to the relationship
between Bloor’s strong programme, which I have said a credit-giving view
of science can mostly endorse, and the denial of the social/evidential
distinction, which I have indicated I, as a holder of a credit-giving view,
am not sympathetic to. A credit-giving view of science can only mostly
support Bloor’s strong programme because he insists on a narrow interpretation
of the symmetry principle, as I argued above. A strong programme
with a broad version of the symmetry principle is unproblematic for a credit-giving
view. The symmetry principle will factor largely in the following
discussion, as I focus on a narrow version of it as the source of the strong
programme’s problems, and assume the possibility of a credit-giving view
accepting the other three requirements on the basis of my above discussion
of the strong programme. I have also argued that the reasons Latour
and Woolgar and Knorr-Cetina give do not justify the destruction of the
distinction, and that a more traditional view of the distinction may not
be as problematic as they suggest. My position on the dichotomy will
become more clear in the following discussion.
It will be instructive to start with an examination of the traditional
view of science in the terms used by both the macro- and the microsociological
constructivists. We have two basic variables: the position on the
symmetrical explanation of belief-formation and the position on the status
of the social/evidential dichotomy. As I mentioned, the symmetry
principle is the focus of my discussion of the strong programme, and acceptance
of it or different versions of it will place a view of science in different
positions. Similarly, the acceptance or denial (or different degrees
of acceptance and denial) of the social/evidential distinction will locate
a view of science in different positions. In what follows, I will
explore some of the combinations of these two variables.
On the traditional view of science, true beliefs are caused solely
by truth, and false beliefs are caused by (non-truth) causes, like social
processes. There is no symmetry in belief-causation, and the social/evidential
dichotomy is in place. Social constructivists think this view of
science is problematic, and I agree. It is far too simple to say
that true beliefs are caused solely by their truth and any other cause
leads to false beliefs. Truth is not, per se, a cause of belief.
Human beings do not interact with truth. Humans interact with things
like tables and chairs. Of course, the state of the world in regard
to those tables and chairs can affect belief-formation. So if I walk
into a room with a table and four chairs and I see the tables and chairs,
I can form a true belief that one table and four chairs are present in
the room. The state of affairs in the world affected the process
of belief-formation through the fact that light bounced off the tables
and chairs and hit my eyes. My sense perception of the tables and
chairs is what caused my belief, but the character of that belief was determined
to some extent by the state of affairs in the world, which is what allowed
me to form a true belief. To say that truth alone caused my belief
would be to tell too simple a story, and to ignore the ways in which this
process can go wrong. That is, the direct cause of my belief is sense
perception. But I could form another belief through similar sense
perception that is wrong. Let’s say I walk into a room with half
a table, two chairs, and a cleverly placed mirror. My false belief
that one table and four chairs are present in the room is then formed by
sense perception. Saying that the former belief is caused by truth
while the latter by a mirror ignores the similarities between the two situations.
This is one of the significant insights behind the strong programme.
A view which occupies strongly social constructivist positions
on the two issues would include an undifferentiated set of causes which
yields both true and false beliefs. This view has narrow symmetry
in that it indicates that true and false beliefs can be brought about by
the same specific kinds of causes. This narrow symmetry entails the
broader symmetry that I described sympathetically in my section on Bloor
above, where larger classes of belief causes are thought to be able to lead
to both true and false beliefs (e.g., sense perception). If the same
specific kind of cause can lead to both, then obviously any larger class
that the cause belongs to will be able to yield both kinds of belief.
Causes of belief are not different enough to warrant separation into social
and evidential categories on this view, so the social/evidential distinction
is not in place. There is just a pool of similar causes which sometimes
bring about true beliefs and sometimes bring about false beliefs.
Glaringly absent from this story is any indication as to how
some beliefs acquire status as true while others do not. Social constructivism’s
answer to this question is where it gets its name: “true” beliefs are produced
through a process of fact-construction which is shaped--or completely determined--by
social processes. These social processes are in the pool of causes
along with sense perception and anything else that can cause a belief.
However, this still does not answer the question as to why beliefs can
be differentiated when their causes cannot. Bloor’s answer is that
beliefs are separated according to pragmatic concerns. Latour and
Woolgar offer a story about the solidification of a statement into a fact
through agonistic processes. What their answers have in common is
that truth-status seems to be acquired after belief-formation, in a contestation
period, rather than through the circumstances of a belief’s generation, or through
reference to facts in the world. According to this picture, the causes
leading up to a belief do not determine its truth value. As I mentioned
before, with narrow symmetry there is no systematic way of generating true
beliefs.
Narrow symmetry also leads to a problem with observable entities.
Presumably we have been talking about beliefs about things like electrons
so far, and the story that beliefs about unobservables are largely or entirely
formed in accord with social concerns may sound plausible. But what
about beliefs about things like tables and chairs? I form a belief
about the presence of a chair through seeing it. Sight is the main
(if not only) cause of my belief, and the belief is accurate in that there
is a chair off of which light is bouncing and hitting my eyes. The
justification of my belief comes from the existence of the chair and my interaction
with it. The situation with observable objects does not fit into
the social constructivist story very well because it is so easy to see
how reality can be a determining factor in the character of human belief.
Bloor deals with this by admitting we can be right about observables but
denying that the truth or falsehood of scientific theories can be determined
by empirical investigation. For Bloor, there must be some kind
of cut-off point between observables and unobservables. However,
if an account can be formulated which builds up complex scientific theories
from unproblematic observation of ordinary objects, and which includes a continuum
of degrees of observability rather than a sharp distinction, then Bloor’s
constructivism may turn out to be incoherent. Latour and Woolgar
do not address the differences between observables and unobservables.
This, along with the extreme nature of all of their claims, leads me to
believe that they might actually endorse a social constructivist view of
observables. To argue against this directly would mean fighting the
age-old battle against global, epistemological skepticism, an endeavor
which will not fit in the scope of the present project. Let me simply
say that causal interaction of humans through their sense organs with objects
in the world is a good way of forming true beliefs about those objects
because it allows the state of reality to be a determining factor of the
belief.
I have argued against the narrow construal of the symmetry principle
on which social constructivists depend. I have also argued against
the social constructivists’ reasons for destroying the social/evidential
distinction. Let us examine a position which takes these criticisms
into account but is also sympathetic to the social constructivists’ criticisms
of the traditional view. I have said that, although the narrow symmetry
principle is problematic, a broad symmetry principle is helpful to the
study of science. Our next view of science endorses broad symmetry.
It also endorses the social/evidential dichotomy, in light of the apparent
weakness of the social constructivists’ arguments for its destruction.
This view recognizes that both true and false beliefs are caused, although
true beliefs are caused by evidential causes, and false beliefs are caused
by social causes. This view allows for causes of the same general
type--like sense perception--to lead to both true and false beliefs.
To use my example of tables and chairs: in the room with one table and
four chairs, my true belief is caused by an evidential process of seeing
tables and chairs. In the room with half a table, two chairs, and
a mirror, my false belief that there are four chairs and one table is caused
by seeing those objects and their reflections, and the social factor of
someone’s mischievous placement of the mirror. So the belief-forming
process in the first room is characterized as evidential, while the process
in the second room is characterized as social.
This account ought to sound suspicious. Both beliefs are
formed through sense perception in the same way; it doesn’t make sense
to place them on separate sides of the social/evidential dichotomy.
It’s not really anything social about the second room that causes my false
belief; it’s the behavior of light and reflective surfaces. What
is different about the two situations is not so much in the nature of the
belief-forming processes being used as in the fact that one of the situations
is a little harder to figure out. On the other hand, if I were more
careful in forming my belief in the second room, I would have noticed the
line where the mirror and table meet, or I would have seen my own reflection
in the mirror or something else that would ruin the illusion. So
my carelessness, an arguably social factor, did contribute to my formation
of a false belief. In another version of the table-and-chairs-with-mirror
story, let’s say that the whole thing happened in a carnival funhouse.
In this case I expect the presence of trick mirrors, and so I quickly form
the correct belief that there are only half a table and two chairs.
In this case an arguably social factor facilitated my true belief.
What I hope this example illustrates is that the difference between
good and bad belief forming processes (those which tend to lead to true
and false beliefs, respectively) does not line up exactly with the distinction
between social and evidential processes. Processes without a social
dimension can lead to false beliefs. We can be tricked by reality
in certain ways. Priestley’s experiments which seemed to support
the phlogiston theory did not do so because of the influence of social
factors; they did so because of incomplete observation of the relevant
evidence. At the same time, social processes do not necessarily prevent
true beliefs. There are many aspects of modern science that are most
appropriately categorized as social but which facilitate the formation
of true beliefs. Modern laboratories could not function without specialization
of labor, for instance. No one scientist could do everything it takes
to execute an experiment in high energy physics. Assigning different
people different tasks at a particle accelerator is necessary to the formation
of any true beliefs that result from experiments at the facility.
In addition to this problem of imperfect alignment between the bad/good
and social/evidential belief-forming processes, I think the last view described
faces a more basic problem: its two commitments are
in conflict with one another. The presence of symmetry in belief
explanation is in tension with the unquestioned separation of the social
and evidential--inasmuch as that separation is meant to shed light on the
issue of which belief-forming processes yield true beliefs and which yield
false. The idea behind broad symmetry is that any one general category
of belief-formation can yield either true or false beliefs, depending on
more specific aspects of the application. Of course, the social/evidential
distinction itself consists of a pair of general categories. So the
broad symmetry principle indicates that such categories can lead to either
true or false beliefs. The symmetry can hold only of the social category,
however, since it is part of the requirement for membership in the evidential
category that a process lead to true beliefs. This makes sense, I
think, in light of the comments above about the lack of exact correspondence
between the social/evidential and the bad/good belief-forming process distinctions.
We want to keep broad symmetry but resolve the conflict between
it and the social/evidential distinction. My answer is a weakened
social/evidential distinction. Destroying the distinction is unnecessary
and unhelpful. There simply is a difference between social and evidential
factors in belief-production and scientific activity. The distinction
is not, however, the answer to the question of which kinds of belief-formation
to endorse as leading to truth, or as good science. This is the useful
insight upon which social constructivism is based. But thinkers like
Bloor, Knorr-Cetina, and Latour and Woolgar are too extreme in the conclusions
they draw from this insight. The move from a less-than-exact fit
between the social/evidential dichotomy and bad/good science distinction
to the claim that the former distinction does not exist or that all of
science is constructed is not justified. The unfortunate result of Latour
and Woolgar’s equation of the social with the non-evidential is that, when
they attempt to destroy the dichotomy in favor of the social in response
to this inexact fit, they take themselves to have shown that all of science
is non-evidential.
Both steps in this reasoning are wrong. We can recognize the poor
line-up but keep both distinctions.
In line with these points, I offer a view of science which includes
broad symmetry, narrow asymmetry, and an intact but nonexplanatory social/evidential
distinction. That is, including a belief-forming process on one or
the other side of the social/evidential distinction is not necessarily
taken to indicate whether it is good or bad--except that everything on
the evidential side should lead to good science. On this view, we
have broadly symmetrical explanation of beliefs, and social and evidential
processes are differentiated but the distinction is not the deciding factor
in--nor does it necessarily correspond to--whether the cause leads to
a true or a false belief.
Instead of using the social/evidential distinction to separate
true beliefs from false ones, I suggest that a good cause for a belief is one
which tends to lead to true beliefs, and a bad cause is one which tends
not to. Good causes of beliefs are processes like careful
sense perception and the careful and correct use of scientific instruments.
I may have some explaining to do in mentioning things like “sense perception”
as good causes after arguing that sense perception--and any general category
of causes of belief--can lead to either true or false beliefs. The
key is the qualifying phrase I have usually tacked on to the end of that
claim, “. . . depending on the specific circumstances of the application
of the process.” Belief explanation in specific situations should
be asymmetrical. That is, once you get to a specific level, aspects
of the belief’s causation do determine its truth value. In the table-and-chairs-with-mirror
example, for instance, sense perception is the general cause of both the
true and false beliefs. Looking closely at the two situations, however,
we find that the false belief is easily explained and corrected by taking
into account the behavior of mirrors and examining the room more carefully
than at first. In Part II I will articulate and defend my claim that we
can separate belief-forming processes into good and bad ones, and show how
this allows us to give credit to science.