The Intentional Perspective
by Hapless Tiki

An essay on Folk Psychology and Daniel Dennett’s Intentional Stance model in light of Ronald Giere’s Scientific Perspectivalism.

“…if it isn’t literally true that my wanting is causally responsible for my reaching, and my itching is causally responsible for my scratching, and my believing is causally responsible for my saying…if none of that is literally true, then practically everything I believe about anything is false and it’s the end of the world.” (Fodor, 1990)
“…our commonsense conception of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and the ontology of that theory will eventually be displaced…by completed neuroscience.” (Churchland, 1981)
Two vivid statements encapsulate extremes on a continuum which has preoccupied philosophers of mind for the better part of a century. Many contemporary philosophers feel a tension so acute between two fundamental sectors of their intellect that otherwise sober, responsible authors find themselves prone to attacks of alternately apocalyptic and transcendental rhetoric. On the one hand we have what appears to be a powerful, progressive, consistent, elegant, and fruitful worldview accepted by most contemporary philosophers, referred to as metaphysical or ontological naturalism: “reality is exhausted by nature…and the scientific method should be used to investigate all areas of reality.” (SEP: “naturalism”, 2009) On the other we find that paragon of ubiquity, idiomatically entitled “folk psychology”: that language game which includes, chiefly, the propositional attitudes belief and desire, “a framework of concepts roughly adequate to the demands of everyday life, with which the humble adept comprehends, explains, predicts and manipulates [social interactions].” (Churchland, 1989) Many philosophers find it as difficult to deny the justification and efficacy of the naturalistic position and its scientific methods as to conceive of a situation in which the whole of our belief-desire psychology must be abandoned. Hence the profligate spillage of sweat, ink, and tears upon the project of reconciling folk psychology and science, dubbed “naturalizing the intentional”. Fodor and the strong intentional realists are looking for
“…at a minimum, the framing of naturalistic conditions for representation…something of the form ‘R represents S’ is true iff C where the vocabulary in which conditions C is couched contains neither intentional nor semantic expressions.” (Fodor, 1990)
If this requirement could be met, it seems we would indeed have integrated intentionality into the scientific order. However (as Fodor would admit), no such account is yet within our grasp, despite the honest toil of many titanic intellects. It is beginning to look more and more as though, if satisfaction of this sort of formula is necessary to successfully naturalize the intentional, Churchland and the eliminativists will win the day and it will be the end of the realists’ world. Nevertheless, it is not the aim of this paper to rehash the debate between the realists and the eliminativists, but rather to provide a different sort of solution, or perhaps better yet: a deflation of the problem. I would like to argue that, given the views of philosophy of mind and science that we ought to hold anyway (for other reasons, beyond the scope of this essay), there need be no tension between naturalism and folk psychology. Herein I will present a case that a “perspectival” account of the philosophy of science as expounded by Ronald Giere, and an instrumentalistic account of intentionality as presented by Daniel Dennett’s “intentional stance”, can be united in such a way as to provide a position which is both thoroughly naturalistic and maintains a robust intentional vocabulary, capable of predictions as successful and explanations as satisfying as those found in non-contentious cases of science. I will follow tradition and discuss primarily “belief” as the paradigm case of an intentional object.
For Dennett, there are not, nor could there be, any genuine “semantic engines” in the world. The closest we have found is the human brain, which appears to “discover what its multifarious inputs mean, to discriminate their significance and act accordingly.” However, physiology and cognitive science have shown us that the brain is “just a syntactic engine; all it can do is discriminate its inputs by their structural, temporal, and physical features.” Dennett is a strong proponent of evolutionary explanations, and often reminds us that the brain is an artifact, designed by natural selection to “figure out what to do next”. Dennett gives us a way out of this apparent disconnect between bio-neural teleology and physiological performance-capacity by claiming that the brain “could be designed to approximate the impossible task…by capitalizing on close fortuitous correspondences between structural regularities and semantic types.” (Dennett, 1981) This move, away from Fodor and Searle’s cry for scientific discovery of genuine semantic engines in nature, does not go so far as to eliminate all talk of semantics from science, but seeks to carve out a “mild realist” niche from which we may capitalize on what, from our perspective, are “real patterns” in the behavior of a certain subset of complex adaptive systems in our environments, without forcing us into any specious ontological commitments. In simple and direct language, the intentional stance is:
“the strategy of treating [an] object whose behavior you want to predict as a rational agent with beliefs and desires and other mental states exhibiting intentionality…Any system whose behavior is well predicted by this strategy is, in the fullest sense of the word, a believer.” (Dennett, 1981)
Dennett makes much of the indispensability of the intentional stance. His favorite example is that of playing chess against a computer running a sophisticated program: an object that everyone agrees is (merely) a syntactic engine. He argues that the only way to win the game of chess is to treat the computer as an intelligent agent with beliefs and desires in a full and real sense: to assume that it “knows” the rules of chess and the position of the pieces, that it “desires” to win the game, etc., and to act accordingly. In theory, one could take a “physical stance”, examine every aspect of the computer at the finest-grained description our current science is capable of providing, and use this information to predict the precise position of the king’s rook on the 12th turn; but due to technological limitations, combinatorial explosion, and other unfortunate epistemic constraints, this sort of calculation is impossible. Instead, one could attempt to move to a “design stance”, wherein one would attempt to discover the tendencies of the particular chess program, reverse engineering the software to such an extent that predicting its moves would be second nature. But of course this could not be accomplished within the span of a single game, and with chess programs as advanced as they were even in the late ’90s, it too would be an impossible task. From a practical standpoint, if your goal is to win a particular one-off game of chess, one can only treat the computer as an agent with beliefs and desires: an intentional system. The instrumental conception of intentionality begins to come into focus: what it is to be a believer is to be an intentional system; what it is to be an intentional system is to be reliably predicted by the attribution of intentional states to that system; and the decision to adopt the intentional stance is a pragmatic choice, which is, in many situations, the only practical option.
We can be comfortable with the entailment that there may be no fact of the matter as to what some system “really does believe”, for under this conception a system does not have a belief in virtue of possessing some corporeal object. Rather, the belief is an abstract object, an “instrument” if you will. The instrumentalist claims that “beliefs” do not need to exist, in the sense of being instantiated in the physical world; even in a world so lacking, what do exist are certain dynamical systems which manifest complex behavioral patterns which really are successfully predicted by attributing intentionality to them.
“I suggest that folk psychology might best be viewed as a rationalistic calculus of interpretation and prediction, an idealizing, abstract, instrumentalistic interpretation method that has evolved because it works and works because we have evolved.” (Dennett, 1981b)
So what, then, is this idealized abstract calculus within which reside our treasured intentional objects? If we are to put all our beliefs in one basket, we will need some assurances. This is where Giere’s program of “scientific perspectivalism” comes in. We begin by acknowledging that scientific claims are linguistic, and that language is a cultural artifact. Giere therefore urges that “our focus should be on representation…”, and summarizes the scientific use of language with the following formula:
(I) S uses X to represent W for purposes P.
S can be an individual scientist, a specific scientific group, or even a community. W is meant to be an “aspect of the real world”. But the pressing question then becomes, of course, “What are the values of the variable X?” (Giere, 2004) Whatever further criteria we apply as constraints on X, X will be linguistic. This much is clear, but the interesting question is: what sort of linguistic representation ought it be? The conclusion of many philosophers of science is that, when applying (I) to scientific vocabularies, what takes the place of X is “theories” or “models”, within which terms of art are combined with an ontology and a set of theoretical axioms to construct some “lawlike empirical generalization”. For example, Newton’s gravitational model is often interpreted to include a linguistic object such as “All bodies attract…” Maintaining our concentration on the language, we might then wonder, “To what does this term ‘body’ refer?” The standard picture has it referring to real objects in the real world. Under this interpretation, laws make direct empirical claims about the world, and the job of empirical science is to go forth, find “bodies”, and test them for “attractiveness” (so to speak). But here Giere brings our attention to a common occurrence, exemplified by Newton, in which the scientific vocabulary cannot be making claims about the world, because the entities invoked in the ontology of the theory are admitted by its own adherents not to really exist. In the Newtonian case, “bodies” means “mass points”, yet within Newtonian physics “No real object can be a mass at a point”. So to what does the vocabulary of our scientific models refer, if not to objects in the world? Giere’s answer is “idealized abstract objects”. This view of representation has been nicknamed the “Giere Triangle” by Paul Teller, according to which:
“…language connects not directly with the world, but rather with a model, whose characteristics may be precisely defined. The connection with the world is then by way of similarity between a model and designated parts of the world.” (Giere, 2000)
If we read Newtonian theory as “defining abstract objects” rather than “describing real objects”, the job of empirical science is to “test the fit” of this particular model to the world (Giere, 2000). Science, on this picture, is an extremely pragmatic endeavor, a sort of “social conversation” (Rorty, 1981) in which the ultimate judge of an apt fit is the available technology, accepted values, and current data-set of the scientific community. To test this “representational” understanding of models against a clear case, Giere asks us to apply it to a simple applied mathematical model of
“an auto moving away in a straight line from an intersection at velocity v having started at time zero a distance away: d(t) = vt + d0. One may say that, in the model, this equation is true. What one cannot say is that the equation is true of the position of a real auto…The question, as always, is how similar the real situation is to the model…” (Giere, 2000)
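Giere’s “question of fit” can be given a minimal illustration. The following sketch is mine, not Giere’s; the velocity, starting distance, and “observed” positions are invented for illustration, and “fit” is measured here simply as the worst deviation between the idealized model and the hypothetical measurements.

```python
# A minimal sketch (illustrative only, not from Giere) of "testing the fit"
# of the idealized kinematic model d(t) = v*t + d0 against hypothetical
# observations of a real auto.

def model_distance(t, v, d0):
    """Position predicted by the abstract model: d(t) = v*t + d0."""
    return v * t + d0

# Hypothetical measured positions (meters) of a real auto at t = 0..4 s.
# The real auto deviates slightly from the idealized constant-velocity model.
observations = [(0, 5.0), (1, 14.8), (2, 25.3), (3, 34.9), (4, 45.2)]

v, d0 = 10.0, 5.0  # assumed model parameters: 10 m/s, starting 5 m away

# Within the model, d(t) = v*t + d0 is simply true; against the world, we can
# only ask how similar the real situation is to the model.
worst_fit = max(abs(model_distance(t, v, d0) - d) for t, d in observations)
print(f"Worst deviation: {worst_fit:.2f} m")  # prints: Worst deviation: 0.30 m
```

The point of the sketch is Giere’s: the equation is true *in the model* by definition; the empirical question is only whether the designated part of the world is similar enough to the model for our purposes.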
Returning to (I), we can now complete our interpretation of X as: a representational model which defines an idealized, abstract object which can be empirically tested for fit with some designated object in the world. Having presented both prongs of my proposed perspective, witness how smoothly they combine to provide my proposal for a version of naturalized intentionality. We treat the intentional stance as a representational model. Again filling in the variables of (I), we would have something like this:
(II) Folk Psychologists use the intentional stance to represent sufficiently complex systems in their environments for the purposes of predicting and explaining the behavior of said systems.
The linguistic entities created by folk psychology are then construed not as making truth-functional claims about existent entities in the world, but rather as defining a set of abstract objects including: a rational agent, that agent’s beliefs and desires, behavior caused by the conjunction of these intentional objects, etc. It is then the task of a pragmatic empiricism to see how well we think this model fits the natural world. It is the province of the social conversation of science to determine the axioms of this theory, though Dennett does provide us some candidates, such as: (1) always attribute the beliefs a system ought to have, and (2) begin by attributing perfect rationality and then revise downward as further evidence comes in.
For example, we see a woman in an expensive dress standing outside at a bus stop as it begins to rain. Our task is to predict her behavior. Clearly neither physics nor any other special science will be of any use to us in this position, but the intentional perspective is tailor-made. We note that she exhibits a facial expression and body language which we have, through prior observation of the species, correlated with displeasure. This leads us to attribute to her a dislike of her present circumstances, which we find understandable, as getting wet is both physically uncomfortable and potentially detrimental to the integrity of her garment. We note that she glances at the sky and her immediate surroundings, and attribute to her many beliefs, selecting some as relevant, such as “It is now-here raining” and “Transit shelter there”. We attribute to her the ability to reason “Current relocation under shelter = rain-protection”. Based upon this and other “obvious” determinations of our intentional systems model, we predict that she will move into the shelter.
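The predictive pattern just described can be caricatured in a few lines. The sketch below is purely illustrative and mine, not Dennett’s: the attributed beliefs, the single desire, and the `predict_action` helper are all invented, and “perfect rationality” is modeled crudely as choosing whichever available action the attributed beliefs say will satisfy the most attributed desires.

```python
# A toy sketch (illustrative only, not Dennett's) of the intentional-stance
# pattern: attribute beliefs and desires, assume rationality, predict the
# action that best serves the attributed desires given the attributed beliefs.

def predict_action(beliefs, desires, options):
    """Assume perfect rationality: pick the option whose believed outcome
    satisfies the most attributed desires."""
    def desires_satisfied(option):
        believed_outcome = beliefs.get(option, set())
        return len(desires & believed_outcome)
    return max(options, key=desires_satisfied)

# Attributions for the woman at the bus stop:
beliefs = {
    "move under shelter": {"stay dry"},  # attributed belief: shelter blocks rain
    "stay put": set(),                   # attributed belief: staying means getting wet
}
desires = {"stay dry"}                   # attributed desire

print(predict_action(beliefs, desires, ["stay put", "move under shelter"]))
# prints: move under shelter
```

Crude as it is, the sketch makes the structural point: the prediction flows entirely from the attributed intentional states plus a rationality assumption, with no appeal to the physical or design stance.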
The question of whether any scientist, under even the best possible circumstances, could, even in this simple case, discover something about the natural world such that it satisfied a Fodor-style requirement is an interesting and valuable one. However, in the absence of such a satisfaction, and in the face of powerful philosophical arguments that one may not be forthcoming (Stich, 1985), we still have a model which fits what we observe to a striking degree of exactitude: the intentional stance. So, to the charge of having constructed a “behavioristic calculus” we plead unabashedly guilty. A behavioristic calculus it is, but one could argue: what more is any other science? Cannot physics be described, among other things, as the systematic, empirical study of the behavior of subatomic particles? It has its methods, its tools, its models through which advocates perceive the results of their investigations and construct their theories. It has a particular ontology consisting of quarks and leptons and a revolving door of other playful monikers. Physics is a perspective that we can choose to adopt to address a certain subset of questions it is well equipped to answer. The intentional stance parallels this perfectly, simply with the substitution of a shifting subset of complex adaptive systems as the domain of study. Chief among the ontological posits of this perspective are those items with which philosophers of mind have been so concerned: agents, beliefs, desires, etc., which turn out to be defined as whatever fills the proper roles in our currently accepted model of intentional systems, nothing more, nothing less. If cognitive science happens to find a neural correlate which satisfies a substantial enough portion of the requirements to convince us that we ought to make some sort of quasi-reductive convergence of intentional stance theory with some more primitive branch of “hard” science, so much the better.
We are already beginning to benefit from this sort of consilience in the biology-chemistry-physics unification. However, even should such a convergence not be in the offing, this is not to say that intentional systems theory is somehow lacking, or unscientific. If, from the perspective of another science, the ontology of intentional systems theory turns out to be “non-existent”, we say: that’s fine! Nevertheless, the predictions gleaned from the application of intentional language, and the epistemic comfort provided by explanations couched in it, will still be successful. And in this case, we have all that we need for a naturalized intentionality.

© 2011 Hapless Tiki
Added on October 11, 2011