The Language of Thought Hypothesis
Subsequent arguments for LOTH are inferences to the best explanation. They appeal to supposed features of human cognition, such as productivity, systematicity, and inferential coherence, arguing that these features are best explained if LOTH is true. Important objections to LOTH have come from those who believe that the mind is best modeled by connectionist networks, and from those who believe that at least some mental representation takes place in other formats, such as maps and images.
This article has three main sections. The first explains the thesis itself. The second describes the major arguments in favor of LOTH. The third describes some important problems for LOTH and objections to it. LOTH is the claim that mental representation has a linguistic structure. A representational system has a linguistic structure if it employs both a combinatorial syntax and a compositional semantics (see Fodor and Pylyshyn for this account of linguistic structuring).
Formal languages are good examples of languages possessing both combinatorial syntax and compositional semantics. For example, sentential logic employs both atomic and compound representations, and the constituents of its compound representations are themselves either atomic or compound. Thus it possesses a combinatorial syntax. Moreover, the semantic content of a representation within sentential logic (generally taken to be a truth-value, either TRUE or FALSE) is a function of the content of its syntactic constituents, together with the overall structure and arrangement of the representation.
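The two properties can be made concrete with a small sketch. The following Python fragment is illustrative only; the tuple-based encoding and the function name are assumptions, not anything from the literature. Its syntax is combinatorial (atoms and compounds, with compounds built from atoms or other compounds), and its semantics is compositional (the truth-value of a compound is a function of the truth-values of its constituents plus its structure):

```python
# Illustrative sketch only: the tuple encoding and names are assumptions,
# not anything from the article. Atoms are strings; compounds are tuples
# whose parts are themselves atomic or compound (combinatorial syntax).

def evaluate(rep, valuation):
    """Truth-value of a representation, computed compositionally:
    the content of a compound is a function of the contents of its
    syntactic constituents plus its overall structure."""
    if isinstance(rep, str):                       # atomic representation
        return valuation[rep]
    op, *parts = rep                               # compound representation
    if op == "not":
        return not evaluate(parts[0], valuation)
    if op == "and":
        return evaluate(parts[0], valuation) and evaluate(parts[1], valuation)
    if op == "or":
        return evaluate(parts[0], valuation) or evaluate(parts[1], valuation)
    raise ValueError(f"unknown connective: {op}")

valuation = {"A": True, "B": False, "C": False}
print(evaluate(("and", "A", ("or", "B", ("not", "C"))), valuation))  # True
```

Because compounds may contain other compounds, representations of arbitrary complexity can be built from a finite stock of atoms and connectives, and each is automatically assigned a truth-value.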
Therefore it also possesses a compositional semantics. LOTH amounts to the idea that mental representation has both a combinatorial syntax and a compositional semantics. A common way of casting it is as the claim that thoughts are literally sentences in the head. This way of explaining the thesis can be both helpful and misleading. First, it is important to note that sentences can be implemented in a multitude of different kinds of media, and they can be written in a natural language or encoded in some symbolic language.
For example, they may be written on paper, etched in stone, or encoded in the various positions of a series of electrical switches. They may be written in English, French, first-order logic, or Morse code. LOTH claims that at a high level of abstraction, the brain can be accurately described as encoding the sentences of a formal language.
Second, it is equally important to note that the symbolic language LOTH posits is not equivalent to any particular spoken language but is the common linguistic structure in all human thought. Third, the posited language is not appropriately thought of as being introspectively accessible to a thinking subject.
However, that the sentences of this language are not introspectively accessible should not be taken to indicate that they are not causally efficacious in the production of behavior. On the contrary, they must be, if the theory is to explain the production of rational behavior. Casting LOTH as the idea of sentences in the head can thus be useful, if understood appropriately: the sentences belong to a species-wide formal language, are encoded in the operations of the brain, and are not accessible to the thinker.
Representational systems with combinatorial syntax and compositional semantics are incredibly important, as they allow for processes to be defined over the syntax of the system of representations that will nevertheless respect constraints on the semantics of those representations. Rules of inference in formal logic are good examples: they are defined purely over the syntactic form of representations, making no mention of their semantic properties. Nevertheless, the rules respect the following semantic constraint: given true premises, correct application of them will result only in true conclusions.
Processes defined over the syntax of representations, moreover, can be implemented in physical systems as causal processes. Hence, representational systems possessing both combinatorial syntax and compositional semantics allow for the construction of physical systems that behave in ways that respect the semantic constraints of the implemented representational system.
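A minimal sketch of this idea, with all names and encodings assumed for illustration: the rule below derives conclusions by inspecting only the shape of representations, never their truth-values, yet any conclusion it draws from true premises is itself true.

```python
# Illustrative sketch only: names and the tuple encoding are assumptions,
# not anything from the LOTH literature. The inference rule below is
# defined purely over syntactic form: it pattern-matches the shape
# ("if", P, Q) and never consults a truth-value.

def modus_ponens(premises):
    """From P and ("if", P, Q) in the premises, derive Q."""
    derived = []
    for rep in premises:
        if isinstance(rep, tuple) and rep[0] == "if" and rep[1] in premises:
            derived.append(rep[2])
    return derived

def true_in(rep, valuation):
    """Semantics, used only to check the constraint the rule respects."""
    if isinstance(rep, str):
        return valuation[rep]
    _, p, q = rep                      # ("if", P, Q)
    return (not true_in(p, valuation)) or true_in(q, valuation)

premises = ["P", ("if", "P", "Q")]
conclusions = modus_ponens(premises)
print(conclusions)  # ['Q']

# Semantic constraint: under a valuation making the premises true,
# the derived conclusions come out true as well.
valuation = {"P": True, "Q": True}
assert all(true_in(p, valuation) for p in premises)
assert all(true_in(c, valuation) for c in conclusions)
```

Since `modus_ponens` consults only the form of its inputs, it is exactly the kind of process that could be implemented in a physical system as a causal process while still preserving truth.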
Modern digital computers are just such machines: they employ linguistically structured representations and processes defined over the syntax of those representations, implemented as causal processes. Since LOTH is the claim that mental representation has both combinatorial syntax and compositional semantics, it allows for the further claim that mental processes are causal processes defined over the syntax of mental representations, in ways that respect semantic constraints on those representations (Fodor; Fodor and Pylyshyn). This further claim is the causal-syntactic theory of mental processes (CSMP).
LOTH and CSMP together assert that the brain, like a digital computer, processes linguistically structured representations in ways that are sensitive to the syntax of those representations. Indeed, the advent of the digital computer inspired the computational theory of mind (CTM). This will be further discussed below. RTM, the representational theory of mind, is the thesis that commonsense mental states (the propositional attitudes, such as believing, desiring, hoping, wishing, and fearing) are relations between a subject and a mental representation.
According to RTM, a propositional attitude inherits its content from the content of the representation to which the thinker is related. For example, Angie believes that David stole a candy bar if and only if there is a belief relation between Angie and a mental representation, the content of which is David stole a candy bar.
Thus, (R1) is a schema: for any subject S, attitude A, and proposition P, S has A toward P if and only if there is a relation (corresponding to A) between S and a mental representation whose content is P. In the case of belief, for example: S believes that P if and only if S bears the belief relation to a mental representation with the content P. RTM is a species of intentional realism: the view that propositional attitudes are real states of organisms, and in particular that a mature psychology will make reference to such states in the explanation of behavior. (For debate on this issue see, for example, Churchland, Stich, and Dennett.) One important virtue of RTM is that it provides an account of the difference between the truth and falsehood of a propositional attitude (in particular, of a belief).
On that account, the truth or falsehood of a belief is inherited from the truth or falsehood of the representation involved. If the belief relation holds between Angie and a representation with the content David stole a candy bar, yet David did not steal a candy bar, then Angie has a false belief. The package of RTM, LOTH, and CSMP was inspired on one hand by the development of modern logic, and in particular by the formalization of logical inference (that is, the development of rules of inference that are sensitive to syntax but that respect semantic constraints), and on the other hand by Alan Turing's work on mechanical computation. These two developments led to the creation of the modern digital computer, and Turing argued that if the conversational behavior (via teletype) of such a machine was indistinguishable from that of a human being, then that machine would be a thinking machine. CTM is the idea that the mind is a computer, and that thinking is a computational process.
The importance of CTM is twofold. First, the idea that thinking is a computational process involving linguistically structured representations is of fundamental importance to cognitive science.
It is among the origins of work in artificial intelligence, and though there has since been much debate about whether the digital computer is the best model for the brain (see below), many researchers still presume linguistic representation to be a central component of thought. Second, CTM offers an account of how a physical object (in particular, the brain) can produce rational thought and behavior. The answer is that it can do so by implementing rational processes as causal processes. This answer provides a response to what some philosophers, most famously Descartes, have believed: that explaining human rationality demands positing a form of existence beyond the physical.
It therefore stands as a major development in the philosophy of mind. Explaining rationality in purely physical terms is one task for a naturalized theory of mind.
A second task is to explain intentionality, the meaningfulness or aboutness of mental states, and CTM lends itself to a physicalist account of intentionality as well. There are two general strategies here. Internalist accounts explain meaning without making mention of any objects or features external to the subject. For example, conceptual role theories (see for instance Loar) explain the meaning of a mental representation in terms of the relations it bears to other representations in the system. Externalist accounts explicitly tie the meaning of mental representations to the environment of the thinker.
For example, causal theories (see for instance Dretske) explain meaning in terms of causal regularities between environmental features and mental representations. Such theories face a well-known difficulty. On a dark evening, someone might easily mistake a cow for a horse; in other words, a cow might cause the tokening of a mental representation that means horse. But if, as causal theories have it, the meaning of a representation is determined by the object or objects that cause it, then the meaning of such a representation is not horse but rather horse or cow, since that type of representation is sometimes caused by horses and sometimes caused by cows.
One response is that cow-caused tokenings of the representation depend on horse-caused tokenings: that is, if the representation were not caused by horses, then it would not sometimes be caused by cows. But this dependence is asymmetric: if the representation were never caused by cows, it would nevertheless still be caused by horses. As all of the above accounts explain meaning in physical terms, the coupling of a successful CTM with a successful version of any of them would yield an entirely physical account of two of the most important general features of the mind: rationality and intentionality.
LOTH, then, is the claim that mental representations possess combinatorial syntax and compositional semantics—that is, that mental representations are sentences in a mental language. This section describes four central arguments for LOTH. Fodor argued that LOTH was presupposed by all plausible psychological models. Fodor and Pylyshyn argue that thinking has the properties of productivity, systematicity, and inferential coherence, and that the best explanation for such properties is a linguistically structured representational system. In short, the argument was that the only game in town for explaining rational behavior presupposed internal representations with a linguistic structure. The development of connectionist networks (computational systems that do not presuppose representations with a linguistic format) therefore poses a serious challenge to this argument. In the 1980s, the idea that intelligent behavior could be explained by appeal to connectionist networks grew in popularity. Fodor and Pylyshyn argued on empirical grounds that such an explanation could not work, and thus that even though linguistic computation was no longer the only game in town, it was still the only plausible explanation of rational behavior.
Their argument rested on claiming that thought is productive, systematic, and inferentially coherent. Productivity is the property a system of representations has if it is capable, in principle, of producing an infinite number of distinct representations. For example, sentential logic typically allows an infinite number of sentence letters (A, B, C, and so on), and so can produce an infinite number of distinct representations; the system is productive. By contrast, a system with only a fixed, finite stock of representations that cannot be combined into compounds can produce only finitely many distinct representations; such a system is not productive. Productivity can, however, be achieved in systems with a finite number of atomic representations, so long as those representations may be combined to form compound representations, with no limit on the length of the compounds. That is, productivity can be achieved with finite means by employing both combinatorial syntax and compositional semantics. Fodor and Pylyshyn argue that mental representation is productive, and that the best explanation for its being so is that it is couched in a system possessing combinatorial syntax and compositional semantics.
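A small sketch (the atoms and the single connective are assumptions chosen for illustration) shows how finite means yield productivity: two atoms and one connective suffice to enumerate representations without bound, because compounds can themselves enter into further compounds.

```python
# Illustrative sketch: a finite stock of atomic representations plus
# unbounded combination yields a productive system; there is no upper
# bound on the number of distinct representations it can generate.

from itertools import islice

def sentences():
    """Lazily enumerate ever-longer distinct representations."""
    layer = ["A", "B"]                 # the entire atomic stock
    while True:
        yield from layer
        # each existing representation can enter a new, longer compound
        layer = [("and", rep, "A") for rep in layer]

first_six = list(islice(sentences(), 6))
print(first_six[:3])  # ['A', 'B', ('and', 'A', 'A')]
```

The generator never repeats itself, since each pass produces strictly longer compounds; only finite resources (time, memory) prevent it from yielding infinitely many representations, mirroring the in-principle character of productivity.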
They first claim that natural languages are productive.
For example, English possesses only a finite number of words, but because there is no upper bound on the length of sentences, there is no upper bound on the number of unique sentences that can be formed. More specifically, they argue that the capacity for sentence construction of a competent speaker is productive—that is, competent speakers are able to create an infinite number of unique sentences.
Of course, this is a capacity in principle: no individual speaker will ever actually construct more than a finite number of unique sentences. Nevertheless, Fodor and Pylyshyn argue that this limitation is merely a result of having finite resources, such as time, rather than a limit on the underlying capacity. The argument proceeds by noting that, just as competent speakers of a language can compose an infinite number of unique sentences, they can also understand an infinite number of unique sentences.