Trying To Think
Wednesday, June 16, 2004
 
Main Essay - Theories of Memory
Memory is not a simple phenomenon, and there are two traditional divisions: procedural memory (embodied skills and habits, which can be referred to as “knowing how”) and propositional memory (or “semantic memory” - memory for facts, which can be referred to as “knowing that”). A third type of memory, often called episodic memory, consists of the recollection of episodes in one’s personal life. Episodic memory is either considered a third form of memory, or it is grouped with propositional memory and called “declarative memory”, since both kinds of memory are meant to represent the world (Sutton, 2004, pg2). This grouping of memory into two or three types is appealing, but not final or uncontroversial: Eichenbaum and Cohen (2001) point out that "there is at this time no consensus on just how many memory systems there are or on how to categorize them according to cognitive dimensions" (pg13). Conner and Carr (1982) add that “if we look at the very varied forms that our memories take, it is not easy to draw any hard and fast lines between them” (pg206). They go on to argue that memory can be thought of as a continuum of related phenomena, with episodic memories containing the most individual and perceptual detail, and habit memory being “an accumulated compost of experience stripped of their individuating properties and their wealth of perceptual details” (pg218). A variety of phenomena are grouped under the heading of memory, but since all share the function of allowing access to the past, and since there seems to be no clear demarcation between the phenomena, it is tempting to assume that there is a single underlying mechanism. This essay will examine what this mechanism (or mechanisms) could be.

How is it possible for the past to be known to us? One extreme answer is to argue that the past is directly accessible to us. Known as direct realism, this theory seems unintuitive, since the thing that seems to distinguish past events from present ones is that past events are no longer directly accessible. However, proponents of direct realism argue that it is the most straightforward way of interpreting our claims about memory. For example, Laird (1920) asserts that “memory does not mean the existence of present representatives of past things. It is the mind’s awareness of past things themselves … memories can be explained by the hypothesis of direct acquaintance with the past without further ado” (pg56-7). This, of course, is not an explanation at all, as Laird himself later admits: “It is plainly impossible to explain the fact of memory. Memory is possible, and that is all we need to know” (pg59). While it is a straightforward way of interpreting our claims about memory, Laird’s version of direct realism does not advance any explanation of memory. Our goal is to explain the phenomena, not simply describe them, as Sutton (2003) asserts: “The genuine phenomenology of ‘direct’ access to the past … cannot be deemed primitive and inexplicable” (pg7).

In contrast to direct realism, indirect realism claims that it is a representation that we observe, and not reality itself. In the case of memory, it is the observation of this representation that creates the experience of remembering the past. The subject is therefore removed from the past, cut off “behind a veil of memory ideas” (Sutton, 2003, pg3), so that the past itself is never observed directly. Aristotle (1973) argued that our memory consists of images or pictures – “a copy or a souvenir” (pg107) – that represent our experience. He uses the example of a picture, which can be considered simply as a picture, but fulfils its intended function when it is taken to stand in for the thing that it naturally represents. The picture represents this thing simply because it resembles it. Locke (1999) continues this theory: in memory (and perception in general), there is a “double awareness” of the internal object (the “idea”, which is perceived) and the material thing, which is seen (pg599). Aristotle claimed that images were essential to memory, and traditionally the role of imagery continued to be thought of as not only central, but essential to memory.

One particular problem for the image theory of memory is that of distinguishing the images of memory from those of imagination or perception. Audi (1998) says that memory images “might even be sense-data if they are vivid enough” but usually “my memorial images … might be conceived as a kind of residue of perception” (pg60). Given this great similarity, how is it that we reliably tell them apart? Two historical replies come from Hume and Russell (both quoted in Dancy, 1985, pg186). Hume’s attempt to answer this (that memory images are more vivid, forceful and lively than those of imagination), and Russell’s (that there is a notion of pastness, or familiarity, associated with memory), describe the fact that we distinguish memory from imagination, but do not provide any detail of how this is done.

The image theory provides a good description of episodic memory, for remembering our past experiences is reasonably like accessing a perceptual image of them. However, apart from the problem of distinguishing memories from perceptions or imaginings, it is not clear how the image is created (and why some experiences are memorable, and others not), how it is accessed at the appropriate time, and why the perceptual image differs in some ways from the original perception (for example, we usually remember our experiences from an outsider’s point of view; Jaynes, 1976, pg29).

The centrality of images is also open to dispute. Although images play at least a phenomenological role in many of our memories, this is not always the case, and some people experience little imagery for any memories (Conner and Carr, 1982, pg210). Audi (1998) claims that “remembering an event surely does not require acquaintance with an image of it”, and he goes on to argue that “I might remember what color your sweater was even if I cannot bring the color itself to mind” (pg61). While a memory may, at times, seem to consist simply in the recollection of an image, “if memory can work equally well without them their role is clearly an inessential one” (Conner and Carr, 1982, pg210). Susan Engel argues that "one creates the memory at the moment one needs it, rather than merely pulling out an intact item, image, or story" (quoted in Sutton, 2004, pg9). It is the components that produce such images that are the basis of memory, not the phenomenological image that is produced. Additionally, consideration of non-episodic memory argues against the importance of images. It is difficult to see how images could play any role in procedural memory, or in the semantic memory of propositions. The image theory can only be applied to episodic memory, and has little explanatory power even there.

The trace theory maintains that, instead of being mediated by images, the past is preserved by neurological traces. There are versions of trace theory that see these traces as local and distinct. For example, Robert Hooke thought memories are "in themselves distinct; and therefore that not two of them can be in the same space, but that they are actually different and separate one from another" (quoted in Sutton, 2004, pg5). However, modern trace theories more usually refer to dynamic, distributed systems. Recall of an event is not seen as finding the correct item that represents that event; instead “occurrent remembering is the temporary activation of a particular pattern or vector across the units of a neural network” (Sutton, 2004, pg3).
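This idea of a distributed trace can be made concrete with a toy sketch. The following is purely illustrative (a minimal Hopfield-style associative network, an assumption of mine rather than anything described in Sutton's article): the "trace" is nothing but a set of connection weights, and "remembering" is the whole network settling back into a stored pattern when cued with a corrupted version of it.

```python
# Illustrative sketch only: a Hopfield-style network. The stored "memory"
# is not an item in a location but a pattern of weights; recall is the
# temporary reactivation of a pattern across the whole network.

def store(patterns):
    """Build Hebbian weights from a list of +/-1 pattern lists."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=10):
    """Repeatedly update every unit until the state settles."""
    state = list(cue)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1
                 for i in range(len(state))]
    return state

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
w = store([pattern])
cue = list(pattern)
cue[0], cue[1] = -cue[0], -cue[1]   # corrupt part of the cue
print(recall(w, cue) == pattern)    # the degraded cue settles into the trace
```

Note how this fits the quoted view: there is no stored copy to be "found", only weights that dispose the system to reconstruct the pattern when suitably cued.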

The mechanism of procedural memory in simple organisms has been described in these terms (for example, Eric Kandel’s work in the mid-1970s on the Californian sea snail Aplysia, described in Spitzer, 1999, pg42). If we are willing to extend this neurological basis to all types of memory, we will need to explain how the same mechanism can produce the apparently principled differences between the types of memory, i.e. their obvious phenomenological differences. Such differences can be accommodated in the theory in two ways. First, different types of memory can be mediated by different neurological structures. Eichenbaum and Cohen (2001) have demonstrated that "declarative memory supports a relational representation ... conversely, non-declarative forms of memory, such as procedural memory, involve individual representations ... such memories are isolated in that they are encoded only with the brain modules in which perceptual or motor processing is engaged during learning" (pg54). Additionally, there are two general ways of encoding memory: bias, or modulating the response to a stimulus (briefly for working memory, or longer term in cortical maps of stimuli), and the ability to sustain or reactivate a response in the absence of the stimulus (pg133). The second way to accommodate the differences between types of memory is in the mechanism that actualises the trace, and so constructs the memory. The same basic information about a past event may be recalled in the context of an autobiographical recollection, or as a proposition, if it is encoded in a way that is appropriate to both forms of recollection.

It is an important aspect of modern trace theory that traces (whatever they may be) are "merely potential contributors to recollection", providing one kind of continuity between experience and remembering; traces are invoked merely as one relevant causal/explanatory factor (Sutton, 2004, pg6). That is, the trace is only that part of the mechanism that provides the causal link to the past, and should not be identified as the whole system of memory and recollection. Laird (1920) argues that “the mere fact that the brain endures and retains traces of former stimulation does not explain memory” (pg59), and it is the case that the mechanisms of recall are an important aspect of memory. Modern trace theory accepts this point and acknowledges that "the engram (the stored fragments of an episode) and the memory … are not the same thing" (Schacter, quoted in Sutton, 2004, pg6).

The major problem that faces a trace theory is to explain how the traces represent the events which caused them. This problem of representation does not affect the image theory, for the image was simply a copy, or residue, of the perceptual event (although Dennett, 1997, pg69, sees this view as suffering from a circular definition of resemblance). Martin and Deutscher argue that an analysis of remembering should include the requirement that (in cases of genuine remembering) "the state or set of states produced by the past experience must constitute a structural analogue of the thing remembered" (quoted in Sutton, 2004, pg7). Some structural relationship must exist between the traces and the events, else they do not represent the events. The traces cannot be a mere copy, or residue, of the perceptual experience (otherwise we would have an image theory of memory), but instead must encode the relevant features in ways that can enable the later recall.

Spitzer (1999, pg83ff) reports on various experiments on rats which demonstrate one possible relationship between the traces and the experiences they represent. A map-like structure is developed in the hippocampus that exhibits a simple resemblance to the surroundings that it models. But this simple structural isomorphism cannot explain the relationship between traces and other events, which do not have a spatial structure. How should the taste of sangria be encoded in a spatial network? What are the relevant features, and how can they be represented? Answering these questions is a difficulty for trace theory. In fact, the problem is greater than simply identifying a possible mapping, for the structures involved “need not remain the same over time, or might not always involve identifiable determinate forms over time” (Sutton, 2004, pg8). Indeed, for dynamic versions of trace theory, the traces cannot remain the same, for they must “live with our interests and with them they change” (Bartlett, quoted in Sutton, 2004, pg8), yet they must continue to represent, at least at some level, the same thing.

Eichenbaum and Cohen (2001) argue that “memory should be conceived as being intimately intertwined with information processing in the cortex, indeed so much so that the ‘memory’ and ‘processing’ are inherently indistinguishable … information processing and memory combine to constitute the structure of our knowledge about the world” (pg133). Memory consists in nothing more than the fact that our cognitive systems change as a result of the information they process, and those changes affect later processing. A memory trace is a change that can be used as the basis of an occurrent recall. Andy Clark (2001) speaks of memory traces as functioning as

“internal stand-ins for potentially absent, abstract, or non-existent states of affairs. A ‘stand-in’, in this strong sense is an item designed not just to carry information about some state of affairs … but to allow the system to key its behaviour to specific states of affairs even in the absence of direct physical connection” (pg 129)

A memory trace should not be considered as a simple, passive carrier of information about the past. Instead, it is a change to a system (such as consciousness or occurrent semantic knowledge) that affects behaviour just as if the system were able to gain direct access to those aspects of the past. A full description of how the trace represents the event is therefore not to be given as a set of mapping rules, but is instead one aspect of a description of the behaviour of the occurrent system. A full description of the occurrent system is required to understand the function and meaning of the representation. This can be quite involved; for example, Sutton (2003) points out that autobiographical memory involves “the internalisation of cultural schemes”, which provide the appropriate scaffolding on top of “flexible internal processes” (pg5). An explanation of autobiographical memory would therefore span personal, subpersonal, and social levels of explanation. The problem here is the same as the general problem of the mental representation of meaning. Dennett (1997) argues that a representation “means what it does because of its particular position in the ongoing economy of your brain’s internal activities and their role in governing your body’s complex activities” (pg70). This is not to say that structural resemblances cannot be found (for example, in the map-like structures of the hippocampus), but there is no need to insist that they must be found at the neurological level. Sutton (2004) argues that “the structures which underpin retention … might not always involve identifiable determinate forms over time” (pg8).

Direct realism and the image theory are both appealing descriptions of the phenomena of memory. However, to actually explain memory, we need a theory that details how we store and then access the events of the past, so as to produce these phenomena. Trace theory is such a theory: it appeals to the neurological changes left by past events, and describes how these may be used to affect current behaviour and enable us to construct our memories of past events.


Word count: 2487

References

Aristotle, “De sensu and De memoria”, translation by G. R. T. Ross, New York : Arno Press, 1973.

Audi, Robert. “Epistemology: a contemporary introduction to the theory of knowledge” Routledge: New York, 1998

Clark, Andy, “Reasons, Robots and the Extended Mind”, Mind and Language, Vol 16, No 2, April 2001, pp 121-145, Blackwell Publishers: Oxford

Conner, D.J. and Carr, Brian (1982), “Memory”, Ch 5 in “Introduction to the Theory of Knowledge”, reprinted in “Knowledge and Reality: Selected Readings”, Sydney: Macquarie University, 2004

Dancy, J, “Introduction to Contemporary Epistemology”, Oxford: Blackwell, 1985

Dennett, Daniel C , “Kinds of minds : towards an understanding of consciousness”, London : Phoenix, 1997

Eichenbaum, Howard and Cohen, Neal J., “From Conditioning to Conscious Recollection: Memory Systems of the Brain”, Oxford: Oxford University Press, 2001

Jaynes, Julian, “The Origin of Consciousness in the Breakdown of the Bicameral Mind”, Houghton Mifflin: New York, 1976

Laird, John, “A Study in Realism”, Cambridge 1920

Locke, “An Essay Concerning Human Understanding”, in Readings in Epistemology, compiled by Jack S. Crumley II, Mountain View, Calif. : Mayfield Pub. Co., 1999.

Spitzer, Manfred, “The Mind Within the Net: Models of Learning, Thinking, and Acting”, Cambridge, Mass: The MIT Press, 1999

Sutton, John, “Memory: Philosophical Issues”, in the Encyclopedia of Cognitive Science, ed. Nadel, 2003

Sutton, John, “Memory”, The Stanford Encyclopedia of Philosophy (Summer 2004 Edition), Edward N. Zalta (ed.), forthcoming


 
Second Short Essay - Coherence Theory of Justification
The coherence theory of justification, or “epistemic coherentism”, claims that “all justification of beliefs depends on coherence within a system of beliefs” (Moser et al, 1998, pg82). It is usefully contrasted with foundationalism, in which “the direction of justification is all one-way, and … there are some comparatively fixed points in the structure, the basic beliefs” (Dancy, 1985, pg110); i.e. we justify any given belief by referring to other, more basic beliefs. The coherence theory of justification does without these basic beliefs, and justifies beliefs by the coherence of the belief-set they create. This belief-set is coherent “to the extent that the members are mutually explanatory and consistent” (Dancy, 1985, pg112).

One objection to epistemic coherentism is that it does not seem compatible with empiricism – “the view that the evidence of the senses … is a sort of evidence appropriate to genuine knowledge” (Moser et al, 1998, pg188). If we are empiricists, then our belief-set needs to be more than simply coherent – our beliefs must also be consistent with our experience. The empiricist objection to epistemic coherentism has two parts: one part is based on whether perceptions can be part of the belief-set; the other is the special status of perceptual beliefs.

The first can be called the “isolation objection” (Moser et al, 1998, pg85). Beliefs are justified by reference to the coherence of the belief-set, but this belief-set traditionally excludes data such as perceptual states, which are non-propositional. To allow for empiricism, the gap between experience and belief must somehow be bridged. Naturally, other models of belief must also bridge this gap. For example, the empirical foundationalist wishes his basic beliefs to be directly related in some way to non-propositional perceptual states. The isolation objection can be raised against any model of belief, and if the required link between perception and belief cannot be established, then it is empirical justification in general that has been undermined, and not only an empirical coherence model of justification.

One way of allowing interaction is to deny that there is a fundamental distinction between belief and experience (following Kant, as quoted in Dancy, 1985). In this case, it would be arbitrary whether we extend the belief-set to include experience, or allow experience to somehow influence the set. In either case, beliefs that are wholly disconnected from experience could not be justified.

The second part to the objection is that beliefs which are grounded in experience ought to have a privileged justification, by virtue of having been caused by experience. There is no place for any special roles within simple coherence, because the nature or source of the belief is irrelevant; justification is provided only by the belief’s effect on the coherence of the set.
One response to this is to propose a weaker version of coherence theory, which allows for differences between beliefs. This theory would distinguish between beliefs with “antecedent security” (the security or justification that a belief brings with it, regardless of coherence) and “subsequent security” (acquired through a belief’s contribution to the coherence of the belief-set). This form of coherency seems simply to be “another name for a form of foundationalism” (Dancy, 1985, pg122), since the beliefs with antecedent security will provide a foundation for those with only subsequent security.
A second response is to acknowledge that sensory beliefs have the following status: we accept them as true so long as nothing counts against them. This acknowledgment seems to fit the demands of empiricism. We can then argue that this is our approach to all beliefs: “any belief will remain until there is some reason to reject it” (Dancy, 1985, pg124). We are naturally credulous, and this credulity is essential for any learning: “For how can a child immediately doubt what it is taught? That could only mean that he was incapable of learning certain language games.” (Wittgenstein, 1999, 283)

Arguably, there ought to be an additional empirical demand on our theory of justification: that it take more evidence to reject a sensory belief than a non-sensory one. Coherence theory can accommodate this demand by adding “stubborn empiricism” as a belief in the belief-set. If our belief-set contains the belief that sensory experience is inherently reliable, only overwhelming incoherence from other beliefs will be sufficient to reject a sensory belief (Dancy, 1985). This seems a better alternative to weak coherency, for it is not only a simpler theory, but also allows for belief-sets that are not empirical (such as delusional or fictional belief-sets).

The coherence theory of justification is, therefore, compatible with empiricism. Perceptions can influence the belief-set, and so ground it in experience, and the special role of perceptual beliefs demanded by empiricism can be handled without resorting to foundationalism.

 
First Short Essay - JTB account of knowledge
The traditional account of knowledge attempts to describe the requirements of propositional knowledge. It states that there are three requirements, all of which must be fulfilled. First, a person claiming to know something is also claiming to believe it; Moser et al (1998) describe belief as “a logically necessary condition for knowing” (pg15). However, mere belief is insufficient for knowledge; the statement must be a true one. In the past, people believed that the earth was flat, but they were wrong if they claimed to know the earth was flat, since it is not (Moser et al, 1998, pg15). In particular, it is obvious that our beliefs can be mistaken, and having truth as a prerequisite for knowledge allows for this fact (Moser et al, 1998, pg74-75). Additionally, lucky guesses, even if believed, do not count as genuine knowledge – the true belief must have “supporting reasons” (Moser et al, 1998, pg15). This justification is the third requirement of knowledge, and completes the justified true belief (JTB) definition. In the words of Moser et al, “If you have good reasons in support of the truth of your belief, and your belief is true and is based on good reasons, then you have knowledge, according to the traditional analysis” (Moser et al, 1998, pg16).

We will set aside a priori statements as special cases of knowledge, and instead focus on a posteriori statements which make claims about the external world, such as “The earth is not flat” and “London is the capital of England”. In particular, we will focus on “Gettier-style counterexamples”. The following is based on one of the counterexamples in Gettier, 2004:

Smith and Jones are both applying for a job. Smith believes, and has evidence for, two facts: (a) Jones will get the job, and (b) Jones has 10 coins in his pocket. Smith concludes, and has a justified belief, that (c) the person who gets the job will have 10 coins in their pocket. However, it turns out that Smith gets the job; since Smith happens to have 10 coins in his pocket, (c) remains true. Although (c) is an example of justified true belief, it is not intuitive to say that Smith knew (c).

In this example, and others like it, belief and truth are treated as simple, definite and atomic. On analysis, few of our beliefs are absolute, and many propositions are too complex for a simple truth value. However, restating the problem with a more complex analysis of Smith’s belief-states does not seem to remove the quandary. Also, the treatment of truth, though naïve, seems sufficient for the simple propositions that the example uses. Instead, Gettier-style counterexamples highlight that we need to be more specific about what we mean by "justification". They show that a justified belief can be true, and yet not count as knowledge, when the justification, though seemingly overwhelming, is only coincidentally related to the truth. We need to add a further condition to the JTB analysis, which clarifies the relationship between justification and the truth.

Two such conditions have been proposed (Moser et al, 1998, pg95-98):
1. A causal relationship. We naturally expect that our justification will have something to do with the relevant facts of the world; specifically, we assume that our justification is caused by those facts. Obviously, something will have caused our justification; the issue is whether that cause was related to the proposition in question. In Gettier-style counterexamples, it is not.
2. No defeating facts for our justification. We do not expect there to be additional facts that reveal one or more of our assumptions to be incorrect. Furthermore, we also do not expect there to be further evidence that, while not actually dismissing our previous justification, would add justification to opposing propositions.

We naturally expect objective reality to be consistent. This consistency means that justification that is caused by a fact of the world would not be defeated by other facts of the world. Therefore, given our intuitions about objective reality, we would expect this second condition to be a consequence of the first. The defeating facts in Gettier-style counterexamples can act as our additional criterion of knowledge because they reveal that the justification is unrelated to the truth.

If the facts of the world have not caused our justification, then we do not have knowledge. This lack of a causal relationship means that there are defeating facts. In the Gettier-style counterexamples, we can see this lack of causal relationship, and defeating facts are exhibited that undermine the justification. The counterexamples challenge our intuitions about knowledge, and force us to analyse the concept in greater detail. This analysis reveals a further condition implicit in the concept of knowledge: our justification must be related to the truth.




