1.3 HUMAN MEMORY
Memory is the second part of our model of the human
as an information-processing system.
There are three types of memory or memory function:
1. sensory buffers,
2. short-term memory or working memory, and
3. long-term memory.
There is some disagreement as
to whether these are three separate systems or different functions of the same
system.
Figure 1.9 A model of the structure of memory
1.3.1 Sensory memory
The sensory memories act as buffers for stimuli
received through the senses. A sensory memory exists for each sensory channel: iconic memory for visual stimuli, echoic memory for aural stimuli and haptic memory for touch. These memories
are constantly overwritten by new
information coming in on these channels.
The existence of echoic memory is evidenced by our ability to
ascertain the direction from which a sound originates. This is due to
information being received by both ears. However, since this information is
received at different times, we must store the stimulus in the meantime. Echoic
memory allows brief ‘play-back’
of information. Have you ever had someone ask you a question when you
are reading? You ask them to repeat the question, only to realize that you know
what was asked after all. This experience, too, is evidence of the existence of
echoic memory.
Information is passed from
sensory memory into short-term memory by attention, thereby filtering the
stimuli to only those which are of interest at a given time. Attention is the
concentration of the mind on one out of a number of competing stimuli or
thoughts. It is clear that we are able to focus our attention selectively,
choosing to attend to one thing rather than another. This is due to the limited
capacity of our sensory and mental processes. If we did not selectively attend
to the stimuli coming into our senses, we would be overloaded. We can choose
which stimuli to attend to, and this choice is governed to an extent by our arousal, our level of interest or need.
This explains the cocktail party phenomenon mentioned earlier: we can attend to
one conversation over the background noise, but we may choose to switch our
attention to a conversation across the room if we hear our name mentioned.
Information received by sensory memories is quickly passed into a more
permanent memory store, or overwritten and lost.
1.3.2 Short-term memory
Short-term memory or working memory acts as a
‘scratch-pad’ for temporary recall of information. It is used to store
information which is only required fleetingly. For example, calculate the
multiplication 35 × 6 in
your head. The chances are that you will have done this calculation in stages,
perhaps 5 × 6 and then 30 × 6 and added the results; or you may have used the
fact that 6 = 2 × 3 and calculated 2 × 35 = 70
followed by 3 × 70. To perform calculations such
as this we need to store the intermediate stages for use later. Or consider
reading. In order to comprehend this sentence you need to hold in your mind the
beginning of the sentence as you read the rest. Both of these tasks use
short-term memory.
Short-term memory can be accessed
rapidly, in the order of 70 ms. However, it also decays rapidly, meaning that
information can only be held there temporarily, in the order of 200 ms.
Short-term memory also has a
limited capacity. There are two basic methods for measuring memory capacity.
The first involves determining the length of a sequence which can be remembered
in order. The second allows items to be freely recalled in any order. Using the
first measure, the average person can remember 7 ± 2 digits. This was established in experiments by
Miller [234]. Try it. Look at the following number sequence:
265397620853
Now write down as much of the sequence as you can
remember. Did you get it all right? If not, how many digits could you remember?
If you remembered between five and nine digits your digit span is average.
Now try the following sequence:
Did you recall that more easily? Here the digits
are grouped or chunked. A
generalization of the 7 ± 2 rule is that we can remember 7
± 2 chunks
of information. Therefore chunking information can increase the short-term
memory capacity. The limited capacity of short-term memory produces a
subconscious desire to create chunks, and so optimize the use of the memory.
The successful formation of a chunk is known as closure. This process can be generalized to account for the desire
to complete or close tasks held in short-term memory. If a subject fails to do
this or is prevented from doing so by interference, the subject is liable to
lose track of what she is doing and make consequent errors.
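The effect of chunking can be sketched in code (a hypothetical illustration; the function name and grouping size are our own, not from the text): grouping the twelve digits of the earlier sequence into chunks of three leaves only four items to hold, comfortably within the 7 ± 2 span.

```python
def chunk(digits, size=3):
    """Split a digit string into fixed-size groups, reducing the
    number of items that must be held in short-term memory."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

sequence = "265397620853"     # 12 separate digits: beyond the 7 +/- 2 span
chunks = chunk(sequence)      # 4 chunks: comfortably within it
print(len(sequence), "items reduced to", len(chunks))
print(chunks)                 # ['265', '397', '620', '853']
```

The same digits are stored either way; only the number of retrievable units changes.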
The sequence of chunks given above also makes use
of pattern abstraction: it is written in the form of a UK telephone number
which makes it easier to remember. We may even recognize the first sets of
digits as the international code for the UK and the dialing code for Leeds –
chunks of information. Patterns can be useful as aids
to memory. For example, most people would have
difficulty remembering the following sequence of chunks:
HEC ATR ANU PTH ETR EET
However, if you notice that by moving the last
character to the first position, you get the statement ‘the cat ran up the
tree’, the sequence is easy to recall.
In experiments where subjects were able to recall
words freely, evidence shows that recall of the last words presented is better
than recall of those in the middle. This is known as the recency effect. However, if the subject is asked to perform another
task between presentation and recall (for example, counting backwards) the
recency effect is eliminated. The recall of the other words is unaffected. This
suggests that short-term memory recall is damaged by interference of other
information. However, the fact that this interference does not affect recall of
earlier items provides some evidence for the existence of separate long-term
and short-term memories. The early items are held in a long-term store which is
unaffected by the recency effect.
Figure 1.10 A more detailed model of short-term memory
1.3.3 Long-term memory
If short-term memory is our working memory or ‘scratch-pad’, long-term
memory is our main resource. Here we store factual information, experiential
knowledge, procedural rules of behavior – in fact, everything that we ‘know’.
It differs from short-term memory in a number of significant ways. First, it
has a huge, if not unlimited, capacity. Secondly, it has a relatively slow
access time of approximately a tenth of a second. Thirdly, forgetting occurs
more slowly in long-term memory, if at all. These distinctions provide further
evidence of a memory structure with several parts.
Long-term memory is intended for the long-term
storage of information. Information is placed there from working memory through
rehearsal. Unlike working memory there is little decay: long-term recall after
minutes is the same as that after hours or days.
Long-term memory structure
There are two types of long-term memory: episodic
memory and semantic memory.
Episodic memory represents our memory of events and experiences in a serial
form. It is from this memory that we can reconstruct the actual events that
took place at a given point in our lives. Semantic memory, on the other hand,
is a structured record of facts, concepts and skills that we have acquired. The
information in semantic memory is derived from that in our episodic memory,
such that we can learn new facts or concepts from our experiences.
Semantic memory is structured in some way to allow access to
information, representation of relationships between pieces of information, and
inference. One model for the way in which semantic memory is structured is as a
network. Items are
associated to each other in classes, and may
inherit attributes from parent classes. This model is known as a semantic network. As an example, our
knowledge about dogs may be stored in a network such as that shown in Figure
1.11.
Specific breed attributes may be stored with each
given breed, yet general dog information is stored at a higher level. This
allows us to generalize about specific cases. For instance, we may not have
been told that the sheepdog Shadow has four legs and a tail, but we can infer
this information from our general knowledge about sheepdogs and dogs in
general. Note also that there are connections within the network which link
into other domains of knowledge, for example cartoon characters. This
illustrates how our knowledge is organized by association.
The viability of semantic networks as a model of
memory organization has been demonstrated by Collins and Quillian [74].
Subjects were asked questions about different properties of related objects and
their reaction times were measured. The types of question asked (taking
examples from our own network) were ‘Can a collie breathe?’, ‘Is a beagle a
hound?’ and ‘Does a hound track?’ In spite of the fact that the answers to such
questions may seem obvious, subjects took longer to answer questions such as
‘Can a collie breathe?’ than ones such as ‘Does a hound track?’ The reason for
this, it is suggested, is that in the former case subjects had to search
further through the memory hierarchy to find the answer, since information is
stored at its most abstract level.
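A semantic network of this kind is easy to sketch in code. The fragment below is a hypothetical reconstruction loosely based on the 'dog' examples in the text, not a copy of Figure 1.11: each node stores only its local attributes and inherits the rest by walking up the is_a links. Counting the links traversed gives a rough analogue of the Collins and Quillian reaction-time result.

```python
# Each node: its parent class (is_a) and its locally stored attributes.
network = {
    "animal":   {"is_a": None,       "attrs": {"breathes": True}},
    "dog":      {"is_a": "animal",   "attrs": {"legs": 4, "has_tail": True}},
    "hound":    {"is_a": "dog",      "attrs": {"tracks": True}},
    "collie":   {"is_a": "dog",      "attrs": {}},
    "sheepdog": {"is_a": "dog",      "attrs": {"herds": True}},
    "Shadow":   {"is_a": "sheepdog", "attrs": {}},
}

def lookup(node, attr):
    """Walk up the is_a hierarchy until the attribute is found.
    Returns (value, number of links traversed)."""
    steps = 0
    while node is not None:
        if attr in network[node]["attrs"]:
            return network[node]["attrs"][attr], steps
        node = network[node]["is_a"]
        steps += 1
    return None, steps

# We were never told that Shadow has four legs, but we can infer it:
print(lookup("Shadow", "legs"))       # (4, 2)
# 'Can a collie breathe?' requires a longer search than 'Does a hound track?'
print(lookup("collie", "breathes")[1] > lookup("hound", "tracks")[1])  # True
```

The inference step (Shadow's legs) and the depth difference between the two questions both fall out of the same traversal.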
A number of other memory structures have been proposed to explain how we represent and store different types of knowledge. Each of these represents a different aspect of knowledge and, as such, the models can be viewed as complementary rather than mutually exclusive. Semantic networks represent the associations and relationships between single items in memory.
However, they do not allow us to model the representation of more complex
objects or events, which are perhaps composed of a number of items or
activities. Structured representations such as frames and scripts
organize information into data structures. Slots
in these structures allow attribute values to be added. Frame slots may contain
default, fixed or variable information. A frame is instantiated when the slots
are filled with appropriate values. Frames and scripts can be linked together
in networks to represent hierarchical structured knowledge.
Returning to the ‘dog’ domain, a frame-based representation of the
knowledge may look something like Figure 1.12. The fixed slots are those for
which the attribute value is set, default slots represent the usual attribute
value, although this may be overridden in particular instantiations (for
example, the Basenji does not bark), and variable slots can be filled with
particular values in a given instance. Slots can also contain procedural
knowledge. Actions or operations can be associated with a slot and performed,
for example, whenever the value of the slot is changed.
Frames extend semantic nets to include structured,
hierarchical information. They represent knowledge items in a way which makes
explicit the relative importance of each piece of information.
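The slot behaviour described above can be sketched as follows (a minimal illustration with invented slot names, not the contents of Figure 1.12): fixed slots always win, default slots apply unless an instantiation overrides them, and variable slots are filled per instance.

```python
class DogFrame:
    """A frame for 'dog' with fixed, default and variable slots."""
    fixed = {"legs": 4}            # set for all instances, cannot be overridden
    defaults = {"barks": True}     # usual value, may be overridden

    def __init__(self, **variable):
        # Instantiation: defaults first, then instance-specific values,
        # then fixed slots, which take precedence over everything.
        self.slots = {**self.defaults, **variable, **self.fixed}

fido = DogFrame(name="Fido", size="large")
basenji = DogFrame(name="Basenji", barks=False)   # default overridden
print(fido.slots["barks"], basenji.slots["barks"])  # True False
print(basenji.slots["legs"])                        # 4 (fixed slot inherited)
```

The Basenji instance shows a default being overridden, as in the text, while the fixed slot survives any instantiation.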
Scripts attempt to model the representation of
stereotypical knowledge about situations. Consider the following sentence:
John took his dog to the surgery. After seeing the vet, he left.
From our knowledge of the activities of dog owners
and vets, we may fill in a substantial amount of detail. The animal was ill.
The vet examined and treated the animal. John paid for the treatment before
leaving. We are less likely to assume the alternative reading of the sentence,
that John took an instant dislike to the vet on sight and did not stay long
enough to talk to him!
A script represents this default or stereotypical
information, allowing us to interpret partial descriptions or cues fully. A
script comprises a number of elements, which, like slots, can be filled with
appropriate information:
Entry conditions: conditions that must be satisfied for the script to be activated.
Result: conditions that will be true after the script is terminated.
Props: objects involved in the events described in the script.
Roles: actions performed by particular participants.
Scenes: the sequences of events that occur.
Tracks: a variation on the general pattern representing an alternative scenario.
An example script for going to the vet is shown in Figure 1.13.
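The script elements listed above can be sketched as a simple data structure. The particular values below are invented for illustration, not copied from Figure 1.13; the point is that a script is activated only when its entry conditions hold, and its scenes and result then fill in the unstated detail.

```python
# A hypothetical 'going to the vet' script, one field per element.
vet_script = {
    "entry_conditions": ["dog is ill", "owner has money"],
    "result": ["dog is better", "owner has less money"],
    "props": ["examination table", "medicine"],
    "roles": ["owner brings dog", "vet examines dog", "owner pays"],
    "scenes": ["arrive at surgery", "wait", "examination", "pay and leave"],
    "tracks": ["emergency visit"],
}

def can_activate(script, situation):
    """A script is activated only when all of its entry conditions
    are satisfied by the current situation."""
    return all(cond in situation for cond in script["entry_conditions"])

print(can_activate(vet_script, {"dog is ill", "owner has money"}))  # True
print(can_activate(vet_script, {"owner has money"}))                # False
```

Once activated, the scenes supply the default sequence of events a reader assumes when interpreting the two-sentence story about John.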
A final type of knowledge representation which we hold in memory is the representation of procedural knowledge, our knowledge of how to do something. A common model for this is the production system. Condition–action rules are stored in long-term memory. Information coming into short-term memory can match a condition in one of these rules and result in the action being executed. For example, a pair of production rules might be
IF dog is wagging tail
THEN pat dog

IF dog is growling
THEN run away
If we then meet a growling dog, the condition in
the second rule is matched, and we respond by turning tail and running. (Not to
be recommended by the way!)
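A minimal production system along these lines can be sketched directly (the rule contents mirror the two rules in the text; the function names are our own): long-term memory holds the condition–action pairs, and the contents of short-term memory trigger whichever rules match.

```python
# Condition-action rules held in 'long-term memory'.
rules = [
    ("dog is wagging tail", "pat dog"),
    ("dog is growling", "run away"),
]

def respond(short_term_memory):
    """Fire every rule whose condition matches an item currently
    in short-term memory, returning the resulting actions."""
    return [action for condition, action in rules
            if condition in short_term_memory]

print(respond({"dog is growling"}))      # ['run away']
print(respond({"dog is wagging tail"}))  # ['pat dog']
```

Meeting a growling dog places that fact in short-term memory, the second rule's condition matches, and the action fires.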
Long-term memory processes
So much for the structure of memory, but what about
the processes which it uses? There are three main activities related to
long-term memory: storage or remembering of information, forgetting and information
retrieval. We shall consider each of these in turn.
First, how does information get into long-term
memory and how can we improve this process? Information from short-term memory
is stored in long-term memory by rehearsal. The repeated exposure to a stimulus
or the rehearsal of a piece of information transfers it into long-term memory.
This process can be optimized in a number of ways.
Ebbinghaus performed numerous experiments on memory, using himself as a subject
[117]. In these experiments he tested his ability to learn and repeat nonsense
syllables, comparing his recall minutes, hours and days after the learning
process. He discovered that the amount learned was directly proportional to the
amount of time spent learning. This is known as the total time hypothesis. However, experiments by Baddeley and others
suggest that learning time is most effective if it is distributed over time
[22]. For example, in an experiment in which Post Office workers were taught to
type, those whose training period was divided into weekly sessions of one hour
performed better than those who spent two or four hours a week learning
(although the former obviously took more weeks to complete their training).
This is known as the distribution of
practice effect.
However, repetition is not enough to learn
information well. If information is not meaningful it is more difficult to
remember. This is illustrated by the fact that it is more difficult to remember
a set of words representing concepts than a set of words representing objects.
Try it. First try to remember the words in list A and test yourself.
List A:
Faith Age Cold Tenet Quiet Logic Idea Value Past Large
Now try list B.
List B:
Boat Tree Cat Child Rug Plate Church Gun Flame Head
The second list was probably easier to remember
than the first since you could visualize the objects in the second list.
Sentences are easier still to memorize. Bartlett
performed experiments on remembering meaningful information (as opposed to
meaningless such as Ebbinghaus used) [28]. In one such experiment he got
subjects to learn a story about an unfamiliar culture and then retell it. He
found that subjects would retell the story replacing unfamiliar words and
concepts with words which were meaningful to them. Stories were effectively
translated into the subject’s own culture. This is related to the semantic
structuring of long-term memory: if information is meaningful and familiar, it
can be related to existing structures and more easily incorporated into memory.
So if structure, familiarity and
concreteness help us in learning information, what causes us to lose this
information, to forget? There are two main theories of forgetting: decay and interference. The first theory suggests that the information held
in long-term memory may eventually be forgotten. Ebbinghaus concluded from his
experiments with nonsense syllables that information in memory decayed logarithmically,
that is that it was lost rapidly to begin with, and then more slowly. Jost’s law, which follows from this,
states that if two memory traces are equally strong at a given time the older
one will be more durable.
The second theory is that information is lost from
memory through interference. If we acquire new information it causes the loss
of old information. This is termed retroactive
interference. A common example of this is the fact that if you change
telephone numbers, learning your new number makes it more difficult to
remember your old number. This is because the new association masks the old.
However, sometimes the old memory trace breaks through and interferes with new
information. This is called proactive
inhibition. An example of this is when you find yourself driving to your
old house rather than your new one.
Forgetting is also affected by emotional factors.
In experiments, subjects given emotive words and non-emotive words found the
former harder to remember in the short term but easier in the long term.
Indeed, this observation tallies with our experience of selective memory. We
tend to remember positive information rather than negative (hence nostalgia for
the ‘good old days’), and highly emotive events rather than mundane.
It is debatable whether we ever
actually forget anything or whether it just becomes increasingly difficult to
access certain items from memory. This question is in some ways moot since it
is impossible to prove that we do
forget: appearing to have forgotten something may just be caused by not being
able to retrieve it! However, there is evidence to suggest that we may not lose
information completely from long-term memory. First, proactive inhibition
demonstrates the recovery of old information even after it has been ‘lost’ by
interference. Secondly, there is the ‘tip of the tongue’ experience, which
indicates that some information is present but cannot be satisfactorily
accessed. Thirdly, information may not be recalled but may be recognized, or
may be recalled only with prompting.
This leads us to the third process of memory:
information retrieval. Here we need to distinguish between two types of
information retrieval, recall and recognition. In recall the information is
reproduced from memory. In recognition, the presentation of the information
provides the knowledge that the information has been seen before. Recognition is
the less complex cognitive activity since the information is provided as a cue.
However, recall can be assisted by the provision of
retrieval cues, which enable the subject quickly to access the information in
memory. One such cue is the use of categories. In an experiment subjects were
asked to recall lists of words, some of which were organized into categories
and some of which were randomly organized. The words that were related to a
category were easier to recall than the others [38]. Recall is even more
successful if subjects are allowed to categorize their own lists of words
during learning. For example, consider the following list of words:
child red plane dog friend blood cold tree big angry
Now make up a story that links the words using as vivid
imagery as possible. Now try to recall as many of the words as you can. Did you
find this easier than the previous experiment where the words were unrelated?
The use of vivid imagery is a common cue to help
people remember information. It is known that people often visualize a scene
that is described to them. They can then answer questions based on their
visualization. Indeed, subjects given a description of a scene often embellish
it with additional information. Consider the following description and imagine
the scene:
The engines roared above the noise of the crowd. Even in the blistering
heat people rose to their feet and waved their hands in excitement. The flag
fell and they were off. Within seconds the car had pulled away from the pack
and was careering round the bend at a desperate pace. Its wheels momentarily
left the ground as it cornered. Coming down the straight the sun glinted on its
shimmering paint. The driver gripped the wheel with fierce concentration. Sweat
lay in fine drops on his brow.
Without looking back to the passage, what color is
the car?
If you could answer that question you have
visualized the scene, including the car’s color. In fact, the color of the car
is not mentioned in the description at all.