Why human memory is not a bit like a computer’s

(This is a cross-post of a 3 Quarks Daily article I wrote last year.)

A few months ago I attended a rather peculiar seminar at MIT’s Department of Brain and Cognitive Sciences. A neuroscientist colleague of mine named Robert Ajemian had invited an unusual speaker: a man named Jim Karol, who was billed as having the world’s best memory. According to his website, his abilities include “knowing over 80,000 zip codes, thousands of digits of Pi, the Scrabble dictionary, sports almanacs, MEDICAL journals, and thousands of other facts.” He has memorized the day of the week for every date stretching back to 1 AD. And his abilities are not simply a matter of superhuman willingness to spend hours memorizing lists. He can add new items to his memory rapidly, on the fly. After a quick look at a deck of cards, he can recall perfectly the order in which they were shuffled. I witnessed him do this last ‘trick’, as well as a few others, so I can testify that his abilities are truly extraordinary [1].

Such Stupendous Feats of Skill might seem more suited to a carnival or a variety show than to a university seminar room. The sheer strangeness of the event definitely aroused curiosity — the auditorium couldn’t contain everyone, so several of us had to watch on a TV screen in the overflow area. But along with the interest there was also a palpable sense of bemusement, bordering on derision. I could hear the murmurings: why would anyone need to memorize trivia in the era of cheap terabytes? And what could sober scientists learn from circus tricks, however astounding?

The truth is that Ajemian’s goal in inviting Karol was more about raising awareness than presenting data or theory. More specifically, he wants more brain scientists to think about how odd human memory seems if we compare it with computer memory. Decades of experience with electronics have led many people to think of memory as a matter of placing digital files in memory slots. It then seems natural to wonder about storage and deletion, capacity in bytes, and whether we can download information into the brain ‘directly’, as in the Matrix movies.

The computer metaphor may seem cutting edge, but its essence may be as old as civilization — it is the latest iteration of the “inscription metaphor”. Plato, for example, described memory in terms of impressions on wax tablets — the hard drives of the era. According to the inscription metaphor, when we remember something, we etch a representation of it in a physical medium — like carvings on rock or ink on paper. Each memory is then understood as a discrete entity with a specific location in space. In the case of human beings, this space is between the ears. Some memory researchers even use the term “engram” to refer to the neural counterpart of a specific memory, further reifying the engraving metaphor.

Before getting to the problems with the inscription metaphor, I should say that at a sufficiently fuzzy level of abstraction, it is not entirely useless. There is plenty of neuroscientific evidence that memories are tied to particular brain regions; damage to these regions can weaken or eliminate specific memories. So the general concepts of physical storage and localizability are the least controversial aspects of the inscription metaphor (at least at first glance).

The issue with the inscription metaphor is that it leaves out the aspects of human memory that are arguably the most interesting and mysterious — how we acquire memories and how we evoke them. When we look more closely at how humans form and recall memories, we may even find that the storage and localizability ideas need to be revised.

* * *

For a computer, all strings of bits are equally easy to store, regardless of what they represent. Similarly, a notebook is agnostic to the shapes of the markings made on a page. But human memory is not so impartial. Most people have to put considerable effort into remembering information. Even people who can commit things to memory with very little effort, like Jim Karol, have to intend to remember things in the first place. Effort and intention are not enough though — we routinely forget things we know are important, and at the same time we are unable to force unpleasant memories to wither and fade.

The specific form of the information determines how easily and accurately it will be remembered. A sequence of prose is much harder to commit to memory than a rhythmic, rhyming poem of the same length. Song lyrics often require no effort at all. Such memories may be boosted by the intimate link between music and emotion. Most people have surely noticed that joy, sorrow, pain, anger, disgust, and surprise can increase the vividness and persistence of a memory — though not always its accuracy! But even emotion isn’t the whole story. Some events are so traumatic that they become inaccessible. Complex sequences of events are especially hard to remember correctly, however important or emotionally intense they are, which is why eye-witness testimony is so untrustworthy.

The way humans come to recall memories may be even more mysterious than the ways in which we store them. To fully appreciate this, let’s compare it with recollection via computer or book. Locating an inscribed memory involves a handful of conceptually straightforward operations. To read about an incident in your diary, you have to know the date or the page number of the entry (and be able to physically navigate to the entry). To find a file on a standard computer, you have to know what the filename is and which folder you stored it in. If you know the name of the file but not the location, then you have to use a search process. Searching through an unindexed file system is an instructive experience — it is surprisingly slow compared to a Google search of the entire internet. This is because your computer has to check every file sequentially, whereas a search engine consults an index it has built in advance.
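The contrast between an unindexed scan and an indexed lookup can be sketched in a few lines of Python — a minimal illustration, with invented diary entries standing in for the files:

```python
from collections import defaultdict

# A toy 'diary': entries keyed by date (all content invented for illustration).
documents = {
    "2019-03-14": "met an old friend for coffee near the harbour",
    "2020-07-02": "long conversation about buddhism in a bookstore",
    "2021-11-23": "rainy day rereading an old detective novel",
}

# Unindexed search: every entry must be examined in turn,
# like ctrl-F over a folder of raw files.
def scan(query):
    return [date for date, text in documents.items() if query in text]

# Inverted index: built once up front, so a lookup touches only the
# entries that actually contain the word -- roughly how a search engine
# answers queries without rereading the whole web each time.
index = defaultdict(set)
for date, text in documents.items():
    for word in text.split():
        index[word].add(date)

def lookup(word):
    return sorted(index.get(word, set()))

print(scan("buddhism"))  # ['2020-07-02']
print(lookup("old"))     # ['2019-03-14', '2021-11-23']
```

The scan’s cost grows with the total amount of text; the index lookup’s cost grows only with the number of matches — which is why the unindexed search feels so slow by comparison.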

There are several types of memory recall that current technologies struggle with. Let’s say you want to read a diary entry from a few years ago, but all you have to start with is the knowledge that it involved meeting a quirky old man in a bookstore and having a conversation about Buddhism and Eastern Orthodoxy. You want to recall the name of the book he recommended [2]. You don’t know when this happened, and you don’t recall the name of the man or the bookstore. You have no choice really but to go through all your diary entries, either serially, or in random order. If it’s an electronic diary, you can hit ctrl-F and search through the entries, but you’d need to use your internal thesaurus to cycle through words you might have used in the entry. In other words, human memory is still central to the process.

Things become even trickier if we are searching for information that hasn’t been explicitly encoded in the original file. If you are old enough to remember having a folder full of mp3s on a hard drive, you’ll know that there is no way to search through it for “songs that sound Beatlesque” (unless you’re like me and already have a compilation with this name). Your computer has no idea what “Beatlesque” means, and the mp3 files aren’t tagged with such descriptors (which would have to be human-generated anyhow). A future machine learning app might be able to learn what “Beatlesque” means, and then tag all recorded songs that fit the description, but if the app is doing this using current methods, it will have to base its training on thousands of human-labeled samples. Once again, humans are providing the foundational linkages between memories.

* * *

How exactly does the human brain enable us to perform recollections that are baffling from a technological perspective? Neuroscientists and psychologists haven’t yet been able to help the techies much. Perhaps more worrying is the fact that there is little awareness of the interesting features of human acquisition and recall, even among researchers. If we are ever going to understand human memory, more people need to think about what is distinctive about it. But the miracle of memory passes beneath most people’s notice; these abilities are so commonplace that we rarely pause to marvel at them.

Sometimes the best way to gain perspective on the ordinary is to take a detour through the extraordinary. So maybe we will learn something about our garden-variety memory skills by examining how Jim Karol manages to perform his barely-believable feats of memorization and recall. Karol has a method — he is not a savant, so he wasn’t born with talents that are beyond introspection. In fact he only began developing his skills at the age of 49, when a diagnosis of heart disease prompted him to ride an exercise bike to get fit. Finding the hours of exercise boring, he began testing his memory. He started with playing cards, and eventually moved on to flash cards. He memorized actors’ names, movies, countries, and other bits and bobs of fact.

What he did next was crucial. He used one set of memories as a foundation on which to build more memories. He called it his ‘matrix’, and it consisted of a list of 100 movies. To remember additional items, he would find some kind of associative link between the item and the movie. So if he were asked to recall a list of 100 people, he would “put them in the movies”.

Karol’s method is a version of one of the oldest learning techniques known to humanity: the method of loci, or the ‘memory palace’ technique. You start with a place that is already familiar to you — a room or a building. You then associate objects in the room with particular items you wish to recall. The process of association has been recognized for thousands of years, and has served as the central idea in many theories of human memory, and of behavior in animals more generally. When two items or concepts are experienced in close temporal proximity, a bond typically forms between them.

But Karol’s association method goes beyond mere juxtaposition. The ideal thing to do is come up with a little story or image linking the object with the item — the more strange and surreal the better. In the case of the movie matrix, this might involve coming up with fanciful links between the item and a scene from one of the movies. This seemingly unnecessary information being added to the memory is actually crucial — it is the glue that attaches the target item to the object in the memory palace.

Karol also discovered for himself what memory palace builders have known for millennia: that memory is like a muscle, strengthening with use. So the more he uses his memory palace — now more akin to a memory planet — the more easily he is able to add new memories. Contrast this with a standard hard drive: the more you use it, the more it degrades, until eventually it must be replaced.

* * *

Memory researchers typically divide memory into two types: procedural and declarative. Procedural memory is know-how — learning to ride a bike, perform a dance move, or do a sports maneuver. Declarative memory is know-what — the things we can ‘declare’. It is typically divided into semantic memory (names, dates, facts and figures) and episodic memory (autobiographical incidents). Procedural memories usually require considerable repetition — practice seems to be the only way to ensure that a particular skill becomes second nature. Semantic memories often form without repetition. You can, for example, recall things you only encountered once, such as the plot of a movie, or even an exact line of dialogue. But repetition — both of the experience and of the exercise of expressing it — definitely helps. Episodic memories, by their very nature, cannot involve real repetition — an incident happens only once. These are the memories that receive a major boost from emotional signals. We remember tragedies and triumphs more intensely (but not necessarily more accurately) than mundane incidents. Repeating an account of an incident strengthens the memory, but it also has effects that may be less desirable. It seems as if we gradually replace the original memory with a modified version that aligns more closely with the shape of the story we repeat than with reality. This may be one reason that rarely-accessed memories feel so vivid — they haven’t been tampered with as much as our favorite personal anecdote.

The memory palace technique seems to blur the lines between these neat categories of memory, perhaps incorporating the best features of each. Since practicing the technique makes it easier to use it, there is clearly a procedural element. But the importance of the story-telling ‘glue’ suggests that aspects of episodic memory are also brought to bear, at least in early stages of palace construction. Funny or quirky anecdotes might provide a dash of emotion, creating episodic ‘secret passageways’ between semantic ‘rooms’.

Constructing the memory palace is only the first part of the story, however. How do we navigate through it? Do we have to walk through it serially, from one room to the next? Isn’t that just as laborious as wandering around a library, or finding a file on a computer? How do we come to know the address of a memory?

It seems part of the answer is that the content of the memory serves as its address. When presented with a fragment of a memory, we can reconstruct the rest, because it points us in the direction of the memory. And this seems to be true for all memories, regardless of whether they were acquired by the method of loci or the less elaborate and self-conscious methods the rest of us use. In technical circles this faculty is called content-addressable memory. It borrows the concept of an address from the inscription metaphor. We have versions of content-addressing that work in certain special-purpose computers, and simplistic models of how it might work in the brain. Such techniques can, for example, reconstruct a stored image when presented with just a part of it, or with a distorted version. These models count as progress, but they are at best the tip of the content-addressable iceberg that is human memory.
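One of the simplistic models mentioned above can be sketched concretely. The following is a toy Hopfield-style associative network, in plain Python with made-up patterns — a minimal illustration of content-addressing, not a model of the brain. Patterns are stored as +1/−1 vectors via Hebbian learning, and a damaged cue is “cleaned up” by repeatedly pushing each unit toward the nearest stored pattern:

```python
def train(patterns):
    # Hebbian learning: the weight between units i and j accumulates
    # p[i] * p[j] / n over all stored patterns (no self-connections).
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, cue, steps=10):
    # Synchronous updates: each unit takes the sign of its weighted input,
    # so the state slides toward a stored attractor.
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
]
W = train(stored)

# Flip two entries of the first pattern and let the network repair it:
# a fragment-plus-noise cue retrieves the whole.
cue = list(stored[0])
cue[0], cue[3] = -cue[0], -cue[3]
restored = recall(W, cue)
print(restored == stored[0])  # True
```

The content of the cue is the address: there is no lookup table mapping names to locations, only dynamics that fall into the stored pattern nearest the cue.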

Content-addressable memory might sound a bit counterintuitive when you first encounter it. What if I don’t have any of the content of the memory? Here’s an example. If you are trying to remember the capital of Ethiopia, you might not initially have even a fragment of the answer. Even knowing that the capital starts with ‘A’ doesn’t necessarily help much. So how can content-addressable memory help explain how we recall the answer? The answer involves expanding our notion of what an individual memory is. The memory being reconstructed here is not just the name ‘Addis Ababa’. Rather, it is an ‘associative whole’ that includes the words ‘Ethiopia’, ‘capital’, and ‘Addis Ababa’. So when you consider parts of this whole, you have already found your way to the part of your memory palace where ‘Addis Ababa’ is likely to be located.

The concept of an associative whole suggests that we may never be able to delineate the boundaries of an individual memory. A single experiential ‘element’ can form part of multiple memories. In my own memory, the fragment ‘Addis’ is also accessible if someone asks me for the name of my favorite Ethiopian restaurant in Boston (Addis Red Sea). The memory also crops up if the topic is the South End of Boston (where Addis Red Sea is situated), or passable Indian restaurants (such as Mela, which is right by Addis Red Sea). There are countless associative paths in my memory shanty-town (too unplanned to be a palace, surely?) that lead me to Addis Ababa.

* * *

Very often a particular experience serves as the gateway to a memory pathway you weren’t even aware of. Smells and tastes are particularly good at raising ‘buried’ memories from the dead. One of the most celebrated descriptions of memory comes from Marcel Proust’s In Search of Lost Time. The madeleine episode has become part of the lore of memory science:

“And as soon as I had recognized the taste of the piece of madeleine soaked in her decoction of lime-blossom which my aunt used to give me (although I did not yet know and must long postpone the discovery of why this memory made me so happy) immediately the old grey house upon the street, where her room was, rose up like a stage set to attach itself to the little pavilion opening on to the garden which had been built out behind it for my parents (the isolated segment which until that moment had been all that I could see); and with the house the town, from morning to night and in all weathers, the Square where I used to be sent before lunch, the streets along which I used to run errands, the country roads we took when it was fine.”

From the perspective of content-addressable memory, the taste of the madeleine and lime-blossom is a fragment of a much larger memory. What is striking about content-addressable memory is that it doesn’t lead to some kind of epileptic fit of free-association. Everything is potentially related to everything else, if we are loose enough in our criteria for relatedness. This means that an experience could be part of the “address” of innumerable memories. How does the brain allow us to whittle down the options?

For now we can only speculate about the neuroscience, but the phenomenology of memory gives us clues. Jim Karol notes that forming associative links between his matrix and new items is greatly enhanced by stories or images. The more idiosyncratic these ‘linking ideas’ are, the better the memory is. You can experience this by playing the memory association game. Get a group of friends together in a circle. One friend starts with a random word, and then the next person says a word that is associated with that word in some way. You go in circles a few times — the more the better — and then you reverse the order. You have to remember what you said before. This can be surprisingly difficult. And you’ll always have a friend who comes up with a ridiculous association. Imagine Alice, Bob, and Chandran are sitting in a circle. Alice says “monkey”, Bob says “banana” and then Chandran says “Brexit”. Bob asks Chandran to kindly explain this wild leap, since during the return journey much later, Chandran will say “Brexit” and Bob will have to remember that he has to say “banana”. Chandran explains how he read about a woman in the UK who was planning to vote against Brexit, but on the way to the polling station, she went to a shop and bought a banana. It was a straight banana, which reminded her of a news item involving EU bureaucrats regulating the shape of bananas. She then proceeded to vote in favor of Brexit. This (true!) story will no doubt help Bob (and you, dear reader) form a long-lasting association between Brexit and bananas.

So bananas and Brexit are now content in a unified memory, and each can be used to access the other. There are many places one can go from Brexit (unless you’re British?), and “banana” is not the most likely, so Chandran’s story helps bias things. But Bob’s memory doesn’t just contain “banana” and “Brexit”. It also contains Chandran, Alice, the room they are all sitting in, and the fact that they are playing the memory game. So the overall situation further constrains the list of possible associative pathways. In other words, there is a blurry line between content-addressable memory and context-addressable memory. Context is always intertwined with content.
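The pruning effect of context can be caricatured in a few lines of Python. The associations below are invented for illustration; the point is only that intersecting a content cue with the current context cuts down the fan-out:

```python
# Each cue maps to everything it has been experienced alongside
# (a made-up associative graph for illustration).
associations = {
    "brexit": {"banana", "referendum", "news"},
    "banana": {"brexit", "monkey", "fruit"},
    "monkey": {"banana", "zoo", "memory-game"},
    "memory-game": {"brexit", "banana", "monkey"},
}

def recall(cue, context):
    # Content alone fans out to many candidates; intersecting with the
    # current context prunes the associative pathways.
    return associations.get(cue, set()) & associations.get(context, set())

print(recall("brexit", "memory-game"))  # {'banana'}
```

From “Brexit” alone there are several places to go, but within the context of the memory game only “banana” survives the intersection — which is roughly why Bob can find his way back to the right word.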

This may be one reason why human memory can often be enhanced by adding detail. Additional information isn’t necessarily distracting, or a waste of capacity — it can also narrow down the number of possible memories that are relevant. Moreover, redundant connections can make an associative link more robust. If one pathway is blocked, another will take you to your destination.

* * *

As memories become more complex, it isn’t always clear what their content is. For most human memories, we don’t know what the form of the representation is, so we can’t be sure what specific content is available to the addressing and navigation system. To use the computer metaphor, we don’t know what the coding scheme is. How, for example, are my memories of songs represented? I can use the word “Beatlesque” to hunt for songs that have certain melodic, harmonic or timbral qualities. I can recall songs that are related in terms of era, genre, tempo, or lyrical content. I can also recall a song using a concept I wasn’t even aware of when I first heard it. For example, at some point I became aware of the concept of musical pastiche or parody. I had no idea about such things when I first heard Sgt. Pepper’s Lonely Hearts Club Band as a kid. But once I was introduced to this concept, I could recognize that ‘When I’m Sixty-Four’ has an element of 1920s pastiche. I can conjure up memories of other songs of this sort — specifically the ones referencing pre-war pop music, or more broadly any song that gives an exaggerated nod to the past.

The question is this: what is the form of my memory of an individual song, if I can recognize features of it using criteria that weren’t available to me when I first heard it? It seems as if whenever I learn new ways of thinking about music, these ‘filters’ are retroactively applied to all songs in my mental database. The only alternative seems to be that new aesthetic filters act on memories as they emerge — but that doesn’t really account for how they emerge in the first place, often at the very moment the measuring device is introduced.

Regardless of the mechanistic details, what is striking is how rapid this retroactive filtering process can be. A single image, sound, or phrase becomes a seed crystal around which memory links grow outwards, enabling new pathways between experiences that previously seemed unrelated. Your ability to sort through your own memories and recognize patterns in them can be enhanced by the fortuitous arrival of some idea or experience. Progress during psychotherapy can involve moments of this sort, when an idea seems to cause the puzzle pieces of one’s life to suddenly snap together.

If we recognize that human memory consists not simply of items of experience but also the links between them, then we have to consider the possibility that each new incident or idea, beyond just being housed in an empty room in the memory palace, may also alter the very architecture of the palace, creating new passageways. Mixing metaphors somewhat, ideas and experiences have the potential to restructure the geography of memory space in both gradual and sudden ways.

This mental restructuring is one way to think about how scientists make theoretical breakthroughs. Charles Darwin had a vast amount of knowledge about biology and geology, but it was his reading of Thomas Malthus’s An Essay on the Principle of Population — with its dire image of humans struggling with each other due to overpopulation — that finally enabled him to conceive of natural selection. The web of links between the disparate facts floating around in Darwin’s memory began growing as a result of his encounter with Malthusian competition. Interestingly, Malthus also seems to have been instrumental in the thinking of Alfred Russel Wallace, the co-discoverer of evolution through natural selection.

It’s hard to imagine anyone coming up with such a far-reaching theory if all their scientific knowledge were compartmentalized in a hard drive, with each memory isolated from all others. This points us to a core weakness of the inscription metaphor: it leads us to think of memory as an array of discrete, changeless objects. An inscribed memory is like an antisocial recluse — it doesn’t interact at all with the neighbors in adjacent rooms of the memory palace. It forms no bonds of commonality. Like Dorian Gray, it does not age or mature in any way. It is perfect in its integrity, but also inflexible and sterile. Human memories, by contrast, are like young people on social networks, forming new connections promiscuously, and thereby creating opportunities for self-transformation and new collective phenomena. Of course this malleability means that human memories, like many young people, can also be capricious and unreliable, flaking out at unexpected moments.

Civilization tends to bring the two modes of memory together — static inscription and dynamic association. In the past, the inscriptions were always in danger of fading away — eroded by wind and water or swallowed by the jungle. We typically lament the frailty of human memory, but could there also be some benefit to its blurriness and impermanence? We’d better find out soon, because the tech world may be paving the way towards a future free from forgetting [3], so this mixed blessing may be taken away from us.


Notes

[1] There is a short video of Jim Karol’s MIT visit on YouTube.

[2] Actually I do remember the bookstore: it was Brookline Booksmith. An old man approached me in the used book basement, near the philosophy and religion shelf. He said something like “I’ll bet you don’t have stores like this in Delhi!” I explained that this was not in fact true. We started talking, and I found out that he knew quite a bit about the Malankara Orthodox Church, which is the denomination my family belongs to. He had been in the navy during Vietnam, and worked as a night watchman at the Museum of Fine Arts. The nocturnal silence of the museum apparently helped him with his Buddhist meditation and Christian prayer. The book he mentioned was called The Silent Roots: Orthodox Perspectives on Christian Spirituality. It was written by an acquaintance of his named Father K. M. George, a priest from my home state of Kerala. Fr George was the principal of the main seminary, and I just happened to have been baptized at the church nearby. As it turns out, a few years later this priest became a mentor for a very good friend of mine, who recently enrolled at Harvard Divinity School. Over the years this memory has become embedded in a web of associations.

[3] This is a scenario rendered both disturbing and plausible in an episode of Black Mirror called The Entire History of You.


The Proust-madeleine artwork was found here: full-stop.net