The End of Remembering
Once upon a time, there was nothing to do with thoughts except remember them. There was no alphabet to transcribe them in, no paper to set them down upon. Anything that had to be preserved had to be preserved in memory. Any story that would be retold, any idea that would be transmitted, any piece of information that would be conveyed, first had to be remembered.
Today it often seems we remember very little. When I wake up, the first thing I do is check my day planner, which remembers my schedule so that I don’t have to. When I climb into my car, I enter my destination into a GPS device, whose spatial memory supplants my own. When I sit down to work, I hit the play button on a digital voice recorder or open up a notebook that holds the contents of my interviews. I have photographs to store the images I want to remember, books to store knowledge, and now, thanks to Google, I rarely have to remember anything more than the right set of search terms to access humankind’s collective memory. Growing up, in the days when you still had to punch seven buttons, or turn a clunky rotary dial, to make a telephone call, I could recall the numbers of all my close friends and family. Today, I’m not sure if I know more than four phone numbers by heart. And that’s probably more than most. According to a survey conducted in 2007 by a neuropsychologist at Trinity College Dublin, fully a third of Brits under the age of thirty can’t remember even their own home land line number without pulling it up on their handsets. The same survey found that 30 percent of adults can’t remember the birthdays of more than three immediate family members. Our gadgets have eliminated the need to remember such things at all.
Forgotten phone numbers and birthdays represent minor erosions of our everyday memory, but they are part of a much larger story of how we’ve supplanted our own natural memory with a vast superstructure of technological crutches—from the alphabet to the BlackBerry. These technologies of storing information outside our minds have helped make our modern world possible, but they’ve also changed how we think and how we use our brains.
In Plato’s Phaedrus, Socrates describes how the Egyptian god Theuth, inventor of writing, came to Thamus, the king of Egypt, and offered to bestow his wonderful invention upon the Egyptian people. “Here is a branch of learning that will ... improve their memories,” Theuth said to the Egyptian king. “My discovery provides a recipe for both memory and wisdom.” But Thamus was reluctant to accept the gift. “If men learn this, it will implant forgetfulness in their souls,” he told the god. “They will cease to exercise their memory and become forgetful; they will rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminding. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them anything, you will make them seem to know much, while for the most part they will know nothing. And as men filled not with wisdom but with the conceit of wisdom, they will be a burden to their fellow-men.”
Socrates goes on to disparage the idea of passing on his own knowledge through writing, saying it would be “singularly simple-minded to believe that written words can do anything more than remind one of what one already knows.” Writing, for Socrates, could never be anything more than a cue for memory—a way of calling to mind information already in one’s head. Socrates feared that writing would lead the culture down a treacherous path toward intellectual and moral decay, because even while the quantity of knowledge available to people might increase, they themselves would come to resemble empty vessels. I wonder if Socrates would have appreciated the flagrant irony: It’s only because his pupils Plato and Xenophon put his disdain for the written word into written words that we have any knowledge of it today.
Socrates lived in the fifth century B.C., at a time when writing was ascendant in Greece, and his own views were already becoming antiquated. Why was he so put off by the idea of putting pen to paper? Securing memories on the page would seem to be an immensely superior way of retaining knowledge compared to trying to hold it in the brain. The brain is always making mistakes, forgetting, misremembering. Writing is how we overcome those essential biological constraints. It allows our memories to be pulled out of the fallible wetware of the brain and secured on the less fallible page, where they can be made permanent and (one sometimes hopes) disseminated far, wide, and across time. Writing allows ideas to be passed across generations, without fear of the kind of natural mutation that is necessarily a part of oral traditions.
To understand why memory was so important in the world of Socrates, we have to understand something about the evolution of writing, and how different early books were in both form and function. We have to go back to an age before printing, before indexes and tables of contents, before the codex parceled texts into pages and bound them at the edge, before punctuation marks, before lowercase letters, even before there were spaces between words.
Today we write things down precisely so we don’t have to hold them in our memories. But through at least the late Middle Ages, books served not as replacements for memory, but rather as memory aids. As Thomas Aquinas put it, “Things are written down in material books to help the memory.” One read in order to remember, and books were the best available tools for impressing information into the mind. In fact, manuscripts were often copied for no reason other than to help their copier memorize them.
In the time of Socrates, Greek texts were written on long, continuous scrolls—some stretching up to sixty feet—pasted together from sheets of pressed papyrus reeds imported from the Nile Delta. These texts were cumbersome to read, and even more cumbersome to write. It would be tough to invent a less user-friendly way of accessing information. In fact, it wasn’t until about 200 B.C. that the most basic punctuation marks were invented by Aristophanes of Byzantium, the director of the Library of Alexandria, and all they consisted of was a single dot at either the bottom, middle, or top of the line, letting readers know how long to pause between sentences. Instead, words ran together in an unending stream of capital letters known as scriptio continua, broken up by neither spaces nor punctuation. Words that started on one line would spill over onto the next without even a hyphen.
ASYOUCANSEEITSNOTVERYEASYTOREADTE XTWRITTENWITHOUTSPACESORPUNCTUATI ONOFANYKINDOREVENHELPFULLYPOSITIO NEDLINEBREAKSANDYETTHISWASEXACTLY THEFORMOFINSCRIPTIONUSEDINANCIENT GREECE
Unlike the letters in this book, which form words that carry semantic meaning, letters written in scriptio continua functioned more like musical notes. They signified the sounds that were meant to come out of one’s mouth. Reconstituting those sounds into discrete packets of words that could be understood first required hearing them. And just as it is difficult for all but the most gifted musicians to read musical notes without actually singing them, so too was it difficult to read texts written in scriptio continua without speaking them aloud. In fact, we know that well into the Middle Ages, reading was an activity almost always carried out aloud, a kind of performance, and one most often given before an audience. “Lend ears” is a phrase often repeated in medieval texts. When St. Augustine, in the fourth century A.D., observed his teacher St. Ambrose reading to himself without moving his tongue or murmuring, he thought the unusual behavior so noteworthy as to record it in his Confessions. It was probably not until about the ninth century, around the same time that spacing became common and the catalog of punctuation marks grew richer, that the page provided enough information for silent reading to become common.
The difficulties associated with reading such texts meant that there was a very different relationship between reading and memory than the one we know today. Since sight-reading scriptio continua was difficult, reciting a text aloud with fluency required a reader to have a degree of familiarity with it. He—and it was mostly he’s—had to prepare it, punctuate it in his mind, memorize it—in part, if not in full—because turning a string of sounds into meaning was not something you could do easily on the fly. The text had to be learned before it could be performed. After all, the way one punctuated a text written in scriptio continua could make all the difference in the world. As the historian Jocelyn Penny Small pointed out, GODISNOWHERE comes out a lot differently when rendered as GOD IS NOW HERE versus GOD IS NOWHERE.
What’s more, a scroll written in scriptio continua had to be read top to bottom if anything was to be taken from it. A scroll has just a single point of entry, the first word. Because it has to be unwound to be read, and because there are no punctuation marks or paragraphs to break up the text—to say nothing of page numbers, a table of contents, chapter divisions, and an index—it is impossible to find a specific piece of information without scanning the whole thing, head to toe. It is not a text that can be easily consulted—until it is memorized. This is a key point. Ancient texts couldn’t be readily scanned. You couldn’t pull a scroll off the shelf and quickly find a specific excerpt unless you had some baseline familiarity with the entire text. The scroll existed not to hold its contents externally, but rather to help its reader navigate its contents internally.
One of the last places where this tradition of recitation still survives is in the reading of the Torah, an ancient handwritten scroll that can take upward of a year to inscribe. The Torah is written without vowels or punctuation (though it does have spaces, an innovation that came to Hebrew before Greek), which means it’s extremely difficult to sight-read. Though Jews are specifically commanded not to recite the Torah from memory, there’s no way to read a section of the Torah without having invested a lot of time familiarizing yourself with it, as any once-bar-mitzvahed boy can tell you. I can personally vouch for this. On the day I became a man, I was really just a parrot in a yarmulke.
Though years of language use condition us not to notice, scriptio continua has more in common with the way we actually speak than the artificial word divisions on this page. Spoken sentences flow together seamlessly as one long, blurry drawn-out sound. We don’t speak with spaces. Where one word ends and another begins is a relatively arbitrary linguistic convention. If you look at a sonographic chart visualizing the sound waves of someone speaking English, it’s practically impossible to tell where the spaces are, which is one of the reasons why it’s proven so difficult to train computers to recognize speech. Without sophisticated artificial intelligence capable of figuring out context, a computer has no way of telling the difference between “The stuffy nose may dim liquor” and “The stuff he knows made him lick her.”
For a period, Latin scribes actually did try separating words with dots, but in the second century A.D., there was a reversion—a giant and very curious step backward, it would seem—to the old continuous script used by the Greeks. Spaces weren’t seen again in Western writing for another nine hundred years. From our vantage point today, separating words seems like a no-brainer. But the fact that it was tried and rejected says a lot about how people used to read. So, too, does the fact that the ancient Greek word most commonly used to signify “to read” was ánagignósko, which means to “know again,” or “to recollect.” Reading as an act of remembering: From our modern vantage point, could there be a more unfamiliar relationship between reader and text?
Today, when we live amid a deluge of printed words—would you believe that ten billion volumes were printed last year?—it’s hard to imagine what it must have been like to read in the age before Gutenberg, when a book was a rare and costly handwritten object that could take a scribe months of labor to produce. Even as late as the fifteenth century, there might be just several dozen copies of any given text in existence, and those copies were quite probably chained to a desk or lectern in some university library, which, if it contained a hundred other books, would have been considered particularly well stocked. If you were a medieval scholar reading a book, you knew that there was a reasonable likelihood you’d never see that particular text again, and so a high premium was placed on remembering what you read. You couldn’t just pull a book off the shelf to consult it for a quote or an idea. For one thing, modern bookshelves with their rows of outward-facing spines hadn’t even been invented yet. That didn’t happen until sometime around the sixteenth century. For another thing, books still tended to be heavy, hardly portable objects. It was only in the thirteenth century that bookmaking technology advanced to the point that the Bible could be compiled in a single volume rather than a collection of independent books, and yet it still weighed more than ten pounds. And even if you did happen to have a text you needed close at hand, the chances of finding what you were looking for without reading the whole thing start to finish were slim. Indexes were not yet common, nor were page numbers or tables of contents.
But these gaps were gradually filled. And as the book itself changed, so too did the crucial role of memory in reading. By about the year 400, the parchment codex, with its leaves of pages bound at the spine like a modern hardcover, had all but completely replaced scrolls as the preferred way to read. No longer did a reader have to unfurl a long document to find a passage. A reader could just turn to the appropriate page.
The first concordance of the Bible, a grand index that consumed the labors of five hundred Parisian monks, was compiled in the thirteenth century, around the same time that chapter divisions were introduced. For the first time, a reader could refer to the Bible without having previously memorized it. One could find a passage without knowing it by heart or reading the text all the way through. Soon after the concordance, other books with alphabetical indexes, page numbers, and tables of contents began to appear, and as they did, they again helped change the essence of what a book was.
The problem of the book before the index and table of contents is that for all the material contained in a scroll or between the covers of a book, it was impossible to navigate. What makes the brain such an incredible tool is not just the sheer volume of information it contains but the ease and efficiency with which it can find that information. It uses the greatest random-access indexing system ever invented—one that computer scientists haven’t come even close to replicating. Whereas an index in the back of a book provides a single address—a page number—for each important subject, each subject in the brain has hundreds if not thousands of addresses. Our internal memories are associational, nonlinear. You don’t need to know where a particular memory is stored in order to find it. It simply turns up—or doesn’t—when you need it. Because of the dense network that interconnects our memories, we can skip around from memory to memory and idea to idea very rapidly. From Barry White to the color white to milk to the Milky Way is a long voyage conceptually, but a short jaunt neurologically.
Indexes were a major advance because they allowed books to be accessed in the nonlinear way we access our internal memories. They helped turn the book into something like a modern CD, where you can skip directly to the track you want, as compared to unindexed books, which, like cassette tapes, force you to troll laboriously through large swaths of material in order to find the bit you’re looking for. Along with page numbers and tables of contents, the index changed what a book was, and what it could do for scholars. The historian Ivan Illich has argued that this represented an invention of such magnitude that “it seems reasonable to speak of the pre- and post-index Middle Ages.” As books became easier and easier to consult, the imperative to hold their contents in memory became less and less relevant, and the very notion of what it meant to be erudite began to evolve from possessing information internally to knowing where to find information in the labyrinthine world of external memory.
To our memory-bound predecessors, the goal of training one’s memory was not to become a “living book,” but rather a “living concordance,” a walking index of everything one had read, and all the information one had acquired. It was about more than merely possessing an internal library of facts, quotes, and ideas; it was about building an organizational scheme for accessing them. Consider, for example, Peter of Ravenna, a leading fifteenth-century Italian jurist (also, one gets the impression, one of the fifteenth century’s leading self-promoters) who authored one of the era’s most successful books on memory training. Titled Phoenix, it was translated into several languages and reprinted all across Europe. It was just the most famous of a handful of memory treatises created from the thirteenth century onward that helped make memory techniques that had long been the exclusive purview of scholars and monks available to a wider audience of doctors, lawyers, tradesmen, and everyday folks who just wanted to remember stuff. One finds books from the period on every variety of mnemonic subject, including how to use the art of memory in gambling, how to use it to keep track of debts, how to memorize the contents of ships, how to remember the names of acquaintances, and how to memorize playing cards. Peter, for his part, bragged of having memorized twenty thousand legal points, a thousand texts by Ovid, two hundred of Cicero’s speeches and sayings, three hundred sayings of philosophers, seven thousand texts from Scripture, as well as a host of other classical works.
For leisure, he would reread books cached away in his many memory palaces. “When I left my country to visit as a pilgrim the cities of Italy, I can truly say I carried everything I owned with me,” he wrote. To store all those images, Peter started with a hundred thousand loci, but he was always picking up new memory palaces on his travels across Europe. He constructed a mental library of sources and quotations on every important subject, classified alphabetically. He boasts, for example, that filed away in his brain under the letter A were sources on the subjects “de alimentis, de alienatione, de absentia, de arbitris, de appellationibus, et de similibus quae jure nostro habentur incipientibus in dicta littera A”—“about provisions, about foreign property, about absence, about judges, about appeals, and about similar matters in our law which begin with the letter A.” Each piece of knowledge was assigned a specific address. When he wished to expound on a given topic, he simply reached into the proper chamber of the proper memory palace and pulled out the proper source.
When the point of reading is, as it was for Peter of Ravenna, remembering, you approach a text very differently than most of us do today. Now we put a premium on reading quickly and widely, and that breeds a kind of superficiality in our reading, and in what we seek to get out of books. You can’t read a page a minute, the rate at which you’re probably reading this book, and expect to remember what you’ve read for any considerable length of time. If something is going to be made memorable, it has to be dwelled upon, repeated.
In his essay “The First Steps Toward a History of Reading,” Robert Darnton describes a switch from “intensive” to “extensive” reading that occurred as books began to proliferate. Until relatively recently, people read “intensively,” says Darnton. “They had only a few books—the Bible, an almanac, a devotional work or two—and they read them over and over again, usually aloud and in groups, so that a narrow range of traditional literature became deeply impressed on their consciousness.”
But after the printing press appeared around 1440, things began gradually to change. In the first century after Gutenberg, the number of books in existence increased fourteenfold. It became possible, for the first time, for people without great wealth to have a small library in their own homes, and a trove of easily consulted external memories close at hand.
Today, we read books “extensively,” without much in the way of sustained focus, and, with rare exceptions, we read each book only once. We value quantity of reading over quality of reading. We have no choice, if we want to keep up with the broader culture. Even in the most highly specialized fields, it can be a Sisyphean task to try to stay on top of the ever-growing mountain of words loosed upon the world each day.
Few of us make any serious effort to remember what we read. When I read a book, what do I hope will stay with me a year later? If it’s a work of nonfiction, the thesis, maybe, if the book has one. A few savory details, perhaps. If it’s fiction, the broadest outline of the plot, something about the main characters (at least their names), and an overall critical judgment about the book. Even these are likely to fade. Looking up at my shelves, at the books that have drained so many of my waking hours, is always a dispiriting experience. One Hundred Years of Solitude: I remember magical realism and that I enjoyed it. But that’s about it. I don’t even recall when I read it. About Wuthering Heights I remember exactly two things: that I read it in a high school English class and that there was a character named Heathcliff. I couldn’t say whether I liked the book or not.
I don’t think I’m an exceptionally bad reader. I suspect that many people, maybe even most, are like me. We read and read and read, and we forget and forget and forget. So why do we bother? Michel de Montaigne expressed the dilemma of extensive reading in the sixteenth century: “I leaf through books, I do not study them,” he wrote. “What I retain of them is something I no longer recognize as anyone else’s. It is only the material from which my judgment has profited, and the thoughts and ideas with which it has become imbued; the author, the place, the words, and other circumstances, I immediately forget.” He goes on to explain how “to compensate a little for the treachery and weakness of my memory,” he adopted the habit of writing in the back of every book a short critical judgment, so as to have at least some general idea of what the tome was about and what he thought of it.
You might think that the advent of printing, and the ability to more easily offload memories from brains onto paper, would have immediately rendered the old memory techniques irrelevant. But that’s not what happened. At least not right away. In fact, paradoxically, at exactly the moment when a neat rendering of history would suggest that the art of memory should have been on its way to obsolescence, it underwent its greatest renaissance.
Ever since Simonides, the art of memory had been about creating architectural spaces in the imagination. But in the sixteenth century, an Italian philosopher and alchemist named Giulio Camillo—known as “Divine Camillo” to his many admirers and “the Quack” to his many detractors—had the clever idea of making concrete what had for the previous two thousand years always been an ethereal idea. It occurred to him that the system would work a whole lot better if someone transformed the metaphor of the memory palace into a real wooden building. He imagined creating a “Theater of Memory” that would serve as a universal library containing all the knowledge of mankind. It may sound like the premise of a Borges story, but it was a very real project, with very real backers, and it made Camillo into one of the most famous men in all of Europe. King Francis I of France made Camillo promise that the secrets of his theater would never be revealed to anyone but him, and invested five hundred ducats toward its completion.
Camillo’s wooden memory palace was shaped like a Roman amphitheater, but instead of the spectator sitting in the seats looking down on the stage, he stood in the center and looked up at a round, seven-tiered edifice. All around the theater were paintings of Kabbalistic and mythological figures as well as endless rows of drawers and boxes filled with cards, on which were printed everything that was known, and—it was claimed—everything that was knowable, including quotations from all the great authors, categorized according to subject. All you had to do was meditate on an emblematic image and the entirety of knowledge stored in that section of the theater would be called immediately to mind, allowing you to “be able to discourse on any subject no less fluently than Cicero.” Camillo promised that “by means of the doctrine of loci and images, we can hold in the mind and master all human concepts and all the things that are in the entire world.”
That was a grand claim, and with hindsight, sure, it sounds like hocus-pocus. But Camillo was convinced that there existed a set of magical symbols that could organically represent the entire cosmos. Just as the image of the she-male represented the concept of e-mailing in that first memory palace I built to house Ed’s to-do list, Camillo believed there were images that could encapsulate vast and powerful concepts about the universe, and simply by memorizing those images, one would be able to understand the hidden connections underlying everything.
A scale wooden model of Camillo’s theater was exhibited in Venice and Paris, and hundreds—perhaps thousands—of cards were drafted to fill the theater’s boxes and drawers. The artists Titian and Salviati were enlisted to paint the theater’s symbolic imagery. However, that seems to be about as far as things got. The theater was never actually completed, and all that remains of the grand scheme is a short, posthumously published manifesto, The Idea of the Theater, dictated on his deathbed over the course of a week. Written in the future tense without any images or diagrams, it is, to put it mildly, a confusing book.
Though history had largely forgotten the man who promised the ultimate technology for remembering—“divine” lost out to “quack” in almost every assessment—Camillo’s reputation was resurrected in the twentieth century thanks to the efforts of the historian Frances Yates, who helped reconstruct the theater’s blueprints in her book The Art of Memory, and the Italian literature professor Lina Bolzoni, who has helped explain how Camillo’s theater was more than just the work of a nut job, but actually the apotheosis of an entire era’s ideas about memory.
The Renaissance, with its fresh translations of ancient Greek texts, brought about a renewed fascination with Plato’s old idea that there is a transcendental ideal reality of which our own world is but a pale shadow. In Camillo’s Neoplatonic vision of the universe, images in the mind were a way of accessing that ideal realm, and the art of memory was a secret key to unlocking the occult structure of the universe. Memory was transformed from a tool of rhetoric, as it had been for the ancients, or an instrument of pious meditation, as it had been for the medieval scholastic philosophers, into a purely mystical art.
Even more than Camillo, the greatest practitioner of this dark, mystical form of mnemonics was the Dominican friar Giordano Bruno. In his book On the Shadow of Ideas, published in 1582, Bruno promised that his art “will help not only the memory but also all the powers of the soul.” Memory training, for Bruno, was the key to spiritual enlightenment.
Bruno had literally come up with a new twist on the old art of memory. Drawing inspiration from the palindromically named thirteenth-century Catalan philosopher and mystic Ramon Llull, Bruno invented a device that would allow him to turn any word into a unique image. Bruno imagined a series of concentric wheels, each of which had 150 two-letter pairs around its perimeter, corresponding to all of the combinations that could be formed by the thirty letters of the alphabet (the twenty-three letters of classical Latin, plus seven Greek and Hebrew letters that didn’t have an equivalent in the Latin alphabet) and the five vowels: AA, AE, AI, AO, AU, BA, BE, BO, etc. On the innermost wheel, the 150 two-letter combinations were each paired with a different mythological or occult figure. On the perimeter of the second wheel were 150 actions and predicaments—“sailing,” “on the carpet,” “broken”—corresponding to another set of letter pairs. The third wheel consisted of 150 adjectives, the fourth wheel had 150 objects, and the fifth wheel had 150 “circumstances,” such as “dressed in pearls” or “riding a sea monster.” By properly aligning the wheels, any word up to five syllables long could be translated into a unique, vivid image. For example, the word crocitus, Latin for “croaking of a raven,” becomes an image of the Roman deity “Pilumnus advancing rapidly on the back of a donkey with a bandage on his arm and a parrot on his head.” Bruno was convinced that his opaque and divinely loopy invention was a major step forward for the arts of memory, analogous in scale, he proclaimed, to the technological leap from carving letters in trees to the printing press.
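At bottom, Bruno’s device is a combinatorial lookup table: each wheel maps a two-letter pair to one element of a compound image, and a word is encoded by reading one pair per wheel. The tiny sketch below illustrates the mechanics only; the wheel contents are invented placeholders (Bruno’s actual wheels each held 150 entries), and the function names are mine, not his.

```python
# Sketch of Bruno's wheel scheme with made-up miniature wheels.
# Wheel 1 holds figures, wheel 2 actions, wheel 3 circumstances.
WHEELS = [
    {"PI": "Pilumnus", "AP": "Apollo"},                          # figures
    {"LU": "advancing rapidly", "AE": "sailing"},                # actions
    {"MN": "on the back of a donkey", "AI": "dressed in pearls"} # circumstances
]

def syllable_pairs(word: str) -> list[str]:
    """Break a word into consecutive two-letter pairs, one per wheel."""
    word = word.upper()
    return [word[i:i + 2] for i in range(0, len(word), 2)]

def word_to_image(word: str) -> str:
    """Compose a compound memory image by aligning pairs with wheels."""
    parts = [wheel.get(pair, f"[no entry for {pair}]")
             for pair, wheel in zip(syllable_pairs(word), WHEELS)]
    return ", ".join(parts)
```

With these placeholder wheels, `word_to_image("pilumn")` yields "Pilumnus, advancing rapidly, on the back of a donkey": an arbitrary string of letters becomes one vivid, memorable scene, which is exactly the trick Bruno was selling.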
Bruno’s scheme, tinged with magic and the occult, deeply troubled the church. His unorthodox ideas, which included such heresies as a belief in Copernican heliocentrism and a conviction that Mary wasn’t really a virgin, ultimately landed him in the unforgiving arms of the Inquisition. In 1600, he was burned at the stake in the Campo dei Fiori in Rome and his ashes dispersed in the Tiber River. Today, a statue of Bruno stands in the plaza where he was immolated, a beacon to freethinkers and mental athletes the world over.
Once the Enlightenment had finally put to bed the Renaissance’s obsession with occult memory theaters and Llullian wheels, the art of memory passed into a new but no less harebrained era—the age of the “get smart quick” scheme—which to this day it hasn’t yet escaped. Over a hundred treatises on mnemonics were published in the nineteenth century, with titles like “American Mnemotechny” and “How to Remember.” They bear a conspicuous resemblance to the memory improvement books that can be found in the self-help aisle at bookstores today. The most popular of these nineteenth-century mnemonic handbooks was written by Professor Alphonse Loisette, an American “memory doctor” who, despite his prolific remembering, “had somehow forgotten that he was born Marcus Dwight Larrowe and that he had no degree,” as one article notes. The fact that I was able to find 136 used copies of Loisette’s 1886 book Physiological Memory: The Instantaneous Art of Never Forgetting selling for as little as $1.25 on the Internet is evidence of its once immense popularity.
Loisette’s book is essentially a collection of mnemonic systems for remembering sundry trivia, like the order of American presidents, the counties of Ireland, the Morse telegraphic alphabet, the British territorial regiments, and the names and uses of the nine pairs of cranial nerves. Loisette claimed his system was wholly unrelated to classical mnemonics, for which he professed disdain, and that he had discovered, entirely by himself, the “laws of natural memory.”
Loisette charged as much as twenty-five dollars (more than five hundred dollars in today’s money) to impart this knowledge to his pupils in seminars held all across the country, including classes at just about every prestigious university on the eastern seaboard. Inductees into the “Loisette System” were made to sign a contract binding them to secrecy, with a penalty of five hundred dollars (over ten thousand dollars in today’s money) should they divulge the professor’s methods. There was, it seems, good money to be made peddling secrets of memory improvement to a credulous American audience. According to the doctor’s own numbers, he earned today’s equivalent of almost a half million dollars over a single fourteen-week stretch in the winter of 1887.
In 1887, Samuel L. Clemens, better known as Mark Twain, first crossed paths with Loisette and enrolled in a memory course lasting several weeks. Twain used to say that his “memory was never loaded with anything but blank cartridges,” and had long had an interest in memory improvement. He came out of the course a deep believer in Loisette’s system. In fact, he was so taken with Loisette that he independently published a broadside claiming that ten thousand dollars an hour would be a bargain for the invaluable tricks the doctor was imparting. He would later regret this testimonial, but not until after it found its way onto virtually every piece of printed matter Loisette produced.
In 1888, G. S. Fellows, out of “that keen sense of justice and innate love of liberty, characteristic of every true American,” published a book called “Loisette” Exposed that set out to clarify that “Professor” “Loisette”—yes, both appellations bore their own set of scare quotes—was both a “humbug and a fraud.” The 224-page book revealed that his methods were either ripped off and repackaged from older sources or else obscenely oversold. Surely Loisette’s humbuggery and fraudulence ought to have been self-evident to someone as versed in the ways of the world as Mark Twain, but Twain was a profligate fad chaser, always interested in the next big thing. (His personal investment of $300,000—$7 million today—in the Paige typesetter, an early competitor of the Linotype, was only the most ruinous of several ambitious projects he poured his money into.)
Twain himself was continually experimenting with new memory techniques to aid him on the lecture circuit. At one point early in his career, he wrote the first letter of topics he planned to drop into his speech on each of his ten fingernails, but that never really worked, since audiences began to suspect him of having some sort of vain interest in his hands. During the summer of 1883, while he was writing Huckleberry Finn, Twain procrastinated by developing a game to teach his children the English monarchs. It worked by mapping out the lengths of their reigns using pegs along a road near his home. Twain was essentially turning his backyard into a memory palace. In 1885, he patented “Mark Twain’s Memory Builder: A Game for Acquiring and Retaining All Sorts of Facts and Dates.” Twain’s notebooks are filled with pages dedicated to his spatial memory game.
Twain imagined national clubs organized around his mnemonic game, regular newspaper columns, a book, and international competitions with prizes. He became convinced that the entire corpus of historical and scientific facts that any American student needed to know could be taught through his ingenious invention. “Poets, statesmen, artists, heroes, battles, plagues, cataclysms, revolutions ... the invention of the logarithm, the microscope, the steam-engine, the telegraph—anything and everything all over the world—we dumped it all in among the English pegs,” he wrote in his 1899 essay “How to Make History Dates Stick.” Unfortunately, like the Paige typesetter, the game turned out to be a financial bust, and Twain was eventually forced to abandon it. He wrote to his friend the novelist William Dean Howells, “If you haven’t ever tried to invent an indoor historical game, don’t.”
Like so many before him, Twain had gotten swept up in the promise of vanquishing forgetfulness. He had drunk of the same wacky elixir that had intoxicated Camillo and Bruno and Peter of Ravenna, and his story should probably be read as a cautionary tale to anyone embarking on a course of memory training. Perhaps, in retrospect, the resemblances between Dr. Loisette and today’s memory gurus should have sent me running for the hills. And yet they didn’t.
Twain lived in an age when the technologies for storing and retrieving external memories—paper, books, the recently invented photograph and phonograph—were still primitive compared to what we have today. He could not have foreseen how the proliferation of digital information at the beginning of the twenty-first century would hasten the pace at which our culture has become capable of externalizing its memories. With our blogs and tweets, digital cameras, and unlimited-gigabyte e-mail archives, participation in the online culture now means creating a trail of always-present, ever-searchable, unforgetting external memories that only grows as one ages. As more and more of our lives move online, more and more is being captured and preserved in ways that are dramatically changing the relationship between our internal and external memories. We are moving toward a future, it seems, in which we will have all-encompassing external memories that record huge swaths of our daily activity.
I was convinced of this by a seventy-three-year-old computer scientist at Microsoft named Gordon Bell. Bell sees himself as the vanguard of a new movement that takes the externalization of memory to its logical extreme: a final escape from the biology of remembering.
“Each day that passes I forget more and remember less,” writes Bell in his book Total Recall: How the E-Memory Revolution Will Change Everything. “What if you could overcome this fate? What if you never had to forget anything, but had complete control over what you remembered—and when?”
For the last decade, Bell has kept a digital “surrogate memory” to supplement the one in his brain. It ensures that a record is kept of anything and everything that might be forgotten. A miniature digital camera, called a SenseCam, dangles around his neck and records every sight that passes before his eyes. A digital recorder captures every sound he hears. Every phone call placed through his landline gets taped and every piece of paper Bell reads is immediately scanned into his computer. Bell, who is completely bald, often smiling, and wears rectangular glasses and a black turtleneck, calls this process of obsessive archiving “lifelogging.”
All this obsessive recording may seem strange, but thanks to the plummeting price of digital storage, the increasing ubiquity of digital sensors, and better artificial intelligence to sort through the mess of data we’re constantly collecting, it’s becoming easier and easier to capture and remember ever more information from the world around us. We may never walk around with cameras dangling from our necks, but Bell’s vision of a future in which computers remember everything that happens to us is not nearly as absurd as it might at first sound.
Bell made his name and fortune as an early computing pioneer at the Digital Equipment Corporation in the 1960s and ’70s. (He’s been called the “Frank Lloyd Wright of computers.”) He’s an engineer by nature, which means he sees problems and tries to build solutions. With the SenseCam, he is trying to fix an elemental human problem: that we forget our lives almost as fast as we live them. But why should any memory fade when there are technological solutions that can preserve it?
In 1998, with the help of his assistant Vicki Rozyki, Bell began backfilling his lifelog by systematically scanning every document in the dozens of banker boxes he’d amassed since the 1950s. All of his old photos, engineering notebooks, and papers were digitized. Even the logos on his T-shirts couldn’t escape the scanner bed. Bell, who had always been a meticulous preservationist, figures he’s probably scanned and thrown away three quarters of all the stuff he’s ever owned. Today his lifelog takes up 170 gigabytes, and is growing at the rate of about a gigabyte each month. It includes over 100,000 e-mails, 65,000 photos, 100,000 documents, and 2,000 phone calls. It fits comfortably on a hundred-dollar hard drive.
Bell can pull off some sensational stunts with his “surrogate memory.” With his custom search engine, he can, in an instant, figure out where he was and whom he was with at any moment in time, and then, in theory, check to see what that person said. And because he’s got a photographic record of everywhere he’s ever been and everything he’s ever seen, he has no excuse for ever losing anything. His digital memory never forgets.
Photographs, videos, and digital recordings are, like books, prosthetics for our memories—chapters in the long journey that began when the Egyptian god Theuth came to King Thamus and offered him the gift of writing as a “recipe for both memory and wisdom.” Lifelogging is the logical next step. Maybe even the logical final step, a kind of reductio ad absurdum of a cultural transformation that has been slowly unfolding for millennia.
I wanted to meet Bell and see his external memory at work. His project would seem to offer the ultimate counterargument to all the effort I was investing in training my internal memory. If we’re bound to have computers that never forget, why bother having brains that remember?
When I visited his immaculate Microsoft Research office overlooking the San Francisco Bay, Bell wanted to show me how he uses his external memory to help find things that have gone missing in his internal memory. Because memories are associative, finding the odd misplaced fact is often an act of triangulation. “The other day, I was trying to find a house I had looked at online,” Bell told me, leaning back in his chair. “All I could remember about it is that I was talking to the realtor on the phone at the time.” He pulled up a time line of his life on his computer, found the phone conversation on it, and then immediately pulled up all the Web sites he was looking at while he was on the phone. “I call them information barbs,” says Bell. “All you need is to remember a hook.” The more barbs there are stored in one’s digital memory, the easier it is to find what you’re looking for.
Bell has a wealth of external memories at his fingertips. By far the biggest problem Bell faces is how to avoid the fate of Funes and S and keep from drowning in a sea of meaningless trivia. So much of remembering happens at the moment of encoding, because we only tend to remember what we pay attention to. But Bell’s lifelog pays attention to everything. “Don’t ever filter, and never throw anything away” is his motto.
“Do you ever feel burdened by the sheer volume of memories you’re collecting?” I asked him.
He scoffed at the notion. “No way. I feel this is tremendously freeing.”
The SenseCam is not a beautiful machine. It’s a black box, about the size of a pack of cigarettes, that dangles around Bell’s neck. Inconspicuous it’s not. But then again, the first computers took up entire rooms and the earliest cell phones were the size of cinder blocks. It doesn’t take much imagination to see how future versions of the SenseCam could be embedded in a pair of eyeglasses, or inconspicuously sewn into clothing, or even somehow tucked under the surface of the skin or embedded in a retina.
For now, Bell’s internal and external memories don’t mesh seamlessly. In order for him to access one of his stored external memories, he still has to find it on his computer and “re-input” it into his brain through his eyes and ears. His lifelog may be an extension of him, but it’s not yet a part of him. But is it so far-fetched to believe that at some point in the not-too-distant future the chasm between what Bell’s computer knows and what his mind knows may disappear entirely? Eventually, our brains may be connected directly and seamlessly to our lifelogs, so that our external memories will function and feel as if they are entirely internal. And of course, they will also be connected to the greatest external memory repository of all, the Internet. A surrogate memory that recalls everything and can be accessed as naturally as the memories stored in our neurons: It would be the decisive weapon in the war against forgetting.
This may sound like science fiction, but already cochlear implants can convert sound waves directly into electrical impulses and channel them into the brain stem, allowing previously deaf people to hear. In fact, they’ve already been installed in more than 200,000 human heads. And primitive cognitive implants that create a direct interface between the brain and computers have already allowed paraplegics and patients with ALS (Lou Gehrig’s disease) to control a computer cursor, a prosthetic limb, even a digital voice simply through the force of thought. These neuroprosthetics, which are still highly experimental and have been implanted in only a handful of patients, essentially wiretap the brain, and allow direct communication between man and machine. The next step is a brain-computer interface that lets the mind exchange data directly with a digital memory bank, a project that a few cutting-edge researchers are already working on, and which is bound to become a major area of research in the decades ahead.
You don’t have to be a reactionary, a fundamentalist, or a Luddite to wonder whether plugging brains into computers and seamlessly merging internal and external memory would ultimately be such a terrific idea. Today bioethicists work up sweats over such hot-potato topics as genetic engineering and neurotropic “cognitive steroids,” but these kinds of enhancements are just tweaking the dials compared with what it would mean to fully marry our internal and external memories. A smarter, taller, stronger, disease-resistant person who lives to 150 is still, in the end, just a person. But if we could give someone a perfect memory and a mind that taps directly into the entire collective knowledge of humanity, well, that’s when we might need to consider expanding our vocabulary.
But perhaps instead of thinking of these memories as externalized or off-loaded—as categorically different from memories that reside in the brain—we should view them as extensions of our internal memories. After all, even internal memories are accessible only by degrees. There are events and facts I know I know, but I don’t know how to find. Even if I can’t immediately recall where I celebrated my seventh birthday or the name of my second cousin’s wife, those facts are nevertheless lurking somewhere in my brain, waiting for the right cue to pop back into consciousness, in just the same way that all the facts in Wikipedia are lurking just a mouse click away.
We Westerners tend to think of the “self,” the elusive essence of who we are, as if it were some starkly delimited entity. Even if modern cognitive neuroscience has rejected the old Cartesian idea of a homuncular soul that resides in the pineal gland and controls the human body, most of us still believe there is a distinct “me” somewhere up there driving the bus. In fact, what we think of as “me” is almost certainly something far more diffuse and hazy than is comfortable to contemplate. At the least, most people assume that their self could not possibly extend beyond the boundaries of their epidermis into books, computers, a lifelog. But why should that be the case? Our memories, the essence of our selfhood, are actually bound up in a whole lot more than the neurons in our brain. At least as far back as Socrates’s diatribe against writing, our memories have always extended beyond our brains and into other storage containers. Bell’s lifelogging project simply brings that truth into focus.