The human memory system is a profound process: it encodes environmental information as physiological signals. In doing so, it must employ an encoder - something that converts incoming information, which arrives as electrical signals from sensory organs, into a form that can be stored long term and accessed with relative ease.
This is a profound task. Anyone who has ever worked with an audio file knows a thing or two about it. Music comes out of a musician as a continuous, analog stream of pressure changes in the air. Recording equipment must somehow capture these pressure changes, sampling them at some rate (say, 44,100 times per second - it can't catch all of them, right down to infinitesimally small slices of time), and encode the changes as a digital signal, stored as a .wav file, for example. This file, despite the losses of information that occurred when the analog signal became digital, is considered lossless. But the problem is, it's huge! The human brain does not have room for lossless data.
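To get a feel for just how huge, here is a back-of-the-envelope sketch (assuming CD-quality audio - 44,100 samples per second, 16 bits per sample, two channels - and a hypothetical three-minute song):

```python
# Rough size of an uncompressed CD-quality WAV file.
# Assumptions: 44,100 samples/sec, 16 bits (2 bytes) per sample,
# 2 channels (stereo), and a 3-minute song.
sample_rate = 44_100      # samples per second
bytes_per_sample = 2      # 16-bit audio
channels = 2              # stereo
duration_sec = 3 * 60     # 3 minutes

size_bytes = sample_rate * bytes_per_sample * channels * duration_sec
print(f"{size_bytes / 1_000_000:.1f} MB")  # ~31.8 MB
```

Over thirty megabytes for a single song, with no clever tricks at all.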
Computers have a handy solution for dealing with .wav files: convert them to mp3. If you compare the file sizes, an mp3 is about a tenth the size of the .wav it came from, despite both containing the same song at the same length. The difference lies in the concept of 'compression.'
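The same back-of-the-envelope arithmetic shows where the factor of ten comes from (assuming a typical 128 kbps mp3 bitrate for the hypothetical three-minute song above):

```python
# Size of the same 3-minute song encoded as a 128 kbps mp3 (a common bitrate).
bitrate_bps = 128_000     # 128 kilobits per second
duration_sec = 3 * 60     # 3 minutes

size_bytes = bitrate_bps / 8 * duration_sec
print(f"{size_bytes / 1_000_000:.1f} MB")  # ~2.9 MB, roughly a tenth of the WAV
```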
The basic ideas emerged from a man named Claude Shannon in the middle of the twentieth century. Shannon was an electrical engineer. His master's thesis, completed in 1937, demonstrated that Boolean algebra could be used to design digital circuits, and it has been called the most significant master's thesis ever written (consider that when you're studying the subtle effects of protein A on the localization of protein B's less prominent cousin, protein Ba, in some subcompartment of the nucleus in the presence of chemicals C and D at room temperature in rainbow trout because it may have something to do with cancer). But the important ideas about information came a decade later, in his 1948 paper 'A Mathematical Theory of Communication.'
Basically, Shannon defined the entropy of a body of information to quantify the extent to which one piece of it cannot be predicted from the others. The higher the entropy, the less predictable the information. But the power of the idea emerged as a consequence: if there is redundancy in a set of information (if parts can be predicted from other parts), then the information can be compressed into a less redundant form. The entropy of the information then sets the limit on compression: it equals the size of the information squeezed into its smallest, non-redundant form.
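Concretely, for a source whose symbols occur with probabilities p_i, Shannon's entropy is H = -Σ p_i log2(p_i), measured in bits per symbol. A minimal sketch (the example strings are made up for illustration):

```python
import math
from collections import Counter

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy of a message's symbol distribution, in bits per symbol."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A highly redundant message is predictable, so its entropy is low;
# a varied message is less predictable, so its entropy is higher.
print(entropy_bits_per_symbol("aaaaaaaaab"))   # ~0.47 bits/symbol
print(entropy_bits_per_symbol("abcdefghij"))   # ~3.32 bits/symbol
```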
With music, instead of encoding every piece of the music as it comes in through the microphone, we can represent the parts that are similar to earlier parts (say, the same guitar note) by pointing back to the earlier instance, instead of actually storing it again. Naturally, it is far cheaper to store a 'pointer' in memory than to store the whole thing we're 'pointing' to, and thus the file can be compressed significantly.
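A toy sketch of this pointer idea (this is back-reference compression in the spirit of LZ77, not what mp3 literally does; the encoding format here is invented purely for illustration):

```python
def compress(data: str, min_match: int = 4) -> list:
    """Toy back-reference compressor: emit literal symbols, or (offset, length)
    pointers into already-seen data when a long enough repeat is found."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        # Search the already-seen prefix for the longest match.
        for j in range(i):
            length = 0
            while (j + length < i
                   and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            out.append(("ptr", best_off, best_len))  # cheap pointer back
            i += best_len
        else:
            out.append(("lit", data[i]))             # store the symbol itself
            i += 1
    return out

# The repeated riff is stored once; later repeats become small pointers.
print(compress("guitar-riff guitar-riff guitar-riff"))
```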
If you speak to an audiophile, though, you will find that they want nothing to do with compressed music. It is too 'lossy' - it is missing information, they say. Naturally, they are right. The guitar tone that was repeated, which we compressed by pointing to the original instance, may not actually have been exactly the same each time. In fact, it probably wasn't. The subtle differences, be it in the way the note was picked or the way it flowed out of or into other parts of the melody, are lost in the compressed version, and those with trained ears can literally hear this loss in the way the track plays - it doesn't sound real enough.
For most of us, the compressed mp3 file does just fine. It is efficient and effective in re-presenting a piece of music. What can we say about the way the brain stores information about itself and its environment? What algorithms does the brain use to compress such information? Is the compressed version sufficient?
Information comes into our brain by way of neurons (brain cells) linked to sensory organs (eyes, ears, skin, nose, tongue). It travels as electricity down neuronal wires and flows through complex neural circuits, the structure and dynamics of which 'process' the information. What this actually means is a raging mystery, and understanding this mystery sits right up front in the auditorium of science's goals.
Let's gloss over the technical neuroscience and consider how we remember things. By the time we're adults, say, we have experienced numerous contexts and objects and processes, have assimilated them into our understanding, structured them relative to one another, assigned associations between them, grouped them under labels and classifications, regrouped them, redefined their associations, restructured them, and so on. The net result is our psychological person - the sum of our memories, valuations, expectations, and opinions. Any new information comes into the brain through this already established network that constitutes an individual's psychological person.
So how should information from the environment, at this stage of the game, be remembered? Ideally, if the information is identical or particularly similar to other memories (to other experiences), then we need not store it twice, and can simply refer to the previous encoding as part of the memory of this 'new' experience. Similarly, if we expect certain things to happen, and our expectations are corroborated by the outcome of events, then perhaps we need not remember the new event at all - simply our expectation, and the fact that it was borne out.
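One way to make this concrete is delta encoding against a prediction: store a cheap confirmation when reality matches what was expected, and pay for a full record only when it doesn't. A minimal sketch (the numeric events, predictor, and tolerance here are invented for illustration, not a model the text specifies):

```python
def encode_events(events: list, expected: list, tolerance: float = 0.1) -> list:
    """Store an 'as expected' marker when an event matches its prediction,
    and the full value only when it deviates beyond the tolerance."""
    memory = []
    for event, prediction in zip(events, expected):
        if abs(event - prediction) <= tolerance:
            memory.append("as expected")          # cheap: confirm the expectation
        else:
            memory.append(("surprise", event))    # expensive: store the novelty
    return memory

# Mostly-confirmed expectations compress well; surprises cost storage.
print(encode_events([1.0, 2.0, 3.5, 4.0], expected=[1.0, 2.0, 3.0, 4.0]))
# ['as expected', 'as expected', ('surprise', 3.5), 'as expected']
```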
Now, I hope you can see the danger in this last assertion. Indeed, this fact lies at the heart of all human bias. We see what we expect to see, we hear what we want to hear, we feel what we want to feel. More often than we might suppose, we witness something that does not adhere to our expectations. Nevertheless, we massage into being a new memory that supposes it happened the way we expected it to, and thereby save energy on memorizing new and unexpected information. Partial neglect breeds higher efficiency.
It seems reasonable that the brain should evolve like this - to be able to store new experiences using less energy by assimilating them into old, expected, experiences. To compress information by squeezing it into a framework, a belief system, a system of expectations. To consolidate one's perspectives and confirm one's own biases. To be right. It is easy to know you are right.
But to be wrong? Novel information is energy expensive. It requires more activity, more attention, more Being, to assimilate the new. It suggests that we were not, in fact, completely right. Suggests that there is more to learn (that there is infinite to learn), that the world is dynamic and we must be too. Suggests that stagnation can corrupt the soul. Does it suggest that we should just forget the old? Let go of memory of past and future? Be entirely open and aware of the moment, only memorizing with perfect accuracy what Is, Now?