Comparison between human memory and artificial intelligence memory: Do language models remember like humans?
AI capabilities are constantly evolving, and large language models like GPT-4 are getting better at simulating human interaction and understanding language. But how close are these models to human memory, and do they remember information the way humans do?
In this article, we explain the basics of how human memory works and compare the way it stores information with the way language models do, to see where the two are similar and where they differ.
Human memory: its types and mechanism of action
Human memory is divided into three main types:
1- Sensory memory:
Sensory memory captures fleeting impressions of the stimuli around a person for a very brief period, such as a glimpse of something or a passing sound, and these impressions fade quickly.
2- Short-term memory:
Short-term memory retains information for a short period, such as holding a number in mind for a few minutes until it is written down. It handles the immediate information a person needs only momentarily.
3- Long-term memory:
It is the main storehouse of human experience: a person retains memories and personal knowledge in it for many years, sometimes for a lifetime. It consists of declarative memory, which stores information and events, and procedural memory, which stores skills and habits. Memories move from short-term to long-term memory through a process called memory consolidation, during which important memories are strengthened and fixed.
Human memory is dynamic; it can change and be affected by new experiences and emotions. The process of retrieving memories can be affected by many factors, such as emotional experiences, leading to the phenomenon of “reconstructing memories” rather than remembering them exactly as they happened.
How large language models process and store information:
Large language models rely on very different ways of storing information than human memory. A language model like GPT-4 is trained on very large datasets of books, articles, and other types of written text. Through the training process, the model learns patterns in the language and becomes able to identify relationships between words and phrases.
The language model stores these patterns in its parameters, which are a set of numerical values that the model relies on to provide its responses. This means that the memory of language models is not like human memory, but rather a type of stored pattern memory that reuses learned patterns to predict words and texts based on the input.
But language models don’t remember previous interactions with the user once a conversation ends, and they don’t accumulate experiences the way humans do. Each time a question is asked, the model generates a response from linguistic probabilities, focusing on the important parts of the input text to produce a contextually appropriate answer.
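The two ideas above, patterns stored as numerical values and statelessness between interactions, can be sketched with a toy bigram model. This is a deliberately simplified illustration, not how GPT-4 actually works: real models store patterns implicitly in billions of learned weights rather than explicit counts, and the tiny corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real models train on vast text collections.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# The "parameters" of this toy model are just bigram counts:
# numerical values encoding which word tends to follow which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word given the previous one.
    The function is stateless: it keeps no record of earlier calls,
    loosely mirroring how a model answers each prompt without
    remembering past interactions."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Calling `predict_next` a hundred times leaves the "model" unchanged; nothing about one query carries over to the next, which is the sense in which such systems lack an experiential memory.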
Key differences between human memory and language models:
There are major differences between human memory and language models, the most prominent of which are:
1- Human memory is constantly evolving; new experiences are added to it and it changes with experience and emotion. Language models, by contrast, are fixed once training is complete; they cannot adapt or update what they have learned without being retrained.
2- There is a big difference in how information is stored. Human memory is selective: people usually remember important and emotional events and forget unimportant details. Language models, however, retain learned patterns without distinguishing emotional information from ordinary information, so they lack the selective weighting that makes human memory unique.
3- Human memory can forget adaptively, which allows a person to focus on important matters and let go of unnecessary details. In language models, forgetting is a by-product of technical limits, not a result of intelligent adaptation.
Similarities between human memory and language models:
Even with the profound differences between human memory and language models, there are some similarities between them, including:
Pattern recognition:
Both human memory and language models rely on pattern recognition to make sense of information. In human memory, pattern recognition is used for learning, such as understanding words or recognizing people. Large language models work in a similar way, learning patterns to predict the next word in a sequence, making them able to mimic natural interactions.
Context:
Context plays an important role in improving information retrieval. In humans, context can enhance recall, such as being in the same place where an experience occurred, which makes it easier to retrieve memories associated with that place. In language models, a well-chosen prompt context enables an accurate response: the model matches the context of the prompt against similar contexts seen during training to produce an appropriate answer.
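The role of context can be illustrated with a small sketch in which the same ambiguous word is continued differently depending on the words that precede it. The two sentences below are invented for the example, and counting explicit word pairs is a stand-in for what real models do with learned weights.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus: the word "bank" continues differently
# depending on its surrounding context.
sentences = [
    "she deposited money in the bank account",
    "he fished along the river bank edge",
]

# Count which word follows each pair of preceding words.
context_counts = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i in range(1, len(words) - 1):
        context = (words[i - 1], words[i])
        context_counts[context][words[i + 1]] += 1

def next_after(prev_word, word):
    """Predict the next word given the two preceding words."""
    return context_counts[(prev_word, word)].most_common(1)[0][0]

# Same word "bank", different contexts, different continuations:
print(next_after("the", "bank"))    # → "account" (financial context)
print(next_after("river", "bank"))  # → "edge" (geographic context)
```

Just as returning to a familiar place cues a person's memories of it, the surrounding words cue the model toward the continuations it has associated with that context.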
Conclusion:
While large language models are making great strides in simulating some aspects of human memory, such as pattern recognition and context tracking, they lack the emotional depth and flexibility of human memory. This suggests that the future of AI is not to emulate human memory exactly, but to use the strengths of language models to augment human capabilities, opening up new avenues for innovation and discovery.