WIDE LEARNING AS A NEW TOOL FOR CREATING SEMANTIC NETWORKS IN LONG-TERM MEMORY
Project Description
The goal of my PhD is to demonstrate how semantic networks form in long-term memory and how they are used in natural language. Many memory models describe information as organized through semantic associations between concepts, forming semantic networks. However, learning and memory studies often rely on paradigms such as list memorization, which reveal little about how these networks are actually formed.
In my research, I developed a Wide Learning paradigm designed to create detailed long-term representations by exposing participants to a broad range of information and diverse aspects of the studied material. This approach successfully generated detailed and enduring representations for individual objects as well as for novel categories. Moreover, I demonstrated that once semantic networks were formed in memory, participants could access and use them in natural language, reflecting the structure created in their memory.
To examine the formation and use of these semantic networks, I use two complementary assessments within the same framework. Participants rate the semantic proximity between studied objects throughout the learning stages, with these ratings analyzed using semantic network models. Additionally, they write short free texts describing the use of these objects before and after learning, which are analyzed using natural language processing models. By examining the correlations between the ratings and the texts, I show that participants’ subjective ratings are reflected in the language they use.
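The correlation between the two assessments can be illustrated with a small sketch. This is a hypothetical toy example, not the project's actual analysis pipeline: the ratings and embedding vectors below are invented stand-ins (in practice, the embeddings would come from an NLP model applied to participants' free texts), and the comparison uses cosine distance and a Spearman correlation as one plausible way to relate the two pairwise structures.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy example: 4 studied objects (hypothetical data).
# Pairwise semantic-proximity ratings, ordered over pairs
# (0,1), (0,2), (0,3), (1,2), (1,3), (2,3).
ratings = np.array([6, 2, 1, 2, 1, 5], dtype=float)

# Stand-in text-derived vectors, one per object. In the actual study
# these would be produced by an NLP model from participants' free texts.
embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.2],
    [0.1, 0.8, 0.3],
])

# Cosine distances between the text representations, condensed into the
# same pair ordering as `ratings`.
text_dist = pdist(embeddings, metric="cosine")

# Ratings encode proximity while pdist yields distances, so negate the
# ratings before correlating the two pairwise structures.
rho, p = spearmanr(-ratings, text_dist)
print(f"rho = {rho:.2f}")
```

A positive correlation here would mean that object pairs the participant rated as semantically close are also described in more similar language, which is the kind of rating-text correspondence the paragraph above refers to.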
In my current study, I directly compare semantic networks formed through wide learning with those created through memorization. I predict that while both approaches will produce semantic networks, those formed via wide learning will be more detailed, more accessible, and enable more flexible use in natural language than those formed through repetition-based learning. Furthermore, I retest participants six weeks after the learning phase to evaluate knowledge retention. My prediction is that the more detailed semantic networks formed through wide learning will remain more stable over time and support better performance on knowledge tests.
About Me
I hold an M.A. in Cognitive Psychology and a B.A. in Psychology and Philosophy, both from Tel Aviv University. I am passionate about understanding how new information is organized in memory through different types of learning.
In my master’s thesis, I demonstrated that wide learning creates long-term representations for unfamiliar objects. I also showed that semantic knowledge interacts with visual memory, even when the learning task does not explicitly target visual features.
During my PhD, I aim to deepen my understanding of the processes underlying the formation of semantic networks and explore the advantages of wide learning compared to other learning paradigms.