By Petar Raykov, Psychology Research Fellow at Sussex.
I am not one to enjoy promoting myself, but I have been at Sussex for a while now and I quite like the research I have been doing here, so here goes. Back in the day I started my PhD with Prof. Chris Bird and Prof. Jane Oakhill. My research focused, and I guess still focuses, on how prior knowledge and experiences affect the learning and representation of new everyday information.
OK, so let's unpack this a little. By new everyday information, I mean that my research often uses naturalistic video stimuli that show narrative plots unfolding over time. There is a rather new trend in psychology to use such stimuli, since they may generalize more easily to everyday life and can be particularly useful for addressing psychological questions such as how we comprehend events and discourse, and how we integrate information over time. Notably, like most trends, this one has very much been inspired by earlier work – seminal psychological studies have investigated how we comprehend and remember text (Bartlett, 1932; Bransford & Johnson, 1972; Johnson-Laird, 1983)[https://doi.org/10.1016/S0022-5371(72)80006-9], and how we process events and actions from videos (Newtson et al., 1977)[https://doi.org/10.1037/0022-35220.127.116.117].
To comprehend everyday situations, we often rely on our prior knowledge. For instance, when we go into a library, we expect to see a lot of books and people studying rather than dancing. My research has focused on distinguishing the effects of different types of prior knowledge. For instance, we have general knowledge about how a typical library works – such general or schematic knowledge has been learned over multiple experiences with libraries. However, we might also have specific prior knowledge about one particular library (e.g., what happened yesterday at the library at the University of Sussex).
In different studies I tested how specific and general prior knowledge affect learning, and which brain regions might support the integration of prior knowledge with new incoming information.
For instance, in one study I showed that simply knowing the previous topic of a conversation can lead to improved memory for a continuation video. This study replicated previous fMRI (a brain-scanning technique) results from Chris Bird's lab (Keidel et al., 2017)[https://doi.org/10.1093/cercor/bhx218] and extended them by showing which brain regions are involved in integrating this specific prior knowledge with newly incoming information. Furthermore, using newly developed machine learning methods, we were able to show that having specific prior knowledge changed participants' comprehension of, and memory for, the continuation video. Specifically, prior knowledge increased the consistency with which the second-half videos were interpreted, leading to more similar memories across the participants who had the prior knowledge (Raykov et al., 2018)[https://doi.org/10.1101/276683].
Here I will digress a bit, but hopefully the reader will find the trivia about these new machine learning methods very cool (e.g. see this cool visualization https://projector.tensorflow.org/ ). There has been a recent explosion in the development of natural language processing algorithms that allow engineers and researchers to quantify relationships between texts automatically. Specifically, such analysis methods can detect topics of conversation (https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation (Heusser et al., 2021)[https://doi.org/10.1038/s41562-021-01051-6]) and synonyms, quantify the similarity between sentences (https://tfhub.dev/google/universal-sentence-encoder/4), answer questions (https://en.wikipedia.org/wiki/BERT_(language_model)) and even generate and predict new text (https://en.wikipedia.org/wiki/GPT-2). These methods are particularly helpful since they can, to some extent, resemble people's semantic memories (their encyclopaedic knowledge of what objects and words mean; see Fig. 1 and this amazing review – (Kumar, 2021)[https://doi.org/10.3758/s13423-020-01792-x]). The methods also allow psychologists to quantify semantic coherence and similarity, and to fit new models addressing how people comprehend language or predict upcoming words (Goldstein et al., 2021; Huth et al., 2016)[https://doi.org/10.1101/2020.12.02.403477; https://doi.org/10.1038/nature17637] (see this interactive walkthrough of Huth's results https://gallantlab.org/huth2016/).
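To give a concrete (and deliberately toy) sense of what "quantifying the similarity between sentences" means, here is a minimal sketch in Python. It uses simple word-count vectors and cosine similarity rather than a pretrained encoder such as the Universal Sentence Encoder linked above – real applications would swap in learned embeddings – and the example sentences are made up for illustration.

```python
import numpy as np

def bag_of_words(sentence, vocab):
    """Count how often each vocabulary word appears in the sentence."""
    words = sentence.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sentences = [
    "students read books in the library",
    "people study books in the library",
    "dancers perform on the stage",
]
# Build a shared vocabulary, then represent each sentence as a count vector.
vocab = sorted({w for s in sentences for w in s.lower().split()})
vecs = [bag_of_words(s, vocab) for s in sentences]

sim_related = cosine_similarity(vecs[0], vecs[1])    # two "library" sentences
sim_unrelated = cosine_similarity(vecs[0], vecs[2])  # library vs. dancing
```

Even with crude count vectors, the two library sentences score as more similar than the library/dancing pair; pretrained sentence embeddings extend the same idea to sentences that share meaning but no words.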
In parallel, researchers in cognitive neuroscience and psychology have started to adopt new machine learning methods (various decoding and encoding models) to examine how different stimulus features predict brain activity. Indeed, this has proven a very fruitful approach. Combining brain imaging with such machine learning methods can help address longstanding questions in psychology. For instance, Ediz Sohoglu, who works at the University of Sussex, recently used encoding models to show how prediction errors affect speech perception (Sohoglu & Davis, 2020)[https://doi.org/10.7554/eLife.58077]. Warrick Roseboom and colleagues have used neural network models to build a computational model of how we might perceive time (Roseboom et al., 2019)[https://doi.org/10.1038/s41467-018-08194-7]. Interestingly, this model has since been updated and now also has implications for episodic memory and event processing (how we distinguish one event from another) (Fountas et al., 2021)[https://doi.org/10.1101/2020.02.17.953133]. These methods are not only useful for neuroimaging data. Nora Andermane, Jenny Bosten, Anil Seth and Jamie Ward applied a clustering algorithm to examine individual differences in a set of perceptual tasks that measured how prior expectations affect perception (Andermane et al., 2020)[https://doi.org/10.1016/j.concog.2020.102989]. Notably, this is just a small sample of the incredible research done at the University of Sussex, but hopefully it highlights the usefulness of such methods, especially when combined with rigorous experimental design, and points out to students that sometimes coding pays off. Now, to stop wasting your precious time, I will end my digression.
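The general encoding-model logic can be sketched in a few lines. This is a generic illustration with simulated data – none of the numbers, features or variable names come from the cited papers: fit a regularized linear mapping from stimulus features to (here, fake) responses on one half of the data, then score how well it predicts the held-out half.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 timepoints, 5 stimulus features, 3 "voxels".
n_time, n_feat, n_vox = 200, 5, 3
X = rng.standard_normal((n_time, n_feat))        # stimulus features over time
W_true = rng.standard_normal((n_feat, n_vox))    # hypothetical feature weights
Y = X @ W_true + 0.5 * rng.standard_normal((n_time, n_vox))  # noisy responses

# Fit on the first half of the timepoints, test on the second half.
X_tr, X_te = X[:100], X[100:]
Y_tr, Y_te = Y[:100], Y[100:]

# Ridge regression: W = (X'X + lambda * I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_feat), X_tr.T @ Y_tr)

# Encoding-model accuracy: correlation between predicted and held-out responses.
Y_pred = X_te @ W
r = [np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1] for v in range(n_vox)]
```

Real studies replace the simulated features with, e.g., acoustic or semantic descriptions of the stimulus and the fake responses with measured fMRI or MEG data, but the fit-then-predict-held-out-data logic is the same.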
In another study, we familiarised participants with one TV show and later asked them to watch and remember new video clips taken from the trained show, as well as clips taken from a similar but untrained show. The training allowed participants to learn schematic information about the main characters and their personalities over multiple episodes. This design allowed us to test how prior generic person knowledge affected the processing of new related information. I showed that schematic person knowledge helped participants remember more story-specific information from the new clips taken from the trained show. We were able to identify brain patterns of activity that were specific to the schematic person knowledge. Interestingly, these patterns were present in the medial prefrontal cortex, a brain region often associated with complex thought, reasoning and emotional processing. We observed that participants showing more robust evidence for these schematic patterns showed a larger memory benefit for the trained videos (see Fig. 2). Furthermore, using videos and pattern recognition methods, we were able to show that schema patterns were likely present throughout the whole duration of the video. These results start to elucidate how and when schema knowledge exerts its effects on new learning (Raykov et al., 2020, 2021)[https://doi.org/10.1080/02643294.2019.1685958; https://doi.org/10.1093/cercor/bhab027].
In more recent work I examined what information we actually represent when we remember an event (Bromis et al., 2021)[https://doi.org/10.1162/jocn_a_01802]. In a collaborative project with Konstantinos Bromis, we tested how repeated and unique elements of events are remembered. Most psychological work examines memory for unique items or narratives; however, in everyday life a lot of information is repeated and shared among events. For instance, on Monday and Tuesday I might work in the office and see Sam on both days. So, when I am remembering an event from Tuesday, I have to differentiate it from my memories of Monday, on which I also saw Sam. Apart from these repeated elements, there are also event-unique elements: I might have had different conversations on Monday and Tuesday, and I might have seen different combinations of people (e.g. on Monday I saw Sam and Dominika, but on Tuesday I saw Sam and Flavia). The traditional view in memory research is that, since each event is composed of a unique combination of elements, we simply represent all elements equally. However, this had not been tested empirically. Furthermore, since our memories may be particularly important for how we make predictions and decisions about the future, it might be the case that we do not represent frequently and predictably occurring elements in the same way as event-unique elements. Kostas and I addressed exactly this question in a two-session fMRI study using conversations written by Leah Wickens (a former undergraduate student at the University of Sussex). We asked participants to thoroughly learn conversations that comprised repeated and unique elements. Participants remembered the conversations, and both types of elements, very well. Indeed, behaviourally they rated the event-unique elements as more important and remembered more of them.
Yet when we examined how their brains represented these memories, we found that the repeated and predictable elements were actually more strongly represented during retrieval. This result argues against the standard view of holistic retrieval, in which we represent all elements equally, and starts to suggest that our memories may put more weight on repeated and predictable elements, since they may be more important in the future (Anderson & Milson, 1989; Gershman, 2017)[https://doi.org/10.1037/0033-295X.96.4.703; https://doi.org/10.1016/j.cobeha.2017.05.025]. You can see more about this research in this presentation (https://www.youtube.com/watch?v=I_cVAIKPyFE&t=0s).
More recently, I have started to investigate how prior knowledge can lead to memory errors and biases in interpretation. Indeed, although prior knowledge often benefits learning, it sometimes makes learning more difficult and can lead to false memories – where people remember things that never happened. Furthermore, I am interested in how we learn information that contradicts our prior expectations. This new line of research is done in collaboration with Dominika Varga (you can see an interview with her posted on the Sussex Neuroscience channel – https://www.youtube.com/channel/UCMhBRvfePUb1T_1XRn9hhcA ).
Beyond how prior knowledge affects the processing of new information, there are still multiple questions to be addressed. For instance, it is clear that prior knowledge effects are not necessarily linear and additive, so future research is needed to better understand the conditions that affect new learning. Furthermore, there have recently been large leaps in technology (e.g. the development of various new brain imaging techniques and new computational and analysis methods) that allow us to empirically test new research questions and gain a better understanding of how certain psychological processes are implemented in the brain. Yet multiple questions remain for future research. Indeed, it is still not clear how best to examine representations that are not currently active or in working memory, which may be necessary to fully understand the effects of prior knowledge. One of the biggest remaining questions is how newly learnt information becomes integrated with our previous memories and becomes part of our general knowledge about how the world works.
Hopefully you enjoyed my blabbing and found the research interesting. And who knows, you might even consider participating in brain imaging research. One cool perk is that you can get a picture of your brain (the Van Gogh brain in Figure 2 is actually mine) and even a 3D print of your brain (which can glow in the dark).