MMEE2024

Mathematical Models in Ecology and Evolution

July 15-18, 2024
Vienna, Austria

"The effect of early life experiences on learning in jumping spiders: A reinforcement learning model"

Rajendra, Dharanish

Animal behaviour is a highly state-dependent decision-making process: the outcome of an animal's decision depends on the state of the external environment and on its internal state, and the decision can in turn change the environment over time. The reinforcement learning framework (1) is well suited to modelling how state-dependent behaviours are learned, with the animal learning to make decisions that maximise the rewards gained from its actions. This learning ability can itself be shaped by early life experiences, which play an important role in determining an animal's cognitive abilities and personality (2,3). We use the hunting behaviour of jumping spiders as a case study to highlight these processes. Jumping spiders often hunt prey larger and stronger than themselves, risking retaliation that can lead to injury and even death. Despite this, they have developed specialised hunting strategies and continue to learn and adapt them as they hunt, particularly when faced with a choice between multiple prey. It is this choice between prey, in addition to the hunting process itself, that our model captures. In this talk, I will present our model of learning in jumping spiders within the reinforcement learning framework. In the model, the "brain" or decision-making centre of the animal is a deep neural network that takes the state, or features of the state (external and internal), as input and outputs the action to take. As the spider experiences and learns, the weights connecting the nodes of the network are updated accordingly. These weights can also be frozen as the spider ages, capturing the developmental process. Using this model, I show how early developmental experiences affect hunting behaviour in adulthood.

References
(1) Sutton and Barto, 2018.
(2) Arnold and Taborsky, 2010.
(3) Liedtke, Redekop, et al., 2015.
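As a rough illustration of the kind of model the abstract describes, the Python sketch below (using PyTorch) implements a small action-value network with an early layer that can be frozen. Everything specific here is an assumption for illustration only: the feature set, the action set, the network sizes, the temporal-difference update, and the freeze_early_layers mechanism are placeholders, not the actual model presented in the talk.

import torch
import torch.nn as nn

N_FEATURES = 4  # hypothetical: prey size, prey distance, hunger, injury risk
N_ACTIONS = 3   # hypothetical: attack prey A, attack prey B, wait/retreat

class SpiderBrain(nn.Module):
    """Stand-in for the spider's decision-making centre."""
    def __init__(self):
        super().__init__()
        self.early = nn.Sequential(nn.Linear(N_FEATURES, 16), nn.ReLU())
        self.late = nn.Linear(16, N_ACTIONS)

    def forward(self, state):
        # Map state features (external and internal) to one value per action.
        return self.late(self.early(state))

    def freeze_early_layers(self):
        # Illustrative developmental freeze: weights shaped by early-life
        # experience stop updating once the spider matures.
        for p in self.early.parameters():
            p.requires_grad = False

brain = SpiderBrain()
optimizer = torch.optim.Adam(brain.parameters(), lr=1e-3)

def td_update(state, action, reward, next_state, gamma=0.95):
    # One temporal-difference (Q-learning style) update after a decision.
    q = brain(state)[action]
    with torch.no_grad():
        target = reward + gamma * brain(next_state).max()
    loss = (q - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Juvenile phase: all weights are plastic.
s = torch.rand(N_FEATURES)
td_update(s, action=0, reward=1.0, next_state=torch.rand(N_FEATURES))

# Adult phase: early layers are fixed; only the later weights keep adapting.
brain.freeze_early_layers()

In the full model one would presumably train over many simulated hunts and vary the early-life experience regime before freezing; the update here is shown for a single decision only.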
