CUED Publications database

Reinforcement learning models of aversive learning and their translation to anxiety disorders

Seymour, B and Norbury, A (2017) Reinforcement learning models of aversive learning and their translation to anxiety disorders. In: WASAD, -- to -- pp. 1283-1284.

Full text not available from this repository.

Abstract

Computational neuroscience offers a relatively new way to approach the systems neuroscience of aversive learning, in which the goal is to reverse-engineer learning processes and show how the associated behaviour can be understood as a set of definable and quantifiable information processing operations. At the heart of this approach is the core computational model, which reflects a sort of 'source code' of punishment. If we can determine this, then we have an understanding that is in principle sufficient to explain and quantify aversion in any situation, including clinical conditions such as anxiety disorder. The central idea in models of aversive learning is that punishment commands a teaching signal that optimises behaviour (i.e. minimises harm) and can be described by models from Reinforcement Learning (RL). RL describes a general algorithmic (mathematical) method for learning from experience: predicting the occurrence of inherently salient events, and learning actions to exert control over them (maximising rewards, minimising punishment). In RL, an agent learns state or action value functions, or direct action policies, through interacting with the world. These functions can be learned by computing the error between predicted and actual outcomes, and using the error to improve future predictions and actions. I will review studies showing that these models offer a compelling account of many aspects of Pavlovian and instrumental learning, yielding a basic neural architecture of motivation and decision making that can be simulated (in autonomous agents) as an effective and efficient working aversive system. However, the application of these models to clinical disorders relies on plausible models of how the system might be abnormally structured or parameterised in susceptible people. An increasingly popular mechanistic model of anxiety disorder is that people over-generalise across the continuum of incoming stimuli.
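The prediction-error learning rule described above can be sketched in a few lines. This is a generic illustration of the Rescorla-Wagner / temporal-difference family of updates, not the specific model presented in the talk; the learning rate `alpha` is an arbitrary illustrative choice.

```python
# Minimal sketch of prediction-error learning of a punishment value.
def update(value, outcome, alpha=0.1):
    """One learning step: move the prediction toward the observed outcome."""
    delta = outcome - value      # prediction error (actual - predicted)
    return value + alpha * delta

# Repeated pairing with punishment (outcome = 1) drives the prediction up,
# with the error (and hence the update) shrinking as learning proceeds.
v = 0.0
for _ in range(100):
    v = update(v, outcome=1.0)
```

After repeated punished trials the predicted value converges towards the outcome, at a rate governed by `alpha`.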
Generalisation is well studied for Pavlovian learning, but it is not well understood whether and how it applies to instrumental behaviour, i.e. avoidance. I will present a reinforcement learning model of generalisation in avoidance learning, and show how generalisation functions (over and above perceptual uncertainty) contribute to learned action values in behavioural and brain responses. I will also show how the parameters from this model can be used to predict trait anxiety in a large population of subjects, supporting the hypothesis that over-generalisation may be a key factor in the pathogenesis of anxiety disorder.
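To make the notion of generalisation concrete, the toy model below spreads each prediction error across neighbouring stimuli on a 1-D continuum via a Gaussian similarity kernel. This is an illustrative sketch only, not the model presented in the talk; the kernel width `sigma` (degree of generalisation) and learning rate `alpha` are hypothetical parameters.

```python
import math

def similarity(s1, s2, sigma):
    """Gaussian generalisation kernel over a 1-D stimulus continuum."""
    return math.exp(-((s1 - s2) ** 2) / (2 * sigma ** 2))

def train(trials, n_stimuli=10, alpha=0.3, sigma=1.0):
    """trials: list of (stimulus_index, punishment in {0, 1}).
    Returns learned punishment predictions V, one per stimulus."""
    V = [0.0] * n_stimuli
    for s, punished in trials:
        delta = punished - V[s]            # prediction error at the cue
        for j in range(n_stimuli):         # error generalises to similar stimuli
            V[j] += alpha * similarity(s, j, sigma) * delta
    return V

# Punish only stimulus 4; a broad sigma spreads the learned punishment
# prediction to perceptually similar, never-punished neighbours.
V = train([(4, 1)] * 50, sigma=2.0)
```

In this sketch, a larger `sigma` corresponds to broader over-generalisation: stimuli that were never paired with punishment nonetheless acquire high predicted aversive value, the kind of effect the abstract links to trait anxiety.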

Item Type: Conference or Workshop Item (UNSPECIFIED)
Subjects: UNSPECIFIED
Divisions: Div F > Computational and Biological Learning
Depositing User: Cron Job
Date Deposited: 18 Oct 2017 20:07
Last Modified: 06 Apr 2021 01:46
DOI: