How we learn to distinguish between what is or isn’t important

Primary visual cortex (V1) neurons respond with increasing distinguishability to images that predict a reward versus those with a neutral outcome, and this growing separation accompanies improvements in behavioural performance.

These findings, published in Neuron, are the outcome of a study by Professor Sonja Hofer and her colleagues, who are investigating how the brain optimises the processing of sensory stimuli through learning.

The researchers showed that within one week of training, mice learned to distinguish relevant from irrelevant visual information and modified their behaviour accordingly.

Professor Hofer said: "Our previously learnt knowledge, our expectations and the context we are in can have a great impact on our visual perception of the environment."

To investigate this, the team set up an experiment in which mice ran on a treadmill through a virtual-reality environment: a corridor whose walls displayed various patterns. An approach pattern (black and white circles) of random length was interspersed at random points with either a vertical grating or an angled grating pattern. The mice controlled their position in the corridor by running.

If the mice licked while moving through the vertical grating sections, they received a reward (a drop of soya milk); licking during the angled grating pattern earned nothing.
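This reward contingency is a classic go/no-go design. The following is a minimal sketch of that trial logic; the function names, trial structure, and the simulated "perfectly trained" mouse are illustrative assumptions, not the authors' actual experiment code.

```python
import random

REWARD_PATTERN = "vertical"   # licking here earns a drop of soya milk
NEUTRAL_PATTERN = "angled"    # licking here earns nothing

def trial_outcome(pattern: str, licked: bool) -> str:
    """Classify one trial by the wall pattern shown and the mouse's response."""
    if pattern == REWARD_PATTERN:
        return "hit" if licked else "miss"
    return "false_alarm" if licked else "correct_reject"

def run_session(n_trials: int = 100, seed: int = 0) -> float:
    """Simulate a session with a perfectly trained mouse; return its accuracy."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        pattern = rng.choice([REWARD_PATTERN, NEUTRAL_PATTERN])
        licked = (pattern == REWARD_PATTERN)  # trained mouse licks only for reward pattern
        if trial_outcome(pattern, licked) in ("hit", "correct_reject"):
            correct += 1
    return correct / n_trials
```

A well-trained mouse corresponds to a session accuracy near 1.0; the study's behavioural criterion of over 90% correct maps onto the same hit/correct-reject count.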

After six days of training, the mice performed the appropriate behaviours with greater than 90% accuracy: they licked only while the walls showed the vertical grating pattern and slowed down, extending their time in these sections, whereas they ran through the angled grating pattern without slowing or licking.

Optogenetics was used to silence V1 neurons during the task. The mice were unable to perform the visual discrimination while these neurons were inactivated, showing that V1 activity is necessary for the behaviour.

Chronic two-photon calcium imaging of single V1 cells during the trials revealed a robust and progressive population-wide increase in neural selectivity in cortical layer 2/3 (L2/3). This reflected both greater day-to-day stability of single-cell responses and an increase in the number of selective cells.
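One common way to quantify how distinguishable a cell's responses to the two gratings are is the discriminability index d′, the difference in mean response scaled by the pooled standard deviation. This is an illustrative choice of metric, not necessarily the paper's exact analysis:

```python
import math

def selectivity_dprime(resp_a, resp_b):
    """d' between a cell's responses to two stimuli: difference of the
    means divided by the pooled (root-mean-square) standard deviation."""
    mu_a = sum(resp_a) / len(resp_a)
    mu_b = sum(resp_b) / len(resp_b)
    var_a = sum((r - mu_a) ** 2 for r in resp_a) / len(resp_a)
    var_b = sum((r - mu_b) ** 2 for r in resp_b) / len(resp_b)
    pooled_sd = math.sqrt((var_a + var_b) / 2)
    return (mu_a - mu_b) / pooled_sd

# Heavily overlapping response distributions give a d' near zero
# (unselective cell); well-separated distributions give a large |d'|.
weak = selectivity_dprime([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])
strong = selectivity_dprime([5.0, 6.0, 7.0], [1.0, 2.0, 3.0])
```

On this measure, the reported learning effect would appear as single cells' |d′| growing over days, and as more cells crossing a selectivity threshold.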

The research also demonstrated that many internal and external signals affect the processing of visual stimuli. The mice were given a second discrimination task to carry out simultaneously with the visual task: they now received a reward (a drop of soya milk) when exposed to one odour but nothing for a second odour. The responses of V1 neurons to the visual stimuli were less selective when the mice were engaged in both tasks; the visual stimuli had become less important and were analysed less effectively by the brain.

Dr Adil Khan, one of the lead authors of the study, said: "Remarkably, the expectation of a stimulus even before it appears, and the anticipation of a reward also altered the activity of specific brain cells." The brain is thus able to process the same stimulus in different ways depending on its importance.

Paper Reference

Poort J, Khan AG, Pachitariu M, Nemri A, Orsolic I, Krupic J, Bauza M, Sahani M, Keller GB, Mrsic-Flogel TD, Hofer SB. Learning Enhances Sensory and Multiple Non-sensory Representations in Primary Visual Cortex. Neuron (2015). doi: 10.1016/j.neuron.2015.05.037
