Image stabilization task used to develop Robot-Brain Interface


Ejaz N., Peterson K.D., Krapp H.G. (2011). An Experimental Platform to Study the Closed-loop Performance of Brain-machine Interfaces. JoVE, 49. http://www.jove.com/index/Details.stp?ID=1677, doi: 10.3791/1677

Researchers at Imperial College London have been using Scientifica's PatchStar micromanipulator and NPI's EXT 10-2F extracellular amplifier to create a brain-machine interface between the H1 cell of the fly visual system and a robot. The variable nature of neuronal signals poses many problems for developing reliable robot-brain interfacing. Holger Krapp and his team at Imperial developed a closed-loop image stabilisation task to test the reliability of different mathematical control laws fundamental to this interfacing, using the activity of the H1 interneuron to control a mobile robot.

The experiment began with two computer monitors displaying moving vertical lines in front of the fly. The robot was placed on a constantly moving turntable, and the activity of the fly's H1 cell was used to stabilise the robot relative to its environment. The H1 neuron is well characterised as responding mainly to horizontal back-to-front motion, which made it an ideal target for the single-cell recordings that were subsequently converted into motion commands. Ejaz and colleagues used the PatchStar to position the recording electrode with the precision needed to record from this specific cell.

A camera mounted on the robot sent a stream of images, generated by the relative motion between the robot and the turntable, back to the computers driving the monitors in front of the fly. The loop is closed when control algorithms convert the H1 cell's signals (spikes per second, which indicate the speed of pattern motion in front of the fly) into a robot speed that drives the robot's DC motors.
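The core of such a loop can be illustrated with a minimal sketch: a control law that maps the H1 firing rate onto a compensatory robot rotation speed. This is a simple proportional controller for illustration only; the function name, gain, baseline and clamping values are assumptions, not the laws evaluated in the actual study.

```python
# Illustrative sketch of one closed-loop control step: convert an H1
# firing rate into a robot rotation command. All parameter values are
# hypothetical, chosen only to show the shape of the computation.

def spike_rate_to_robot_speed(spike_rate_hz, baseline_hz=25.0,
                              gain=0.4, max_speed=100.0):
    """Proportional control law: the deviation of the H1 firing rate from
    its spontaneous baseline is treated as an image-slip signal and scaled
    into a compensatory rotation speed, clamped to the motors' limits."""
    error = spike_rate_hz - baseline_hz            # image slip signalled by H1
    speed = gain * error                           # proportional correction
    return max(-max_speed, min(max_speed, speed))  # respect motor limits

# One pass over a stream of measured firing rates, as the robot would
# receive them on each control cycle:
rates = [25.0, 60.0, 10.0]
commands = [spike_rate_to_robot_speed(r) for r in rates]
```

In the real experiment the mapping from spike rate to motor command is exactly what the different control laws under test vary; a proportional rule is just the simplest member of that family.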

This experiment tells us a great deal about the algorithms needed to optimise the performance of brain-robot interfacing. It is an exciting step towards further developments in the field, which could help in building freely moving robots and in investigating the algorithms necessary for collision avoidance.
