Decoding information for grasping from the macaque dorsomedial visual stream

Article


Filippini, M., Breveglieri, R., Akhras, M., Bosco, A., Chinellato, E. and Fattori, P. 2017. Decoding information for grasping from the macaque dorsomedial visual stream. The Journal of Neuroscience. 37 (16), pp. 4311-4322. https://doi.org/10.1523/JNEUROSCI.3077-16.2017
Type: Article
Title: Decoding information for grasping from the macaque dorsomedial visual stream
Authors: Filippini, M., Breveglieri, R., Akhras, M., Bosco, A., Chinellato, E. and Fattori, P.
Abstract

Neurodecoders have been developed by researchers mostly to control neuroprosthetic devices, but also to shed new light on neural functions. In this study, we show that signals representing grip configurations can be reliably decoded from neural data acquired from area V6A of the monkey medial posterior parietal cortex. Two Macaca fascicularis monkeys were trained to perform an instructed-delay reach-to-grasp task in the dark and in the light toward objects of different shapes. Population neural activity was extracted at various time intervals: on vision of the objects, during the delay before movement, and during grasp execution. This activity was used to train and validate a Bayes classifier for decoding objects and grip types. Recognition rates were well above chance level for all the epochs analyzed in this study. Furthermore, we detected slightly different decoding accuracies depending on the task's visual condition. Generalization analysis, performed by training and testing the system on different time intervals, demonstrated that a change of code occurred during the course of the task. Our classifier was able to discriminate grasp types fairly well in advance of grasping onset. This feature may be important when timing is critical for sending signals to external devices before movement start. Our results suggest that neural signals from the dorsomedial visual pathway can be a good substrate for feeding neural prostheses for prehensile actions.
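The decoding pipeline described in the abstract (population firing rates per trial, a Bayes classifier, separate training and validation sets) can be sketched as follows. This is a minimal illustration only: the neuron counts, grip categories, firing-rate statistics, and the Gaussian naive Bayes variant are all assumptions for the sake of a runnable example, not the actual V6A data or the exact classifier used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for V6A population activity: one firing-rate vector per
# trial (n_neurons values), with a few hypothetical grip types as labels.
n_neurons, n_trials_per_grip, n_grips = 50, 40, 5
means = rng.uniform(5, 30, size=(n_grips, n_neurons))  # grip-specific tuning
X = np.vstack([rng.normal(m, 3.0, size=(n_trials_per_grip, n_neurons)) for m in means])
y = np.repeat(np.arange(n_grips), n_trials_per_grip)

def fit_gaussian_nb(X, y):
    """Estimate per-class mean, variance, and prior from training trials."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) + 1e-6 for c in classes])  # regularized
    prior = np.array([(y == c).mean() for c in classes])
    return classes, mu, var, prior

def predict(model, X):
    """Assign each trial to the class with the highest posterior probability."""
    classes, mu, var, prior = model
    # log P(class | x) ∝ log prior + sum over neurons of log N(x; mu, var)
    ll = -0.5 * (((X[:, None, :] - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=2)
    return classes[np.argmax(ll + np.log(prior), axis=1)]

# Hold-out validation: shuffle trials, train on 80%, test on the remaining 20%.
idx = rng.permutation(len(y))
cut = int(0.8 * len(y))
train, test = idx[:cut], idx[cut:]
model = fit_gaussian_nb(X[train], y[train])
acc = (predict(model, X[test]) == y[test]).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_grips:.2f})")
```

Comparing the decoded accuracy against chance level (1 / number of grip types) mirrors the paper's check that recognition rates are well above chance; the generalization analysis would correspond to fitting on trials from one task epoch and predicting trials from another.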

Language: English
Publisher: Society for Neuroscience
Journal: The Journal of Neuroscience
ISSN: 0270-6474
Electronic ISSN: 1529-2401
Publication dates
Online: 20 Mar 2017
Print: 19 Apr 2017
Publication process dates
Deposited: 08 Mar 2018
Accepted: 22 Feb 2017
Output status: Published
Copyright Statement

Copyright © 2017 the authors.
Copyright of all material published in JNeurosci remains with the authors. The authors grant the Society for Neuroscience an exclusive license to publish their work for the first 6 months. After 6 months the work becomes available to the public to copy, distribute, or display under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Digital Object Identifier (DOI): https://doi.org/10.1523/JNEUROSCI.3077-16.2017
Permalink: https://repository.mdx.ac.uk/item/87897


Related outputs

Affective visuomotor interaction: a functional model for socially competent robot grasping
Chinellato, E., Ferretti, G. and Irving, L. 2019. Affective visuomotor interaction: a functional model for socially competent robot grasping. Martinez-Hernandez, U., Vouloutsi, V., Mura, A., Mangan, M., Minoru, A., Prescott, T. and Verschure, P. (ed.) 8th International Conference, Living Machines 2019. Nara, Japan 09 - 12 Jul 2019 Springer, Cham. pp. 51-62 https://doi.org/10.1007/978-3-030-24741-6_5
The competitive and multi-faceted nature of neural coding in motor imagery: Comment on "Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics" by V. Mohan et al.
Chinellato, E. 2019. The competitive and multi-faceted nature of neural coding in motor imagery: Comment on "Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics" by V. Mohan et al. Physics of life reviews. https://doi.org/10.1016/j.plrev.2019.02.003
Sensorial computing
Varsani, P., Moseley, R., Jones, S., James-Reynolds, C., Chinellato, E. and Augusto, J. 2018. Sensorial computing. in: Filimowicz, M. and Tzankova, V. (ed.) New Directions in Third Wave Human-Computer Interaction: Volume 1 - Technologies Springer. pp. 265-284
Advances in human-computer interactions: methods, algorithms, and applications
Solari, F., Chessa, M., Chinellato, E. and Bresciani, J. 2018. Advances in human-computer interactions: methods, algorithms, and applications. Computational Intelligence and Neuroscience. 2018. https://doi.org/10.1155/2018/4127475
The STRANDS project: long-term autonomy in everyday environments
Hawes, N., Burbridge, C., Jovan, F., Kunze, L., Lacerda, B., Mudrova, L., Young, J., Wyatt, J., Hebesberger, D., Kortner, T., Ambrus, R., Bore, N., Folkesson, J., Jensfelt, P., Beyer, L., Hermans, A., Leibe, B., Aldoma, A., Faulhammer, T., Zillich, M., Vincze, M., Chinellato, E., Al-Omari, M., Duckworth, P., Gatsoulis, Y., Hogg, D., Cohn, A., Dondrup, C., Pulido Fentanes, J., Krajnik, T., Santos, J., Duckett, T. and Hanheide, M. 2017. The STRANDS project: long-term autonomy in everyday environments. IEEE Robotics & Automation Magazine. 24 (3), pp. 146-156. https://doi.org/10.1109/MRA.2016.2636359
An incremental von Mises mixture framework for modelling human activity streaming data
Chinellato, E., Mardia, K., Hogg, D. and Cohn, A. 2017. An incremental von Mises mixture framework for modelling human activity streaming data. International Work-Conference on Time Series Analysis (ITISE 2017). Granada, Spain 18 - 20 Sep 2017 pp. 379-389
Feature space analysis for human activity recognition in smart environments
Chinellato, E., Hogg, D. and Cohn, A. 2016. Feature space analysis for human activity recognition in smart environments. 12th International Conference on Intelligent Environments (IE). London, United Kingdom 14 - 16 Sep 2016 Institute of Electrical and Electronics Engineers (IEEE). pp. 194-197 https://doi.org/10.1109/IE.2016.43
A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot
Antonelli, M., Gibaldi, A., Beuth, F., Duran, A., Canessa, A., Chessa, M., Solari, F., Del Pobil, A., Hamker, F., Chinellato, E. and Sabatini, S. 2014. A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot. IEEE Transactions on Autonomous Mental Development. 6 (4), pp. 259-273. https://doi.org/10.1109/TAMD.2014.2332875
Adaptive saccade controller inspired by the primates' cerebellum
Antonelli, M., Duran, A., Chinellato, E. and Del Pobil, A. 2015. Adaptive saccade controller inspired by the primates' cerebellum. IEEE International Conference on Robotics and Automation (ICRA). Seattle, Washington, USA 26 - 30 May 2015 Institute of Electrical and Electronics Engineers (IEEE). pp. 5048-5053 https://doi.org/10.1109/ICRA.2015.7139901
Learning the visual–oculomotor transformation: effects on saccade control and space representation
Antonelli, M., Duran, A., Chinellato, E. and Del Pobil, A. 2015. Learning the visual–oculomotor transformation: effects on saccade control and space representation. Robotics and Autonomous Systems. 71, pp. 13-22. https://doi.org/10.1016/j.robot.2014.11.018
Motor interference in interactive contexts
Chinellato, E., Castiello, U. and Sartori, L. 2015. Motor interference in interactive contexts. Frontiers in Psychology. 6. https://doi.org/10.3389/fpsyg.2015.00791
The multiform motor cortical output: kinematic, predictive and response coding
Sartori, L., Betti, S., Chinellato, E. and Castiello, U. 2015. The multiform motor cortical output: kinematic, predictive and response coding. Cortex. 70, pp. 169-178. https://doi.org/10.1016/j.cortex.2015.01.019
The visual neuroscience of robotic grasping: achieving sensorimotor skills through dorsal-ventral stream integration
Chinellato, E. and Del Pobil, A. 2016. The visual neuroscience of robotic grasping: achieving sensorimotor skills through dorsal-ventral stream integration. Springer.
Unsupervised grounding of textual descriptions of object features and actions in video
Alomari, M., Chinellato, E., Gatsoulis, Y., Hogg, D. and Cohn, A. 2016. Unsupervised grounding of textual descriptions of object features and actions in video. 15th International Conference Principles of Knowledge Representation and Reasoning (KR 2016). Cape Town, South Africa 25 - 29 Apr 2016 Association for the Advancement of Artificial Intelligence (AAAI). pp. 505-508