The multiform motor cortical output: kinematic, predictive and response coding

Article


Sartori, L., Betti, S., Chinellato, E. and Castiello, U. 2015. The multiform motor cortical output: kinematic, predictive and response coding. Cortex. 70, pp. 169-178. https://doi.org/10.1016/j.cortex.2015.01.019
Type: Article
Title: The multiform motor cortical output: kinematic, predictive and response coding
Authors: Sartori, L., Betti, S., Chinellato, E. and Castiello, U.
Abstract

Observing actions performed by others entails a subliminal activation of primary motor cortex reflecting the components encoded in the observed action. One of the most debated issues concerns the role of this output: Is it a mere replica of the incoming flow of information (kinematic coding), is it oriented to anticipate the forthcoming events (predictive coding), or is it aimed at responding in a suitable fashion to the actions of others (response coding)? The aim of the present study was to disentangle the relative contribution of these three levels and unify them into an integrated view of cortical motor coding. We combined transcranial magnetic stimulation (TMS) and electromyography recordings at different timings to probe the excitability of corticospinal projections to upper and lower limb muscles of participants observing a soccer player performing: (i) a penalty kick straight in their direction and then coming to a full stop, (ii) a penalty kick straight in their direction and then continuing to run, (iii) a penalty kick to the side and then continuing to run. The results show a modulation of the observer's corticospinal excitability in different effectors at different times, reflecting a multiplicity of motor coding. The internal replica of the observed action, the predictive activation, and the adaptive integration of congruent and non-congruent responses to the actions of others can coexist and are not mutually exclusive. Such a view offers reconciliation among different (and apparently divergent) frameworks in the action observation literature, and will promote a more complete and integrated understanding of recent findings on motor simulation, motor resonance and automatic imitation.

Keywords: Action observation; Motor resonance; Transcranial magnetic stimulation; Motor evoked potentials
Publisher: Elsevier
Journal: Cortex
ISSN: 0010-9452
Publication dates
Online: 10 Feb 2015
Print: 01 Sep 2015
Publication process dates
Deposited: 05 May 2016
Accepted: 27 Jan 2015
Output status: Published
Accepted author manuscript
License
Copyright Statement

© 2015. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/

Additional information

Special issue: Neuro-cognitive mechanisms of social interaction

Digital Object Identifier (DOI): https://doi.org/10.1016/j.cortex.2015.01.019
Language: English
Permalink: https://repository.mdx.ac.uk/item/86617


Related outputs

Affective visuomotor interaction: a functional model for socially competent robot grasping
Chinellato, E., Ferretti, G. and Irving, L. 2019. Affective visuomotor interaction: a functional model for socially competent robot grasping. Martinez-Hernandez, U., Vouloutsi, V., Mura, A., Mangan, M., Minoru, A., Prescott, T. and Verschure, P. (ed.) 8th International Conference, Living Machines 2019. Nara, Japan 09 - 12 Jul 2019 Springer, Cham. pp. 51-62 https://doi.org/10.1007/978-3-030-24741-6_5
The competitive and multi-faceted nature of neural coding in motor imagery: Comment on "Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics" by V. Mohan et al.
Chinellato, E. 2019. The competitive and multi-faceted nature of neural coding in motor imagery: Comment on "Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics" by V. Mohan et al. Physics of life reviews. https://doi.org/10.1016/j.plrev.2019.02.003
Advances in human-computer interactions: methods, algorithms, and applications
Solari, F., Chessa, M., Chinellato, E. and Bresciani, J. 2018. Advances in human-computer interactions: methods, algorithms, and applications. Computational Intelligence and Neuroscience. 2018. https://doi.org/10.1155/2018/4127475
Feature space analysis for human activity recognition in smart environments
Chinellato, E., Hogg, D. and Cohn, A. 2016. Feature space analysis for human activity recognition in smart environments. 12th International Conference on Intelligent Environments (IE). London, United Kingdom 14 - 16 Sep 2016 Institute of Electrical and Electronics Engineers (IEEE). pp. 194-197 https://doi.org/10.1109/IE.2016.43
Sensorial computing
Varsani, P., Moseley, R., Jones, S., James-Reynolds, C., Chinellato, E. and Augusto, J. 2018. Sensorial computing. in: Filimowicz, M. and Tzankova, V. (ed.) New Directions in Third Wave Human-Computer Interaction: Volume 1 - Technologies Cham, Switzerland Springer. pp. 265-284
The STRANDS project: long-term autonomy in everyday environments
Hawes, N., Burbridge, C., Jovan, F., Kunze, L., Lacerda, B., Mudrova, L., Young, J., Wyatt, J., Hebesberger, D., Kortner, T., Ambrus, R., Bore, N., Folkesson, J., Jensfelt, P., Beyer, L., Hermans, A., Leibe, B., Aldoma, A., Faulhammer, T., Zillich, M., Vincze, M., Chinellato, E., Al-Omari, M., Duckworth, P., Gatsoulis, Y., Hogg, D., Cohn, A., Dondrup, C., Pulido Fentanes, J., Krajnik, T., Santos, J., Duckett, T. and Hanheide, M. 2017. The STRANDS project: long-term autonomy in everyday environments. IEEE Robotics & Automation Magazine. 24 (3), pp. 146-156. https://doi.org/10.1109/MRA.2016.2636359
Decoding information for grasping from the macaque dorsomedial visual stream
Filippini, M., Breveglieri, R., Akhras, M., Bosco, A., Chinellato, E. and Fattori, P. 2017. Decoding information for grasping from the macaque dorsomedial visual stream. The Journal of Neuroscience. 37 (16), pp. 4311-4322. https://doi.org/10.1523/JNEUROSCI.3077-16.2017
An incremental von Mises mixture framework for modelling human activity streaming data
Chinellato, E., Mardia, K., Hogg, D. and Cohn, A. 2017. An incremental von Mises mixture framework for modelling human activity streaming data. International Work-Conference on Time Series Analysis (ITISE 2017). Granada, Spain 18 - 20 Sep 2017 pp. 379-389
Adaptive saccade controller inspired by the primates' cerebellum
Antonelli, M., Duran, A., Chinellato, E. and Del Pobil, A. 2015. Adaptive saccade controller inspired by the primates' cerebellum. IEEE International Conference on Robotics and Automation (ICRA). Seattle, Washington, USA 26 - 30 May 2015 Institute of Electrical and Electronics Engineers (IEEE). pp. 5048-5053 https://doi.org/10.1109/ICRA.2015.7139901
Motor interference in interactive contexts
Chinellato, E., Castiello, U. and Sartori, L. 2015. Motor interference in interactive contexts. Frontiers in Psychology. 6. https://doi.org/10.3389/fpsyg.2015.00791
The visual neuroscience of robotic grasping: achieving sensorimotor skills through dorsal-ventral stream integration
Chinellato, E. and Del Pobil, A. 2016. The visual neuroscience of robotic grasping: achieving sensorimotor skills through dorsal-ventral stream integration. Springer.
Unsupervised grounding of textual descriptions of object features and actions in video
Alomari, M., Chinellato, E., Gatsoulis, Y., Hogg, D. and Cohn, A. 2016. Unsupervised grounding of textual descriptions of object features and actions in video. 15th International Conference Principles of Knowledge Representation and Reasoning (KR 2016). Cape Town, South Africa 25 - 29 Apr 2016 Association for the Advancement of Artificial Intelligence (AAAI). pp. 505-508
Learning the visual–oculomotor transformation: effects on saccade control and space representation
Antonelli, M., Duran, A., Chinellato, E. and Del Pobil, A. 2015. Learning the visual–oculomotor transformation: effects on saccade control and space representation. Robotics and Autonomous Systems. 71, pp. 13-22. https://doi.org/10.1016/j.robot.2014.11.018
A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot
Antonelli, M., Gibaldi, A., Beuth, F., Duran, A., Canessa, A., Chessa, M., Solari, F., Del Pobil, A., Hamker, F., Chinellato, E. and Sabatini, S. 2014. A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot. IEEE Transactions on Autonomous Mental Development. 6 (4), pp. 259-273. https://doi.org/10.1109/TAMD.2014.2332875