Affective visuomotor interaction: a functional model for socially competent robot grasping

Conference paper


Chinellato, E., Ferretti, G. and Irving, L. 2019. Affective visuomotor interaction: a functional model for socially competent robot grasping. Martinez-Hernandez, U., Vouloutsi, V., Mura, A., Mangan, M., Asada, M., Prescott, T. and Verschure, P. (ed.) 8th International Conference, Living Machines 2019. Nara, Japan 09 - 12 Jul 2019 Springer, Cham. pp. 51-62 https://doi.org/10.1007/978-3-030-24741-6_5
Type: Conference paper
Title: Affective visuomotor interaction: a functional model for socially competent robot grasping
Authors: Chinellato, E., Ferretti, G. and Irving, L.
Abstract

In the context of human-robot social interactions, the ability to interpret the emotional value of objects and actions is critical if we wish robots to achieve truly meaningful interchanges with human partners. We review here the most significant findings related to reward management and value assignment in the primate brain, with particular regard to the prefrontal cortex. Based on these findings, we propose a novel model of vision-based grasping in which the context-dependent emotional value of the available options (e.g. damageable or dangerous items) is taken into account when interacting with objects in the real world. The model is both biologically plausible and suitable for application to a robotic setup. We provide a testing framework along with implementation guidelines.
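
The abstract describes the model only at a functional level. As a purely illustrative sketch of the idea (the class, score ranges, weighting scheme and names below are assumptions, not the authors' implementation), a context-dependent emotional value could modulate an otherwise purely visuomotor grasp-selection score roughly as follows:

from dataclasses import dataclass

# Hypothetical illustration: blend a geometric grasp-quality score from a
# vision-based planner with a context-dependent emotional value of the
# target object (e.g. negative for damageable or dangerous items).
# All names, ranges and weights are assumptions, not the published model.

@dataclass
class GraspCandidate:
    label: str                 # object / grip configuration being considered
    geometric_quality: float   # in [0, 1], from the visuomotor pipeline
    emotional_value: float     # in [-1, 1], context-dependent appraisal

def select_grasp(candidates, affect_weight=0.5):
    """Return the candidate with the best combined score.

    affect_weight sets how strongly the affective appraisal modulates
    the purely geometric grasp quality.
    """
    def score(c):
        return (1.0 - affect_weight) * c.geometric_quality + affect_weight * c.emotional_value
    return max(candidates, key=score)

if __name__ == "__main__":
    options = [
        GraspCandidate("ceramic mug (fragile)", geometric_quality=0.9, emotional_value=-0.6),
        GraspCandidate("plastic cup", geometric_quality=0.7, emotional_value=0.2),
    ]
    print("Selected:", select_grasp(options, affect_weight=0.5).label)

With affect_weight=0.5 the fragile mug scores 0.15 and the plastic cup 0.45, so the safer option is chosen despite its lower geometric quality; raising or lowering affect_weight shifts the balance between affective and visuomotor criteria.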

Language: English
Conference: 8th International Conference, Living Machines 2019
Page range: 51-62
Editors: Martinez-Hernandez, U., Vouloutsi, V., Mura, A., Mangan, M., Asada, M., Prescott, T. and Verschure, P.
ISSN: 0302-9743
ISBN
Hardcover: 9783030247409
Electronic: 9783030247416
Publisher: Springer, Cham
Publication dates
Online: 06 Jul 2019
Print: 27 Jul 2019
Publication process dates
Deposited: 27 Jan 2020
Accepted: 15 May 2019
Output status: Published
Accepted author manuscript
Copyright Statement

The final authenticated version is available online at https://doi.org/10.1007/978-3-030-24741-6_5.

Additional information

Paper published as:
Chinellato E., Ferretti G., Irving L. (2019) Affective Visuomotor Interaction: A Functional Model for Socially Competent Robot Grasping. In: Martinez-Hernandez U. et al. (eds) Biomimetic and Biohybrid Systems. Living Machines 2019. Lecture Notes in Computer Science, vol 11556. Springer, Cham.

Digital Object Identifier (DOI): https://doi.org/10.1007/978-3-030-24741-6_5
Book title: Living Machines 2019. Lecture Notes in Computer Science (LNCS, vol 11556)
Permalink: https://repository.mdx.ac.uk/item/88vz2


Related outputs

The competitive and multi-faceted nature of neural coding in motor imagery: Comment on "Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics" by V. Mohan et al.
Chinellato, E. 2019. The competitive and multi-faceted nature of neural coding in motor imagery: Comment on "Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics" by V. Mohan et al. Physics of Life Reviews. https://doi.org/10.1016/j.plrev.2019.02.003
Sensorial computing
Varsani, P., Moseley, R., Jones, S., James-Reynolds, C., Chinellato, E. and Augusto, J. 2018. Sensorial computing. in: Filimowicz, M. and Tzankova, V. (ed.) New Directions in Third Wave Human-Computer Interaction: Volume 1 - Technologies. Springer. pp. 265-284
Advances in human-computer interactions: methods, algorithms, and applications
Solari, F., Chessa, M., Chinellato, E. and Bresciani, J. 2018. Advances in human-computer interactions: methods, algorithms, and applications. Computational Intelligence and Neuroscience. 2018. https://doi.org/10.1155/2018/4127475
Teaching statistics using dance and movement
Irving, L. 2015. Teaching statistics using dance and movement. Frontiers in Psychology. 6, pp. 1-3. https://doi.org/10.3389/fpsyg.2015.00050
The STRANDS project: long-term autonomy in everyday environments
Hawes, N., Burbridge, C., Jovan, F., Kunze, L., Lacerda, B., Mudrova, L., Young, J., Wyatt, J., Hebesberger, D., Kortner, T., Ambrus, R., Bore, N., Folkesson, J., Jensfelt, P., Beyer, L., Hermans, A., Leibe, B., Aldoma, A., Faulhammer, T., Zillich, M., Vincze, M., Chinellato, E., Al-Omari, M., Duckworth, P., Gatsoulis, Y., Hogg, D., Cohn, A., Dondrup, C., Pulido Fentanes, J., Krajnik, T., Santos, J., Duckett, T. and Hanheide, M. 2017. The STRANDS project: long-term autonomy in everyday environments. IEEE Robotics & Automation Magazine. 24 (3), pp. 146-156. https://doi.org/10.1109/MRA.2016.2636359
Decoding information for grasping from the macaque dorsomedial visual stream
Filippini, M., Breveglieri, R., Akhras, M., Bosco, A., Chinellato, E. and Fattori, P. 2017. Decoding information for grasping from the macaque dorsomedial visual stream. The Journal of Neuroscience. 37 (16), pp. 4311-4322. https://doi.org/10.1523/JNEUROSCI.3077-16.2017
An incremental von Mises mixture framework for modelling human activity streaming data
Chinellato, E., Mardia, K., Hogg, D. and Cohn, A. 2017. An incremental von Mises mixture framework for modelling human activity streaming data. International Work-Conference on Time Series Analysis (ITISE 2017). Granada, Spain 18 - 20 Sep 2017 pp. 379-389
Feature space analysis for human activity recognition in smart environments
Chinellato, E., Hogg, D. and Cohn, A. 2016. Feature space analysis for human activity recognition in smart environments. 12th International Conference on Intelligent Environments (IE). London, United Kingdom 14 - 16 Sep 2016 Institute of Electrical and Electronics Engineers (IEEE). pp. 194-197 https://doi.org/10.1109/IE.2016.43
A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot
Antonelli, M., Gibaldi, A., Beuth, F., Duran, A., Canessa, A., Chessa, M., Solari, F., Del Pobil, A., Hamker, F., Chinellato, E. and Sabatini, S. 2014. A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot. IEEE Transactions on Autonomous Mental Development. 6 (4), pp. 259-273. https://doi.org/10.1109/TAMD.2014.2332875
Adaptive saccade controller inspired by the primates' cerebellum
Antonelli, M., Duran, A., Chinellato, E. and Del Pobil, A. 2015. Adaptive saccade controller inspired by the primates' cerebellum. IEEE International Conference on Robotics and Automation (ICRA). Seattle, Washington, USA 26 - 30 May 2015 Institute of Electrical and Electronics Engineers (IEEE). pp. 5048-5053 https://doi.org/10.1109/ICRA.2015.7139901
Learning the visual–oculomotor transformation: effects on saccade control and space representation
Antonelli, M., Duran, A., Chinellato, E. and Del Pobil, A. 2015. Learning the visual–oculomotor transformation: effects on saccade control and space representation. Robotics and Autonomous Systems. 71, pp. 13-22. https://doi.org/10.1016/j.robot.2014.11.018
Motor interference in interactive contexts
Chinellato, E., Castiello, U. and Sartori, L. 2015. Motor interference in interactive contexts. Frontiers in Psychology. 6. https://doi.org/10.3389/fpsyg.2015.00791
The multiform motor cortical output: kinematic, predictive and response coding
Sartori, L., Betti, S., Chinellato, E. and Castiello, U. 2015. The multiform motor cortical output: kinematic, predictive and response coding. Cortex. 70, pp. 169-178. https://doi.org/10.1016/j.cortex.2015.01.019
The visual neuroscience of robotic grasping: achieving sensorimotor skills through dorsal-ventral stream integration
Chinellato, E. and Del Pobil, A. 2016. The visual neuroscience of robotic grasping: achieving sensorimotor skills through dorsal-ventral stream integration. Springer.
Unsupervised grounding of textual descriptions of object features and actions in video
Alomari, M., Chinellato, E., Gatsoulis, Y., Hogg, D. and Cohn, A. 2016. Unsupervised grounding of textual descriptions of object features and actions in video. 15th International Conference Principles of Knowledge Representation and Reasoning (KR 2016). Cape Town, South Africa 25 - 29 Apr 2016 Association for the Advancement of Artificial Intelligence (AAAI). pp. 505-508
Creativity, imagery and schizotypy: an exploration of similarities in cognitive processing
Irving, L. 2015. Creativity, imagery and schizotypy: an exploration of similarities in cognitive processing. PhD thesis. Middlesex University, School of Science and Technology: Psychology
The image control and recognition task: a performance-based measure of imagery control
Irving, L., Barry, R., Le Boutillier, N. and Westley, D. 2011. The image control and recognition task: a performance-based measure of imagery control. Journal of Mental Imagery. 35 (3 & 4), pp. 67-80.