Learning to grasp and extract affordances: the integrated learning of grasps and affordances (ILGA) model
From MaRDI portal
Publication: 310164
DOI: 10.1007/S00422-015-0666-2
zbMATH Open: 1345.92036
DBLP: journals/bc/BonaiutoA15
OpenAlex: W2237933813
Wikidata: Q27321263
Scholia: Q27321263
MaRDI QID: Q310164
FDO: Q310164
Authors: James Bonaiuto, Michael A. Arbib
Publication date: 8 September 2016
Published in: Biological Cybernetics
Full work available at URL: https://doi.org/10.1007/s00422-015-0666-2
Recommendations
- Learning continuous grasp affordances by sensorimotor exploration
- Learning visuomotor transformations for gaze-control and grasping
- Schema design and implementation of the grasp-related mirror neuron system
- Relational affordance learning for task-dependent robot grasping
- From motor to sensory processing in mirror neuron computational modelling
Cites Work
- Schema design and implementation of the grasp-related mirror neuron system
- A computational model of action selection in the basal ganglia. I: A new functional anatomy
- A biocybernetic method to learn hand grasping posture
- Multiple Model-Based Reinforcement Learning
- Population Coding and Decoding in a Neural Field: A Computational Study
- Reinforcement learning with replacing eligibility traces
Cited In (2)