Learning continuous grasp affordances by sensorimotor exploration
Publication: 3568638
DOI: 10.1007/978-3-642-05181-4_19
zbMATH Open: 1188.68297
OpenAlex: W1883625788
MaRDI QID: Q3568638
FDO: Q3568638
Authors:
Publication date: 15 June 2010
Published in: Studies in Computational Intelligence
Full work available at URL: https://doi.org/10.1007/978-3-642-05181-4_19
Recommendations
- Relational affordance learning for task-dependent robot grasping
- Learning to grasp and extract affordances: the integrated learning of grasps and affordances (ILGA) model
- Learning visual representations for interactive systems
- Learning visuomotor transformations for gaze-control and grasping
- Learning to recognize and grasp objects
Cited In (9)
- Learning to Exploit Proximal Force Sensing: A Comparison Approach
- Relational affordance learning for task-dependent robot grasping
- Title not available
- Learning to grasp and extract affordances: the integrated learning of grasps and affordances (ILGA) model
- Developing haptic and visual perceptual categories for reaching and grasping with a humanoid robot
- Learning visuomotor transformations for gaze-control and grasping
- 3D grasp saliency analysis via deep shape correspondence
- Floating visual grasp of unknown objects using an elastic reconstruction surface
- Learning visual representations for interactive systems