Learning continuous grasp affordances by sensorimotor exploration
Publication:3568638
Recommendations
- Relational affordance learning for task-dependent robot grasping
- Learning to grasp and extract affordances: the integrated learning of grasps and affordances (ILGA) model
- Learning visual representations for interactive systems
- Learning visuomotor transformations for gaze-control and grasping
- Learning to recognize and grasp objects
Cited in (10)
- Scientific article (no title available); zbMATH DE number 1977174
- Learning visuomotor transformations for gaze-control and grasping
- Object-agnostic affordance categorization via unsupervised learning of graph embeddings
- 3D grasp saliency analysis via deep shape correspondence
- Relational affordance learning for task-dependent robot grasping
- Learning to Exploit Proximal Force Sensing: A Comparison Approach
- Developing haptic and visual perceptual categories for reaching and grasping with a humanoid robot
- Floating visual grasp of unknown objects using an elastic reconstruction surface
- Learning visual representations for interactive systems
- Learning to grasp and extract affordances: the integrated learning of grasps and affordances (ILGA) model