Developmental Model of Robot Sensorimotor Integration
2pm
Room 4472 (Lifts 25-26), 4/F Academic Building, HKUST


Examination Committee

Prof Xueqing ZHANG, CIVL/HKUST (Chairperson)
Prof Bertram E SHI, ECE/HKUST (Thesis Supervisor)
Prof Tetsuya YAGI, Division of Electrical, Electronic and Information Engineering, Osaka University (External Examiner)
Prof Jianan QU, ECE/HKUST
Prof Weichuan YU, ECE/HKUST
Prof Richard H Y SO, IELM/HKUST


Abstract

Sensorimotor integration is the coupling between the sensory and motor systems. In robotics, it has traditionally relied on manual calibration. Research in neuroscience and psychology suggests that sensorimotor integration is mediated by the central nervous system and develops from infancy onward. Inspired by these findings, this thesis investigates a developmental approach to robotic sensorimotor integration. Specifically, we propose algorithms that enable a robot to learn visuomotor coordination during the action-perception cycle.

First, we studied the problem of learning robot self-identification. A set of local linear predictors is trained to predict visual motion features from the robot's motor commands. By measuring how well each predictor accounts for the observed motion, the robot can visually identify its own body in a cluttered environment with motion noise.
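As an illustrative sketch only (not code from the thesis), the idea can be captured by fitting one least-squares linear model per image patch, mapping motor commands to that patch's motion feature, and labeling as "self" the patches whose motion is well predicted. All names, the predictability score, and the threshold below are hypothetical choices for illustration.

```python
import numpy as np

class LocalLinearPredictor:
    """Least-squares linear map from motor commands to one patch's motion feature."""

    def __init__(self, n_motor):
        self.W = np.zeros(n_motor)   # one weight per motor dimension
        self.b = 0.0                 # bias term

    def fit(self, motor_cmds, flow_features):
        # motor_cmds: (T, n_motor); flow_features: (T,) for one image patch
        X = np.hstack([motor_cmds, np.ones((len(motor_cmds), 1))])
        coef, *_ = np.linalg.lstsq(X, flow_features, rcond=None)
        self.W, self.b = coef[:-1], coef[-1]

    def predictability(self, motor_cmds, flow_features):
        # Monotone decreasing in prediction error; high values suggest "self"
        pred = motor_cmds @ self.W + self.b
        err = np.mean((pred - flow_features) ** 2)
        return 1.0 / (1.0 + err)

def identify_self(motor_cmds, patch_flows, threshold=0.8):
    """Label each patch as robot body (True) or background (False).

    patch_flows: (T, n_patches) motion features, one column per patch."""
    labels = []
    for flows in patch_flows.T:
        p = LocalLinearPredictor(motor_cmds.shape[1])
        p.fit(motor_cmds, flows)
        labels.append(p.predictability(motor_cmds, flows) > threshold)
    return np.array(labels)
```

Patches belonging to the environment move independently of the motor command, so their predictors fit poorly and fall below the threshold, which is how clutter and motion noise get rejected in this toy version.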

Second, we investigated the problem of learning visuomotor transformations. We first proposed an efficient neural population coding algorithm that improves the learning of the robot's head-to-arm coordinate transformation. We then extended the point-marker-based end-effector representation to a planar-region-based representation, in which the visuomotor transformation is learned by maximizing spatiotemporally local image consistency. Experiments demonstrate that the shape of the region adapts to the actual shape of the robot arm's end effector.
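A minimal sketch of the population-coding idea, under assumptions of my own (Gaussian tuning curves, a linear readout, and a made-up 2-DOF head-to-arm mapping standing in for data gathered during the action-perception cycle):

```python
import numpy as np

rng = np.random.default_rng(0)

def population_code(x, centers, sigma=0.1):
    """Encode a scalar pose variable as Gaussian tuning-curve activations."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

n_units = 20
centers = np.linspace(-1.0, 1.0, n_units)

def encode(pose):
    # Concatenate one population per degree of freedom
    return np.concatenate([population_code(q, centers) for q in pose])

# Hypothetical ground-truth head-to-arm transformation used to fake training pairs
head = rng.uniform(-1, 1, size=(500, 2))
arm = np.tanh(head @ np.array([[0.9, 0.2], [-0.3, 1.1]]))

A = np.stack([encode(h) for h in head])      # (500, 2 * n_units) population codes
W, *_ = np.linalg.lstsq(A, arm, rcond=None)  # linear readout on the code

test = np.array([0.3, -0.5])
print("predicted arm pose:", encode(test) @ W)
```

The point of the encoding is that a nonlinear coordinate transformation becomes (approximately) linear in the population activations, so a simple readout can learn it from paired observations.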

Finally, we integrated robot self-body identification, structure inference, and kinematic model learning into a unified framework. Raw kinematic models are represented by a set of rigid transformations and are learned by a stochastic-gradient-descent-based iterative closest point algorithm using point cloud data collected by depth sensors. The structure is segmented according to a minimum matching-error criterion with spatial continuity constraints. The points corresponding to the robot body and the relationships between the joints can be inferred from the learned transformations. Our experiments show that the learned kinematic model can guide the robot arm to reach a selected target.
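To make the SGD-based iterative closest point step concrete, here is a toy 2-D version of my own devising (the thesis works with 3-D depth-sensor data and multiple rigid transformations): a single rigid transform (rotation angle theta, translation t) is fit by descending the gradient of nearest-neighbor distances on random mini-batches.

```python
import numpy as np

rng = np.random.default_rng(1)

def sgd_icp(source, target, iters=2000, lr=0.01, batch=32):
    """Fit a 2-D rigid transform (theta, t) aligning source to target
    by stochastic gradient descent on nearest-neighbor residuals."""
    theta, t = 0.0, np.zeros(2)
    for _ in range(iters):
        idx = rng.choice(len(source), size=batch, replace=False)
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        moved = source[idx] @ R.T + t
        # Brute-force nearest-neighbor correspondences, for clarity
        d = np.linalg.norm(moved[:, None] - target[None], axis=2)
        nn = target[d.argmin(axis=1)]
        resid = moved - nn
        # Gradients of 0.5 * ||R p + t - q||^2 w.r.t. theta and t
        dR = np.array([[-s, -c], [c, -s]])
        g_theta = np.mean(np.sum(resid * (source[idx] @ dR.T), axis=1))
        g_t = resid.mean(axis=0)
        theta -= lr * g_theta
        t -= lr * g_t
    return theta, t

# Sanity check: recover a known rotation and translation
pts = rng.uniform(-1, 1, size=(200, 2))
c, s = np.cos(0.4), np.sin(0.4)
tgt = pts @ np.array([[c, -s], [s, c]]).T + np.array([0.2, -0.1])
print(sgd_icp(pts, tgt))  # should approach (0.4, [0.2, -0.1])
```

Running this per rigid body, with segment labels chosen to minimize matching error under spatial continuity, is the flavor of the unified framework described above, though the segmentation criterion itself is not shown here.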

Speakers / Performers:
Mr Tao ZHOU
Language
English