Examination Committee
Prof Cameron CAMPBELL, SOSC/HKUST (Chairperson)
Prof Bertram SHI, ECE/HKUST (Thesis Supervisor)
Prof Tetsuya YAGI, Division of Electrical, Electronic and Information Engineering, Osaka University (External Examiner)
Prof Matthew MCKAY, ECE/HKUST
Prof Weichuan YU, ECE/HKUST
Prof Richard H Y SO, IELM/HKUST
Abstract
In this thesis, we extend the framework of efficient coding, which has previously been used to model the development of sensory processing in isolation, to model the development of the perception-action cycle. Our extension combines sparse coding and reinforcement learning so that sensory processing and behavior co-develop to optimize a shared motivational signal: the fidelity of the neural encoding of the sensory input under resource constraints. We suggest that this general principle may form the basis for a unified and integrated explanation of many perception-action loops. We apply the extended framework to several cases of visual development.
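To make the shared-signal idea concrete, the following is a minimal sketch, not the thesis implementation: it assumes a linear sparse generative model with an ISTA-style encoder, a softmax REINFORCE policy over three toy actions, and an invented environment in which one action yields lower-noise input. All names, sizes, and learning rates are illustrative assumptions. The single coding-fidelity signal (negative reconstruction error) both updates the sensory dictionary and rewards the action policy.

```python
# Hypothetical sketch: one scalar signal, the fidelity of a sparse code,
# trains the sensory dictionary and serves as the RL reward for behavior.
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 100))           # dictionary: 64-dim inputs, 100 basis vectors
D /= np.linalg.norm(D, axis=0)               # unit-norm basis vectors
theta = np.zeros(3)                          # preferences over 3 discrete actions
baseline = 0.0                               # running reward baseline for REINFORCE

def encode(x, D, lam=0.1, steps=50, lr=0.1):
    """ISTA-style sparse coding: minimize ||x - D a||^2 + lam * ||a||_1."""
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a = a + lr * D.T @ (x - D @ a)       # gradient step on reconstruction error
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)  # soft threshold
    return a

def observe(action):
    """Stand-in environment: action 0 yields the least noisy sensory input."""
    noise = [0.1, 0.5, 1.0][action]
    coeff = rng.standard_normal(100) * (np.abs(rng.standard_normal(100)) > 1.5)
    return D @ coeff + noise * rng.standard_normal(64)

for t in range(2000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()                     # softmax action policy
    action = rng.choice(3, p=probs)
    x = observe(action)
    a = encode(x, D)
    err = x - D @ a
    reward = -np.sum(err ** 2)               # shared motivational signal: coding fidelity
    D += 0.01 * np.outer(err, a)             # sparse-coding dictionary update
    D /= np.linalg.norm(D, axis=0)
    baseline += 0.05 * (reward - baseline)
    grad = -probs
    grad[action] += 1.0                      # REINFORCE gradient for softmax logits
    theta += 0.001 * (reward - baseline) * grad  # behavior learns from the same signal
```

In this toy setting the policy comes to prefer the low-noise action because it yields inputs the dictionary can encode more faithfully, mirroring how behavior and sensory coding co-develop in the full model.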
First, applying this framework to a model system consisting of an active eye behaving in a time-varying environment, we find that this generic principle leads to the simultaneous development of tracking behavior and of model neurons whose properties are similar to those of primary visual cortical neurons. Second, we apply the framework to the joint development of disparity and motion tuning in the visual cortex and of optokinetic and vergence eye movement behavior. The framework accounts for the importance of normal vergence control and binocular vision in achieving normal monocular optokinetic nystagmus (OKN). Because the model includes behavior, we can simulate the same perturbations as performed in past experiments, such as artificially induced strabismus. The proposed model agrees both qualitatively and quantitatively with a number of findings from the literature. Third, we apply the framework to model the development of visual-vestibular interaction. Our model accounts for experimental results on how the gains of the vestibulo-ocular reflex (VOR) and the optokinetic response (OKR) evolve during development. Finally, we integrate multiple sensory cues and apply the framework to a robotic system. We demonstrate that image stabilization benefits from integrating the different sensory cues: instead of assigning fixed weights to the cues, our model learns the weights automatically.
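As one concrete illustration of the last point, the sketch below shows a least-mean-squares style rule that learns cue weights online from the stabilization error itself, rather than fixing them by hand. The two-cue setup, the noise levels, and the squared-slip objective are assumptions chosen for illustration, not the robotic implementation.

```python
# Hypothetical sketch: weights on two noisy velocity cues (visual and
# vestibular stand-ins) are adapted online to minimize retinal slip.
import numpy as np

rng = np.random.default_rng(1)
w = np.array([0.5, 0.5])                     # initial, hand-set cue weights

for t in range(5000):
    head_vel = rng.standard_normal()         # true head velocity
    cues = head_vel + np.array([0.1, 0.4]) * rng.standard_normal(2)  # cue 0 is less noisy
    estimate = w @ cues                      # weighted combination of the two cues
    slip = head_vel - estimate               # residual retinal slip after counter-rotation
    w += 0.01 * 2 * slip * cues              # gradient descent on slip**2 w.r.t. w
print(w)                                     # weights shift toward the more reliable cue
```

Because the update follows the gradient of the squared slip, the weight on the lower-noise cue grows at the expense of the noisier one, so the combination is learned from the error signal rather than specified in advance.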