Department of Electronic & Computer Engineering - ECE Future Leaders PG Seminar Series

4:00pm - 5:00pm
Lecture Theatre-K (LT-K), Academic Building

Session 1: Multi-sensor fusion for robust and drift-free localization

Localization is an essential functionality for many spatially aware applications, such as autonomous driving, unmanned aerial vehicle navigation, and augmented reality. Among the various approaches, sensor fusion has become increasingly popular in recent years, because it brings about accurate and robust state estimation by leveraging the complementary properties of each sensor.
In this talk, I will introduce our GVINS system, which tightly fuses raw GNSS, camera, and IMU measurements to achieve robust and drift-free localization. Our system greatly suppresses the drift of visual-inertial odometry and preserves local accuracy despite noisy GNSS measurements. Thanks to the tightly coupled design, our system can benefit from the presence of even a single satellite, whereas at least four are required for conventional GNSS estimators.
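To see why a conventional GNSS estimator needs at least four satellites, note that each pseudorange measurement constrains four unknowns: the receiver position (x, y, z) plus the receiver clock bias. The following minimal sketch (illustrative only, not the GVINS implementation; the satellite positions and measurements are synthetic) solves this single-epoch problem with Gauss-Newton:

```python
# Minimal single-epoch GNSS pseudorange positioning sketch.
# Each pseudorange satisfies  rho_i = ||p_sat_i - p_recv|| + c*dt,
# so four unknowns remain: receiver position (x, y, z) and clock bias dt.
# With fewer than four satellites the system is underdetermined.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton over [x, y, z, c*dt] from pseudorange measurements."""
    x = np.zeros(4)  # initial guess: Earth's center, zero clock bias
    for _ in range(iters):
        diff = sat_pos - x[:3]            # (N, 3) receiver-to-satellite vectors
        ranges = np.linalg.norm(diff, axis=1)
        pred = ranges + x[3]              # predicted pseudoranges (clock in meters)
        r = pseudoranges - pred           # residuals
        # Jacobian: negated unit line-of-sight vectors, plus 1 for the clock term
        J = np.hstack([-diff / ranges[:, None], np.ones((len(ranges), 1))])
        x += np.linalg.lstsq(J, r, rcond=None)[0]
    return x[:3], x[3] / C                # position (m), clock bias (s)

# Example with four synthetic satellites at GPS-like distances (meters):
sats = np.array([[ 2.0e7,  0.0,    1.0e7],
                 [-1.5e7,  1.2e7,  1.4e7],
                 [ 0.5e7, -2.0e7,  1.1e7],
                 [-0.8e7, -0.9e7,  2.2e7]])
truth = np.array([1.2e6, -2.3e6, 5.9e6])
rho = np.linalg.norm(sats - truth, axis=1) + 3.0e-7 * C  # +300 ns clock bias
pos, dt = solve_position(sats, rho)  # recovers truth and the ~3.0e-7 s bias
```

A tightly coupled fusion, by contrast, injects each raw pseudorange as one additional residual in a joint visual-inertial optimization, which is why even a single satellite adds useful constraints.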


Session 2: Building an Autonomous Brain-Machine Interface in a Reinforcement Learning Framework

Brain-machine interfaces (BMIs) interpret dynamic neural activity into movement intention without patients' actual limb movements. In a closed-loop BMI scenario, the environment provides external information (reward or sensory feedback) to the subject, and the subject adjusts their neural activity accordingly to control the external device and obtain future reward. Existing BMI tasks are usually pre-defined by experts, simple to accomplish, and demand large amounts of neural data. An ideal BMI system should enable subjects to learn new tasks autonomously following their own intentions and accomplish more complicated tasks in an online framework.
First, we propose an internally rewarded reinforcement learning (RL) framework that reflects the true intention of the subject during task learning. We extract an internal representation of the reward from the medial prefrontal cortex (mPFC) and leverage it to train the RL-based decoder. Our proposed framework achieves performance similar to an externally rewarded decoder and accommodates the time-variant neural patterns that change rapidly during task learning. Then, we propose a task engagement-assisted continuous RL-based decoding framework to improve online learning efficiency. The task engagement signal can be extracted from the mPFC to modulate RL decoder training. The results show that using task engagement improves the decoding performance and better reconstructs the trajectory in online continuous decoding tasks.
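As a rough illustration of the idea (a hypothetical sketch, not the authors' decoder: the linear action-value model, the decoded reward and engagement inputs, and all hyperparameters are illustrative assumptions), an internally rewarded RL decoder could look like this:

```python
# Hypothetical sketch of an internally rewarded, engagement-modulated RL
# decoder: a bandit-style policy maps a neural feature vector to a discrete
# action; the training reward is *decoded* from mPFC activity rather than
# supplied externally, and an engagement estimate scales each update.
import numpy as np

class InternallyRewardedDecoder:
    def __init__(self, n_features, n_actions, lr=0.05, eps=0.1, seed=0):
        self.W = np.zeros((n_actions, n_features))  # linear action values
        self.lr, self.eps = lr, eps
        self.rng = np.random.default_rng(seed)

    def act(self, neural_features):
        """Epsilon-greedy action selection from motor-area features."""
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.W.shape[0]))
        return int(np.argmax(self.W @ neural_features))

    def update(self, neural_features, action, mpfc_reward, engagement):
        """One RL step; engagement in [0, 1] modulates the learning rate,
        so updates during disengaged periods contribute less."""
        td_error = mpfc_reward - self.W[action] @ neural_features
        self.W[action] += self.lr * engagement * td_error * neural_features
```

The design point this sketch captures is that both training signals (reward and engagement) come from the brain itself, so the decoder can keep adapting online without an experimenter-defined reward schedule.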

Speaker(s)/Performer(s):
Mr. CAO Shaozu
HKUST
Speaker(s)/Performer(s):
Mr. SHEN Xiang
HKUST
Language
English
Target audience
Faculty and staff
PG students
Organizer
Department of Electronic & Computer Engineering