AI Thrust Seminar | Computer Vision with Omnidirectional and Event Cameras
Supporting the below United Nations Sustainable Development Goals:
Over the past decade, advances in deep learning have brought tremendous progress in vision-based scene perception. However, many difficulties remain due to the limitations of conventional camera sensors, such as a narrow field of view, high latency, and low dynamic range. Although much algorithmic research has been done to overcome these limitations, problems that originate at the sensor level remain very challenging to solve.
To address these problems more effectively, new vision sensors such as event cameras and omnidirectional cameras can be considered. Event cameras have many advantages over traditional cameras, such as low latency, high temporal resolution, and high dynamic range, while omnidirectional cameras offer a much wider field of view. Exploiting these new sensors, however, requires the development of new algorithms in turn. This talk will explain our recent works on sensing and perceiving the environment with event cameras and omnidirectional cameras and introduce a few applications using those cameras.
Kuk-Jin Yoon is a tenured associate professor in the Department of Mechanical Engineering at Korea Advanced Institute of Science and Technology (KAIST), leading the Visual Intelligence Laboratory. He is also an affiliated professor in the Kim Jaechul Graduate School of AI, the Cho Chun Shik Graduate School of Green Transportation, the Robotics Program, and the Division of Future Vehicle at KAIST.
He received the B.S., M.S., and Ph.D. degrees in Electrical Engineering and Computer Science from KAIST in 1998, 2000, and 2006, respectively. He was a post-doctoral fellow in the PERCEPTION team at INRIA Grenoble, France, from 2006 to 2008, and an assistant/associate professor in the School of Electrical Engineering and Computer Science at Gwangju Institute of Science and Technology (GIST), Korea, from 2008 to 2018. In addition, he was a technical adviser to the Visual Display Division at Samsung Electronics, the Mobility Team at NAVER Labs, and Avikus, and he is currently a technical adviser at 42dot.ai.
His research interests cover the main topics in computer vision and machine learning, including stereo vision and 3D reconstruction, structure-from-motion, multi-object tracking, visual odometry, SLAM, semantic segmentation, and event-camera- and omnidirectional-camera-based vision for autonomous driving.