Learning-based Video Super-Resolution
3:30pm
Room 1505 (Lifts 25-26), 1/F Academic Building, HKUST

Supporting the below United Nations Sustainable Development Goals:

Examination Committee

Prof Huamin QU, CSE/HKUST (Chairperson)
Prof Dit-Yan YEUNG, CSE/HKUST (Thesis Supervisor)
Prof Shaojie SHEN, ECE/HKUST


Abstract

Video super-resolution (VSR) aims to predict a high-resolution (HR) video sequence from a low-resolution (LR) input sequence. Over the past several decades, many researchers have applied multi-image super-resolution (MISR) methods to VSR; these methods do not consider the temporal correlations between the reconstructed HR frames, and they demand highly accurate motion estimation, which prevents them from handling complex motion well. In our work, we use bidirectional Convolutional Long Short-Term Memory (ConvLSTM) networks for sequence modelling in VSR, which avoids explicit motion estimation and models temporal dependencies better than the state-of-the-art VSR method, BRCN. To further improve super-resolution performance, we propose an adaptation framework that exploits the test data's self-information to adapt our generic model to the concrete test data. Experimental results show that our bidirectional ConvLSTM achieves better VSR performance in the spatiotemporal domain than BRCN, and that the adaptation framework further enhances these results.
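To illustrate the core idea, the following is a minimal, hypothetical NumPy sketch of a bidirectional ConvLSTM pass over a frame sequence: each frame's gates are computed by convolving the input and the previous hidden state, and forward and backward hidden maps are stacked per frame. It is not the thesis implementation (kernel sizes, channel counts, and the HR reconstruction head are omitted); it only shows how bidirectional recurrence captures temporal context without motion estimation.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D correlation (single channel, for clarity)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Single-channel ConvLSTM cell with 3x3 gate convolutions (illustrative)."""
    def __init__(self, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        # one input kernel and one hidden-state kernel per gate (i, f, o, g)
        self.Wx = rng.standard_normal((4, ksize, ksize)) * 0.1
        self.Wh = rng.standard_normal((4, ksize, ksize)) * 0.1

    def step(self, x, h, c):
        gates = [conv2d_same(x, self.Wx[g]) + conv2d_same(h, self.Wh[g])
                 for g in range(4)]
        i, f, o = sigmoid(gates[0]), sigmoid(gates[1]), sigmoid(gates[2])
        g = np.tanh(gates[3])
        c = f * c + i * g          # cell state carries temporal context
        h = o * np.tanh(c)
        return h, c

def bidirectional_convlstm(frames, fwd, bwd):
    """Run forward and backward ConvLSTM passes over the frame sequence
    and stack the two hidden maps per frame (first axis)."""
    H, W = frames[0].shape
    h = c = np.zeros((H, W))
    fwd_states = []
    for x in frames:               # forward pass: past -> future
        h, c = fwd.step(x, h, c)
        fwd_states.append(h)
    h = c = np.zeros((H, W))
    bwd_states = []
    for x in reversed(frames):     # backward pass: future -> past
        h, c = bwd.step(x, h, c)
        bwd_states.append(h)
    bwd_states.reverse()
    return [np.stack([f_, b_]) for f_, b_ in zip(fwd_states, bwd_states)]

frames = [np.random.default_rng(t).standard_normal((8, 8)) for t in range(5)]
out = bidirectional_convlstm(frames, ConvLSTMCell(seed=1), ConvLSTMCell(seed=2))
print(len(out), out[0].shape)
```

In a full VSR model, the stacked forward/backward features for each frame would be fed to a reconstruction layer that predicts the HR frame, so every output frame is conditioned on both earlier and later LR frames.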

Speaker / Performer:
Mr Lei XIONG
Language
English