AIoT and DSA Seminar | Knowledge Distillation: Towards Efficient and Compact Neural Networks   

9:30am - 10:30am
Location: Room E3-202, Zoom ID: 842 2904 7286, Passcode: iott

Deep neural networks usually have a large number of parameters, which makes them powerful in representation learning but also limits their deployment in real-world applications. Knowledge distillation, which aims to transfer knowledge from an over-parameterized teacher model to an efficient student model, has become one of the most popular methods for neural network compression and acceleration. In this talk, I will introduce my work on knowledge distillation over the last five years, which focuses on two aspects: the fundamental problems in knowledge distillation, and how to apply knowledge distillation to more challenging tasks. In addition, I will discuss the challenges and opportunities of AI model compression in the era of large models.
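
For context, below is a minimal sketch of the standard soft-label distillation objective in the style of Hinton et al. (2015), not necessarily the speaker's own method: the student's temperature-softened predictions are matched to the teacher's via KL divergence, combined with the usual cross-entropy on ground-truth labels. The temperature T and mixing weight alpha are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        """Weighted sum of hard-label cross-entropy and soft-label KL distillation.
        T and alpha are illustrative hyperparameters, not values from the talk."""
        # Hard-label loss on the ground-truth classes.
        ce = F.cross_entropy(student_logits, labels)
        # Soft-label loss: match the student's softened distribution to the teacher's.
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)  # rescale so gradients stay comparable across temperatures
        return alpha * ce + (1 - alpha) * kd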

Event Format
Speakers / Performers:
Linfeng Zhang
Tsinghua University

Linfeng Zhang is a Ph.D. candidate at Tsinghua University. Before that, he obtained his bachelor's degree from Northeastern University. Linfeng was awarded a Microsoft Fellowship in 2020 to support his work on knowledge distillation and AI model compression. He has published 13 papers as first author in top-tier conferences and journals. In addition, his research results on AI model compression have been adopted by companies such as Huawei, Kwai, DIDI, Polar Bear Technology, and Intel.

Language
English
Recommended For
Faculty and staff
PG students
Organizer
Internet of Things Thrust, HKUST(GZ)
Artificial Intelligence Thrust, HKUST(GZ)
Data Science and Analytics Thrust, HKUST(GZ)