Lecture: Deep Learning for Video Summarization
Speaker: Associate Prof. Yang Wang, University of Manitoba, Canada
With the large amount of video available online, video summarization has become an important topic in computer vision. Given a long input video, the goal of video summarization is to produce a shorter video that contains the main content of the original. In this talk, I will present several of our recent works on using deep learning for video summarization. First, I will present our work on a fully convolutional sequence model, in which we formulate video summarization as a sequence labeling problem. Second, I will present our work on learning video summarization models from unpaired data, motivated by the fact that collecting supervised data for video summarization is expensive. To address this limitation, we propose a novel formulation that learns to generate optimal video summaries using a set of raw videos (V) and a set of summary videos (S), where there exists no correspondence between V and S. We argue that this type of data is much easier to collect. Finally, I will present our work on end-to-end shot detection and video summarization.
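To make the sequence-labeling formulation concrete, the sketch below scores every frame of a video with a small fully convolutional stack over per-frame features and keeps the top-scoring fraction as the summary. This is an illustrative NumPy toy under assumed details, not the speaker's actual architecture: the `conv1d` and `summarize` helpers, the layer sizes, and the random stand-in features are all assumptions for demonstration.

```python
import numpy as np

def conv1d(x, w, b):
    """Temporal 1-D convolution with 'same' padding.
    x: (T, C_in) frame features; w: (k, C_in, C_out) kernel; b: (C_out,) bias."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))       # zero-pad along time only
    out = np.zeros((x.shape[0], w.shape[2]))
    for t in range(x.shape[0]):
        window = xp[t:t + k]                    # (k, C_in) temporal window
        out[t] = np.tensordot(window, w, axes=([0, 1], [0, 1])) + b
    return out

def summarize(features, params, keep_ratio=0.15):
    """Label each frame with a keyframe score (sequence labeling),
    then keep the top-scoring fraction of frames as the summary."""
    h = np.maximum(conv1d(features, params["w1"], params["b1"]), 0)  # ReLU
    scores = conv1d(h, params["w2"], params["b2"])[:, 0]             # per-frame logit
    n_keep = max(1, int(len(scores) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-n_keep:])   # selected frame indices, in order
    return keep, scores

rng = np.random.default_rng(0)
T, D, H = 120, 64, 32                      # frames, feature dim, hidden channels
params = {
    "w1": rng.normal(scale=0.1, size=(5, D, H)), "b1": np.zeros(H),
    "w2": rng.normal(scale=0.1, size=(5, H, 1)), "b2": np.zeros(1),
}
feats = rng.normal(size=(T, D))            # stand-in for CNN frame features
keep, scores = summarize(feats, params)
```

In practice such a model would be trained end-to-end with a per-frame labeling loss, and the frame features would come from a pretrained image CNN rather than random noise; the point of the sketch is only that the whole video is processed in one convolutional pass, with one label per frame.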
Yang Wang is currently an associate professor in the Department of Computer Science, University of Manitoba, Canada. He received his BSc from the Harbin Institute of Technology (China), his MSc from the University of Alberta (Canada), and his PhD from Simon Fraser University (Canada). Before joining UManitoba, he worked as an NSERC postdoctoral fellow at the University of Illinois at Urbana-Champaign (USA). His research interests lie in computer vision and machine learning, including object recognition, human action recognition, video understanding, and scene understanding. He received the 2017 Falconer Emerging Researcher Rh Award in applied science and held the inaugural Faculty of Science Research Chair in Fundamental Science at the University of Manitoba.