With the rapid growth of video surveillance applications and services, the volume of surveillance video has become extremely "big", which makes human monitoring tedious and difficult. At the same time, new issues concerning privacy and security have arisen. There is therefore a huge demand for smart and secure surveillance techniques that can perform monitoring automatically. Firstly, the sheer abundance of stored video surveillance data raises the importance of video analysis tasks such as event detection, action recognition, video summarization, person re-identification and anomaly detection. Secondly, given the rich semantics and multimodality of the data extracted from surveillance videos, it is now essential for the community to tackle new challenges, such as efficient multimodal data processing and compression. Thirdly, with the rapid shift from static, single-camera processing to dynamic collaborative computing, it is now vital to consider distributed and multi-camera video processing on edge- and cloud-based cameras, while at the same time offering privacy-preserving mechanisms to safeguard the data. This workshop aims to challenge the multimedia community to extend existing approaches or explore brave new ideas.
This is the 6th edition of our workshop. The first five were organized in conjunction with ICME 2019 (Shanghai, China), ICME 2020 (London, UK), ICME 2021 (Shenzhen, China), ICME 2022 (Taipei, Taiwan ROC) and ICME 2023 (Brisbane, Australia).
This workshop is intended to provide a forum for researchers and engineers to present their latest innovations and share their experiences on all aspects of the design and implementation of new surveillance video analysis and processing techniques. Topics of interest include, but are not limited to:
News
The 6th edition of this workshop has been accepted to be held at ICME 2025 in Nantes, France! Stay tuned for more information.
Important Dates
Format Requirements
9:30 a.m.-10:30 a.m. | Invited Keynote: Graph-based Neural Network Models for Big Surveillance: Moving Object Detection in Challenging Environments. Anastasia Zakharova (Laboratoire MIA, La Rochelle University, France)
10:30 a.m.-10:50 a.m. | Short Break
10:50 a.m.-12:30 p.m. (max. 20 mins per talk) |
A Multi-element Anomaly Detection Approach for Complex Industrial Images. Haijing Jia, Hong Yi, Hengzhi Zhang, Yupeng Zhang, Liyan Liu (RICOH Software Research Center (Beijing))
A Robust Deep Retinex Decomposition Network Leveraging a Novel Synthetic Dataset for Low-Light Image Enhancement. Kaicheng Xu, An Wei, Congxuan Zhang, Zhen Chen, Peng Liu, Ke Lu (Nanchang Hangkong University, Beihang University, University of Chinese Academy of Sciences)
MCA: 2D-3D Retrieval with Noisy Labels via Multi-level Adaptive Correction and Alignment. Gui Zou, Chaofan Gan, Chern Hong Lim, Supavadee Aramvith, Weiyao Lin (Shanghai Jiao Tong University, Monash University, Chulalongkorn University)
Efficient Face Recognition via Representative Synthetic Data. Ching-Hsun Chang, Ming-Hao Lee, Yu-Hsuan Chiu, Yi-Min Liao, Sheng-Luen Chung, Gee-Sern Hsu (National Taiwan University of Science and Technology)
A Methodology to Evaluate Strategies Predicting Rankings on Unseen Domains. Sébastien Piérard, Adrien Deliège, Anaïs Halin, Marc Van Droogenbroeck (University of Liege)
Abstract: This talk is devoted to graph-based models in real-world applications, especially Moving Object Detection (MOD), which remains a challenging problem in computer vision. We start by presenting the graph formalism as well as some recent graph-based algorithms for surface surveillance. Despite the promising results they provide, most of them represent a transductive approach that assumes access to both the training and testing data for evaluation. Second, aiming to remove this limitation and to address real-world applications, we introduce the Graph Inductive Moving Object Segmentation method (GraphIMOS), based on a Graph Neural Network (GNN) architecture. Third, we present a graph-based method for underwater scenes. After analyzing the strategies of each method and showing their limitations, a comparative evaluation on the large-scale CDnet2014 dataset as well as on the Fish4knowledge dataset is provided, including an experimental comparison with state-of-the-art methods. Finally, we conclude with some potential future research directions in this field.
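To make the transductive vs. inductive distinction above concrete, the following is a minimal sketch (not the speaker's GraphIMOS implementation) of an inductive GNN node classifier, assuming PyTorch Geometric and toy data: the model is trained on one graph and then applied to a graph built from unseen video frames, which a transductive method could not do.

```python
# Hedged sketch: inductive GNN that labels graph nodes (e.g., frame regions)
# as moving vs. static. Toy random data stands in for real video graphs.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class InductiveMOSNet(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=64, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)  # per-node logits: moving / static

# Hypothetical graphs standing in for two different videos.
train_graph = Data(x=torch.randn(100, 16),
                   edge_index=torch.randint(0, 100, (2, 400)),
                   y=torch.randint(0, 2, (100,)))
test_graph = Data(x=torch.randn(80, 16),
                  edge_index=torch.randint(0, 80, (2, 300)))

model = InductiveMOSNet(in_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):  # short training loop on the training graph only
    opt.zero_grad()
    logits = model(train_graph.x, train_graph.edge_index)
    loss = F.cross_entropy(logits, train_graph.y)
    loss.backward()
    opt.step()

# Inference on a graph never seen during training: the inductive setting.
model.eval()
with torch.no_grad():
    pred = model(test_graph.x, test_graph.edge_index).argmax(dim=1)
```

Because only the weights of the message-passing layers are learned, the same trained model can be applied to graphs of any size constructed from new scenes, whereas a transductive formulation would require the test nodes to be present in the graph at training time.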
Please feel free to send any question or comments to:
j DOT see AT hw.ac.uk