Introduction

With the rapid growth of video surveillance applications and services, the volume of surveillance video has become extremely "big", making human monitoring tedious and difficult. At the same time, new issues concerning privacy and security have arisen. There is therefore a huge demand for smart and secure surveillance techniques that can perform monitoring automatically. Firstly, the sheer amount of surveillance video in storage raises the importance of video analysis tasks such as event detection, action recognition, video summarization, person re-identification, and anomaly detection. Secondly, with the rich semantics and multimodality of data extracted from surveillance videos, it is now essential for the community to tackle new challenges, such as efficient multimodal data processing and compression. Thirdly, with the rapid shift from static, standalone processing to dynamic collaborative computing, it is now vital to consider distributed, multi-camera video processing on edge- and cloud-based cameras, while at the same time offering privacy-preserving safeguards for the data. This workshop aims to challenge the multimedia community to extend existing approaches or explore brave new ideas.

This is the 5th edition of our workshop. The first four editions were organized in conjunction with ICME 2019 (Shanghai, China), ICME 2020 (London, UK), ICME 2021 (Shenzhen, China), and ICME 2022 (Taipei, Taiwan ROC).


Scope & Topics

This workshop is intended to provide a forum for researchers and engineers to present their latest innovations and share their experiences on all aspects of the design and implementation of new surveillance video analysis and processing techniques. Topics of interest include, but are not limited to:

  • Action/activity recognition, and event detection in surveillance videos
  • Object detection and tracking in surveillance videos
  • Multi-camera surveillance networks and applications
  • Surveillance scene parsing, segmentation, and analysis
  • Crowd parsing, estimation and analysis
  • Person, group, or object re-identification
  • Summarization and synopsis of surveillance videos
  • Big Data processing in large-scale surveillance systems
  • Distributed, edge and fog computing for surveillance systems
  • Data compression in surveillance systems
  • Low-resolution video analysis and processing: Recognition and object detection, restoration, denoising, enhancement, super-resolution
  • Surveillance from multiple modalities, not limited to: UAVs, satellite imagery, dash cams, wearables.

Call for Papers

Download the Call for Papers here

News
Important Dates
    Paper Submission Due Date: April 6, 2023 (extended from March 23, 2023)
    Notification of Acceptance/Rejection: April 23, 2023
    Camera-Ready Due Date (firm deadline): May 1, 2023
    Workshop Date and Venue: July 10, 2023 (TBC)
Format Requirements & Templates
    Length: Papers must be no longer than 6 pages, including all text, figures, and references.
    Format: Workshop papers have the same format as regular papers. See the templates below. Submitted papers do not need to be double-blind.
Submission Details
    Paper Submission Site: https://cmt3.research.microsoft.com/ICMEW2023
    (Please make sure your paper is submitted to the correct track)
    Submissions may be accompanied by up to 20 MB of supplemental material following the same guidelines as regular and special session papers.
    Review: Reviews will be handled directly by the Organizers and the Technical Program Committee (TPC).
    Presentation guarantee: As with accepted Regular and Special Session papers, accepted Workshop papers must be registered by the author deadline and presented at the conference; otherwise, they will not be included in IEEE Xplore. A workshop paper is covered only by a full-conference registration.

Schedule



Organizers

John See
 J.See@hw.ac.uk
Minxian Li
 minxianli@njust.edu.cn
Saimunur Rahman
 saimun.rahman@data61.csiro.com

Contact

Please feel free to send any questions or comments to:
j DOT see AT hw.ac.uk