Scope and topics of the workshop
Nowadays people spend a significant amount of time consuming various types of streaming video, such as video on demand (VoD) for movies, dramas, or variety shows through Netflix, user-generated content (UGC) through Facebook or TikTok, or live streaming for social, gaming, or shopping scenarios, benefiting from the popularity of high-speed networks and intelligent terminals. Moreover, along with the evolution of hardware and the growing popularity of concepts related to the metaverse, people have far more opportunities and interest in experiencing immersive and interactive multimedia content. Users therefore place increasing demands on the Quality of Experience (QoE) of such visual multimedia, which reflects how well a service or application fulfills the user's enjoyment or expectations. Enhancing the QoE of end users has become the ultimate goal for multimedia service providers. This workshop focuses on the QoE assessment of visual multimedia applications, both subjectively and objectively.
The topics include:
- QoE assessment of different visual multimedia applications, including VoD for movies, dramas, and variety shows; UGC on social networks; live streaming for gaming, shopping, or social interaction; etc.
- QoE assessment for different video formats in multimedia services, including 2D, stereoscopic 3D, High Dynamic Range (HDR), Augmented Reality (AR), Virtual Reality (VR), 360° video, Free-Viewpoint Video (FVV), Computer-Generated Imagery (CGI), etc.
- Key performance indicator (KPI) analysis for QoE.
Organizers
Jing Li
Alibaba Group, China
Xinbo Gao
Xidian University, China
Patrick Le Callet
University of Nantes, France
Zhi Li
Netflix Inc., U.S.
Wen Lu
Xidian University, China
Jiachen Yang
Tianjin University, China
Junle Wang
Tencent, China
Program Committee
Leida Li
Xidian University, China
Hantao Liu
Cardiff University, U.K.
Giuseppe Valenzise
CNRS - CentraleSupelec, France
Mai Xu
Beihang University, China
Lu Zhang
INSA de Rennes, France
Call for Papers
Nowadays people spend a significant amount of time consuming various types of streaming video, benefiting from the popularity of high-speed networks and intelligent terminals. Moreover, with the growing popularity of the metaverse, advances in hardware technology, and the diversification of content types, users place increasing demands on the Quality of Experience (QoE) of visual multimedia.
The workshop QoEVMA2022 focuses on the QoE assessment of visual multimedia applications, including key performance indicator (KPI) analysis for different video formats.
The topics of interest of this workshop include, but are not limited to:
  • QoE for traditional image/video and stereoscopic image/video: new research on the evaluation of traditional visual multimedia.
  • QoE for emerging immersive multimedia and QoE-driven image/video processing: quality in immersive environments (virtual/augmented/mixed realities, 360° videos, free-viewpoint videos).
  • QoE methods and QoE-driven processing for point cloud, light field, and volumetric content.
  • QoE methods for other application scenarios: any application scenario in which QoE matters, such as screen-content image/video.
  • QoE methods for visual multimedia based on machine learning: research on QoE methods for any kind of visual information based on new technologies; deep-learning-based approaches are encouraged.
  • QoE-driven mobile visual multimedia processing: QoE applications in mobile scenarios and new research on QoE-driven processing of mobile visual multimedia.
Submission

The submission follows exactly the same policies as ACM Multimedia regular papers. Please refer to the submission site (https://2022.acmmm.org/call-for-papers/) for the submission policies.

Submitted papers (.pdf format) must use the ACM Article Template: https://www.acm.org/publications/proceedings-template. Please remember to add Concepts and Keywords. Please use the template in the traditional double-column format to prepare your submissions. For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template.

Submissions can vary in length from 4 to 8 pages, plus additional pages for references; i.e., the reference page(s) do not count toward the 4-to-8-page limit. There is no distinction between long and short papers, but authors may decide on the appropriate length of the paper themselves. All papers will undergo the same review process and review period.

The submission system is now open: https://openreview.net/group?id=acmmm.org/ACMMM/2022/Workshop/QoEVMA

Please note that paper submissions must conform to the “double-blind” review policy: the authors should not know the names of the reviewers of their papers, and the reviewers should not know the names of the authors. Please prepare your paper in a way that preserves the anonymity of the authors.

Important Dates
Workshop Paper Submission: July 20, 2022
Paper Acceptance Notification: August 3, 2022
Camera-Ready Version: August 14, 2022
Workshop Date: October 14, 2022
Have questions?

Please feel free to send an email to Dr. Jing Li (jing.li.univ@gmail.com, lj225205@alibaba-inc.com) if you have any questions related to the workshop.

Program

Due to the ongoing worldwide COVID-19 pandemic, QoEVMA'22 will be held as a hybrid (both online and on-site) workshop on 14 October 2022.

Time slot  Session
9:15 am - 10:00 am  Keynote session: Estimating the Quality of Experience of Immersive Contents, Mylene Farias, University of Brasilia (UnB)
10:00 am - 10:45 am  Session 1: Quality Assessment on 2D Images (15 min per presentation)
- Adversarial Attacks against Blind Image Quality Assessment Models, Jari Korhonen, Junyong You
- Simulating Visual Mechanisms by Sequential Spatial-Channel Attention for Image Quality Assessment, Junyong You, Jari Korhonen
- From Just Noticeable Differences to Image Quality, Ali Ak, Andreas Pastor, Patrick Le Callet
10:45 am - 11:00 am  Coffee Break
11:00 am - 12:15 pm  Session 2: QoE on Immersive Multimedia (15 min per presentation)
- Impact of Content on Subjective Quality of Experience Assessment for 3D Video Services, Dawid Juszka, Zdzisław Papir
- No-Reference Quality Assessment of Stereoscopic Video Based on Deep Frequency Perception, Jiabao Wen, Jiachen Yang, Yanshuang Zhou
- On Objective and Subjective Quality of 6DoF Synthesized Live Immersive Videos, Yuan-Chun Sun, Sheng-Ming Tang, Ching-Ting Wang, Cheng-Hsin Hsu
- No-reference Point Clouds Quality Assessment using Transformer and Visual Saliency, Salima Bourbia, Ayoub Karine, Aladine Chetouani, Mohammed El Hassouni, Maher Jridi
- Point cloud quality assessment using cross-correlation of deep features, Marouane Tliba, Aladine Chetouani, Giuseppe Valenzise, Frederic Dufaux
Speakers


Mylene Farias, Associate Professor, University of Brasilia (UnB)

Title: Estimating the Quality of Experience of Immersive Contents

Mylene Farias received her B.Sc. degree in electrical engineering from the Federal University of Pernambuco (UFPE), Brazil, in 1995 and her M.Sc. degree in electrical engineering from the State University of Campinas (UNICAMP), Brazil, in 1998. She received her Ph.D. in electrical and computer engineering from the University of California, Santa Barbara (UCSB), USA, in 2004 for work on no-reference video quality metrics. Dr. Farias has worked as a research engineer at CPqD (Brazil) on video quality assessment and the validation of video quality metrics. She has also worked as an intern for Philips Research Laboratories (The Netherlands) on video quality assessment of sharpness algorithms and for Intel Corporation (Phoenix, USA) developing no-reference video quality metrics. Currently, she is an Associate Professor in the Department of Electrical Engineering at the University of Brasilia (UnB), where she is a member of the Graduate Program in Informatics and of the Graduate Program on Electronic Systems and Automation Engineering (PGEA). Dr. Farias is a researcher in the Digital Signal Processing Laboratory, and her current interests include video quality metrics, video processing, multimedia signal processing, immersive media, and visual attention. She has formally advised 34 undergraduate students and 12 graduate students (8 Ph.D. and 9 M.Sc.). To date, she has published 43 journal papers and 88 peer-reviewed conference papers. She is currently an Associate Editor for IEEE Signal Processing Letters and the SPIE Journal of Electronic Imaging, besides being an Area Editor for Elsevier Signal Processing: Image Communication. Dr. Farias is a member of IEEE, the IEEE Signal Processing Society, ACM, and SPIE. She has served on several program committees, served as Technical Program Co-Chair for ACM NOSSDAV 2021 and QoMEX 2020, and is a co-chair of ACM MMSys 2022. She was recently (2020) elected as a member of the MMSP Technical Committee of the IEEE Signal Processing Society.

Abstract: Recent technological advancements have driven the production of plenoptic devices that capture and display visual content not only as texture information (as in 2D images) but also as 3D texture and geometry information. These devices represent the visual information using an approximation of the plenoptic illumination function, which can describe visible objects from any point in 3D space. Depending on the capturing device, this approximation can correspond to holograms, light fields, or point-cloud imaging formats. Naturally, the success of immersive applications depends on the acceptability of these formats by the final user, which ultimately depends on the quality of experience. Several subjective experiments have been performed with the goal of understanding how humans perceive immersive media in 6 Degrees-of-Freedom (6DoF) environments and what the impacts of different rendering and compression techniques are on the perceived visual quality. In this context, an open area of research is the design of objective methods that estimate the quality of this type of content. In this talk, I describe a set of objective methods designed to estimate the quality of immersive visual content, an important aspect of the overall user quality of experience. The methods use different techniques, from texture operators to CNNs, to estimate quality while also taking into consideration the specificities of the different formats. Finally, I will discuss some of the exciting research challenges in the area of realistic immersive multimedia applications.
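As standard background for this abstract (and not a description of the specific methods presented in the talk), the plenoptic function referred to above is usually written in its full 7D form, with the two-plane 4D light-field parameterization as a widely used practical approximation (in LaTeX notation):

    % 7D plenoptic function: radiance seen from position (x, y, z), in direction (\theta, \phi), at wavelength \lambda and time t
    P(x, y, z, \theta, \phi, \lambda, t)
    % 4D light field: radiance along a ray parameterized by its intersections (u, v) and (s, t) with two parallel planes
    L(u, v, s, t)

Holograms, light fields, and point clouds can all be viewed as different discretized approximations of this function.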