Aladine Chetouani, Assistant Professor, PRISME, University of Orleans, France
Leida Li, Professor, School of Artificial Intelligence, Xidian University, Xi'an 710071, China
Patrick Le Callet, Professor, Image Perception & Interaction (IPI), LS2N, Polytech Nantes/Université de Nantes, France
Aim & topics:
Over the last decades, visual quality assessment has become a crucial step in many applications such as compression, restoration, printing, biometrics and recognition. Indeed, images, videos and 3D meshes are generally affected by various types of degradation that impact the performance of subsequent processing applied to the content. It is therefore necessary to have an efficient tool that accurately predicts the quality of the visual content.
Three main approaches have been proposed in the literature: the Full-Reference approach, where the original image is exploited; the No-Reference approach, where only the degraded image is used; and the Reduced-Reference approach, where some characteristics of the original image are used. Some of these methods are mathematical, while others are perceptual, structural or learning-based. Recently, researchers have increasingly turned to deep learning (CNNs, auto-encoders, etc.), which has demonstrated its effectiveness in this domain.
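As a concrete illustration of the Full-Reference family, the sketch below computes PSNR, one of the classic mathematical full-reference metrics, by comparing a degraded signal against its pristine original. The pixel values are hypothetical toy data; real metrics operate on 2-D image arrays rather than flat lists.

```python
import math

def psnr(reference, degraded, max_value=255.0):
    """Full-reference PSNR: needs the original ("reference") image,
    unlike No-Reference metrics that see only the degraded one."""
    if len(reference) != len(degraded):
        raise ValueError("images must have the same size")
    # Mean squared error between the two pixel sequences
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: perfect quality
    return 10.0 * math.log10(max_value ** 2 / mse)

# Toy flattened "images" (illustrative values, not from a real dataset)
original = [52, 55, 61, 59, 79, 61, 76, 61]
noisy    = [54, 53, 62, 60, 77, 63, 74, 62]
print(round(psnr(original, noisy), 2))  # quality score in dB
```

Higher PSNR means the degraded image is closer to the original; perceptual and learning-based metrics aim to correlate better with human judgments than such purely mathematical measures.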
The objective of this special session is to provide an overview of this domain and some of its applications and tools.
The topics addressed in this special session are fully related to the IPTA 2019 conference and include the following themes (non-exhaustive list):
- Image and video quality assessment
- 3D meshes quality assessment
- Deep-based methods
- Quality for stereoscopic images and sequences
- Objective and subjective metrics
- Human Visual System inspired methods
- Perceptual metrics
- Saliency and visual attention
- Quality prediction methods
- Regression methods and quality
- Machine learning and quality
- Benchmarks for quality assessment
- Color and quality
For further information, please contact:
Call for papers
A PDF version of the call for papers can be found here.