
Program


In consideration of the time difference between Japan and France, the conference will run for four hours a day,
that is, 08:00-12:00 CEST and 15:00-19:00 JST.

The first hour of each day will be allocated to the plenary talk, which will be real-time and interactive.

QCAV2021 Conference Program (PDF: Final version (April 25th))

Day 1 : Wednesday MAY 12th
8:00-8:20(CEST), 15:00-15:20(JST) QCAV2021 Opening Session
8:20-9:20(CEST), 15:20-16:20(JST) Plenary Talk 1 [PT-1]
Photometric 3D-reconstruction
Dr. Yvain Quéau
(The French National Centre for Scientific Research (CNRS), France)

Chair: Prof. Olivier Aubreton(Université de Bourgogne, France)
9:20-9:30(CEST), 16:20-16:30(JST) Break
9:30-11:30(CEST), 16:30-18:30(JST) General Talk Session 1 [GT1-1 ― GT1-19]

Day 2 : Thursday MAY 13th
8:00-9:00(CEST), 15:00-16:00(JST) Plenary Talk 2 [PT-2]
Learning Neural Character Controllers from Videos and Motion Capture Data
Prof. Taku Komura(University of Hong Kong, Hong Kong)

Chair: Prof. Nobuyuki Umezu(Ibaraki University, Japan)
9:00-11:00(CEST), 16:00-18:00(JST) General Talk Session 2 [GT2-1 ― GT2-17]

Day 3 : Friday MAY 14th
8:00-9:00(CEST), 15:00-16:00(JST) Plenary Talk 3 [PT-3]
Applications of Multispectral Filter Array Imaging
Prof. Ludovic Macaire(Université de Lille, France)

Chair: Dr. Yann Gavet(Ecole Nationale Supérieure des Mines de Saint-Etienne, France)
9:00-11:00(CEST), 16:00-18:00(JST) General Talk Session 3 [GT3-1 ― GT3-18]
11:30-11:50(CEST), 18:30-18:50(JST) QCAV2021 Award & Closing Session



Presenter Information


The presentation format for general talks will be a hybrid of oral and poster. First, camera-ready manuscripts and pre-recorded videos of about 15 minutes will be made available on the web one week before the conference. The accepted general talks will then be spread over the three days of the conference, with about 20 presentations per day.

A Zoom conference room will be allocated to each speaker. During the two hours following the plenary talk, each speaker will give a brief explanation and answer questions while displaying a poster-like document on the screen. If the question-and-answer session is lively, an extension of up to one hour will be allowed.

For more information, please see the "Author information" page.


Plenary Speakers




Plenary Talk 1:
Photometric 3D-reconstruction
Dr. Yvain Quéau
The French National Centre for Scientific Research (CNRS), France

Abstract:
Photometric 3D-reconstruction techniques aim at inferring the geometry of a scene from one or several images by inverting a physical model describing the image formation. This talk will present an introductory overview of the main photometric 3D-reconstruction techniques, namely shape-from-shading (single image) and photometric stereo (multiple images acquired under varying illumination). These techniques are among the top-performing computer vision approaches for estimating fine-scale geometric details as well as photometric surface properties (e.g., reflectance). The talk will cover theoretical aspects of the problem (well-posedness), numerical issues (solving with robust variational methods), and applications to cultural heritage and quality control.
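
To make the inversion idea concrete, the sketch below (Python/NumPy) implements classical Lambertian photometric stereo in its simplest least-squares form; it is not Dr. Quéau's method, and the function and variable names are illustrative only. Given several images taken under known, varying lighting directions, it recovers a per-pixel albedo-scaled normal.

import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Estimate surface normals and albedo from m images under known lighting.

    images:     array of shape (m, h, w), graylevel observations
    light_dirs: array of shape (m, 3), unit lighting directions
    Assumes a Lambertian model: I_i = albedo * (n . l_i) at each pixel.
    """
    m, h, w = images.shape
    I = images.reshape(m, -1)                  # (m, h*w) stacked intensities
    L = np.asarray(light_dirs, dtype=float)    # (m, 3) lighting matrix

    # Solve L @ b = I in the least-squares sense, where b = albedo * n.
    b, *_ = np.linalg.lstsq(L, I, rcond=None)  # (3, h*w)

    albedo = np.linalg.norm(b, axis=0)
    normals = b / np.maximum(albedo, 1e-8)     # unit normals per pixel
    return normals.reshape(3, h, w), albedo.reshape(h, w)

Recovering the actual surface then requires integrating the estimated normal field, which is where the robust variational methods mentioned in the abstract come into play.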

Biography:
Yvain Quéau obtained his PhD in computer science from the University of Toulouse (France) in 2015. Since 2018 he has been a CNRS researcher ("chargé de recherches") at the GREYC laboratory (University of Caen, France). His research focuses on inverse problems arising in imaging and computer vision, and on their numerical solution using variational methods. He is particularly interested in inverse problems involving light-matter interactions, e.g. 3D-reconstruction by photometric stereo, shape-from-shading, reflectance estimation, tomography and polarimetry.

Plenary Talk 2:
Learning Neural Character Controllers from Videos and Motion Capture Data
Prof. Taku Komura
University of Hong Kong, Hong Kong

Abstract:
Computer games and Virtual Reality (VR) are not only entertainment but are becoming a novel means of communication, especially in a post-Covid world where reduced physical contact may remain the new normal. Character motion is one of the key factors in increasing the users' immersion in the virtual world. Using neural networks for character controllers can significantly increase the scalability of the system: the controller can be trained with a large amount of motion capture data while the run-time memory stays low. The main challenge is designing an architecture that can produce production-quality movements while handling a wide variety of motion classes and styles. In this talk, I will cover our recent development of neural-network-based character controllers that can be trained from both videos and motion capture data. By training our neural controller from both sources, the system can learn the motion styles of a wider population, increasing its ability to produce various motion types and styles for real-time animation. Using our system, a wide variety of movements such as walking, running, side stepping and backward walking can be synthesized for a wide range of characters in real time. At the end of the talk, I will discuss open problems and future directions in character animation.
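
As a purely illustrative sketch of what a neural character controller can look like (this is not Prof. Komura's architecture; the pose and command dimensions below are placeholders), the following Python/PyTorch snippet maps the current pose and a user command to the next pose, and would be trained by regression on consecutive motion-capture frames.

import torch
import torch.nn as nn

class PoseController(nn.Module):
    """Toy autoregressive controller: (current pose, user command) -> next pose.

    Real controllers use a richer state (joint rotations, phase, trajectory)
    and far more capacity; this only illustrates the input/output interface.
    """
    def __init__(self, pose_dim=63, cmd_dim=3, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + cmd_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, pose, command):
        return self.net(torch.cat([pose, command], dim=-1))

# Supervised training on consecutive motion-capture frames (sketch):
# for pose_t, cmd_t, pose_t1 in dataloader:
#     loss = nn.functional.mse_loss(model(pose_t, cmd_t), pose_t1)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()

Once trained, only the network weights need to be kept at run time, which reflects the low run-time memory footprint mentioned in the abstract.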

Biography:
Taku Komura joined the University of Hong Kong in 2020. Before joining HKU, he worked at the University of Edinburgh (2006-2020), City University of Hong Kong (2002-2006) and RIKEN (2000-2002). He received his BSc, MSc and PhD in Information Science from the University of Tokyo. His research has focused on data-driven character animation, physically based character animation, crowd simulation, 3D modelling, cloth animation, anatomy-based modelling and robotics. Recently, his main research interest has been the application of machine learning techniques to animation synthesis. He received a Royal Society Industry Fellowship (2014) and a Google AR/VR Research Award (2017).

Plenary Talk 3:
Applications of Multispectral Filter Array Imaging
Prof. Ludovic Macaire
Université de Lille, France

Abstract:
Multispectral cameras sample the visible and/or the infrared spectrum according to narrow spectral bands. Available technologies include snapshot multispectral cameras equipped with filter arrays that acquire raw images at video rate. Raw images require a demosaicing procedure to estimate a multispectral image with full spatio-spectral definition. We review multispectral demosaicing methods and highlight the influence of illumination on demosaicing performance. Increasing the number of spectral bands beyond the three of color imaging has made multispectral imaging an active research topic with substantial potential for applications such as texture and material classification or precision farming. To perform texture analysis, local binary pattern operators extract texture descriptors from texture images. We extend these operators to multispectral texture images at the expense of increased memory and computation requirements. In order to assess classification on multispectral images, we have built, together with the Norwegian Colour and Visual Computing Laboratory, HyTexiLa, the first significant multispectral database of close-range textures in the visible and near-infrared spectral domains. For precision farming, we use a multishot camera to acquire outdoor multispectral radiance images of vegetation. From this radiance information, we study how to estimate the spectral reflectance, which serves as an illumination-invariant spectral signature of each vegetation species.
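
As a rough illustration of how local binary pattern (LBP) operators extend to multispectral images, the sketch below (Python/NumPy) computes a classical 8-neighbour LBP code independently in each spectral band and concatenates the per-band histograms. This marginal variant is an assumption for illustration only, not the operators developed by the authors, which also encode inter-band relations; it does show why memory and computation grow with the number of bands.

import numpy as np

def marginal_lbp_histogram(ms_image, bins=256):
    """Concatenated per-band LBP histograms for an image of shape (h, w, k)."""
    h, w, k = ms_image.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # 8-neighbourhood
    hists = []
    for band in range(k):
        center = ms_image[1:-1, 1:-1, band]
        code = np.zeros(center.shape, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            neigh = ms_image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx, band]
            code |= (neigh >= center).astype(np.uint8) << bit
        hist, _ = np.histogram(code, bins=bins, range=(0, bins))
        hists.append(hist)
    return np.concatenate(hists)   # descriptor of length k * bins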

Biography:
Ludovic Macaire received his PhD in computer science and control from the University of Lille 1 in 1992. He is currently a full professor in the CRIStAL Laboratory at the University of Lille. His research interests include color representation and multispectral image analysis applied to object recognition.