Inverse Rendering Workshop (ICCV 2015)

Abstract and Call for Papers

When a camera measures the intensity of a pixel, that intensity depends on a variety of factors: the camera acquisition parameters, the 3D scene geometry and surface normals, the surface material, and the illumination environment. This process is mathematically described by the rendering equation, which is used in computer graphics to generate naturalistic images and which requires, for a fixed time instant, that geometry, material, and illumination conditions are given. In computer vision, we want to “inverse render” a scene: given one or multiple images of a dynamic scene, we aim to recover depth, material, illumination conditions, and 3D motion. In the past, many approaches have tackled this problem in isolation (e.g., depth estimation with known material and known illumination conditions) or have made simplifying assumptions about the world (e.g., estimating depth under a Lambertian reflectance model). While “Inverse Rendering” is a challenging task that, in its isolated forms, has been addressed since the early days of computer vision, starting with the work of David Marr, we believe the time is ripe to revisit the task as a whole. On the one hand, impressive results have already been achieved when enough sensor input is available. On the other hand, humans can solve this task to a remarkable extent from little visual information. Given the recent success of learning-based techniques, also in the field of physics-based computer vision, we strongly believe that the challenge of “Inverse Rendering from a few images” can begin to be addressed.
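For reference, the rendering equation mentioned above is commonly written in the following standard form (the symbol names here follow common convention and are not taken from this announcement):

```latex
% Rendering equation (Kajiya, 1986), hemispherical form:
% outgoing radiance = emitted radiance + reflected incoming radiance
L_o(\mathbf{x}, \omega_o) \;=\; L_e(\mathbf{x}, \omega_o)
  \;+\; \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
        L_i(\mathbf{x}, \omega_i)\,
        (\omega_i \cdot \mathbf{n})\; \mathrm{d}\omega_i
```

Here \(L_o\) is the radiance leaving surface point \(\mathbf{x}\) in direction \(\omega_o\), \(L_e\) is emitted radiance, \(f_r\) is the material's BRDF, \(L_i\) is incoming radiance over the hemisphere \(\Omega\), and \(\mathbf{n}\) is the surface normal. Forward rendering evaluates this integral given geometry, material, and illumination; inverse rendering seeks to recover those quantities from observed images.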

To summarize, we define the scope of this “Inverse Rendering” workshop as follows: given one or multiple images of a dynamic scene, we want to recover all (visible) physical properties of the scene, such as dense motion, depth, material, and illumination conditions. The workshop offers a meeting and discussion platform for researchers with diverse backgrounds, such as computer graphics, computer vision, optimization, and machine learning. We hope this will push the state of the art in “Inverse Rendering” with respect to models, methods, and data. Paper submissions to this workshop are solicited in the following areas:

  • Joint models for estimating scene properties
  • Motion and shape estimation under challenging material and/or lighting conditions
  • Illumination estimation
  • Shape-from-X in real world settings
  • Transparent and reflective scene recovery
  • Material capture
  • Ground truth data and reference data for Inverse Rendering

Invited Speakers

Vladlen Koltun (Intel)
Reinhard Klein (University of Bonn)
Manmohan Chandraker (NEC)
Sebastian Nowozin (MSR Cambridge)
Kyros Kutulakos (University of Toronto)
Michael Black (MPI for Intelligent Systems)

Important Dates

Workshop: December 12th, 2015