Neural Voxel Renderer: Learning an Accurate and Controllable Rendering Tool

Konstantinos Rematas, Vittorio Ferrari
CVPR 2020 (to appear)

Abstract

We present a neural rendering framework that maps a voxelized scene into a high-quality image. Our method realistically renders highly textured objects and interactions between scene elements, despite receiving only a coarse voxel representation as input. Moreover, our approach allows controllable rendering: geometric and appearance modifications to the input are accurately propagated to the output. The user can move, rotate, and scale an object, change its appearance and texture, or modify a light's position, and all of these edits are reflected in the final rendering. We demonstrate the effectiveness of our approach by rendering scenes with varying appearance, from a single color per object to complex, high-frequency textures. We show that our re-rendering network generates precise, detailed images that capture the appearance of the input scene. Our experiments also show that our approach achieves more accurate image synthesis than alternative methods and can handle low voxel grid resolutions. Finally, we show how our neural rendering framework can be applied to real scenes with a diverse set of objects.
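
To make the controllable-editing idea concrete, here is a minimal sketch of how a colored voxel grid could be built and edited before re-rendering. It is an illustration under stated assumptions, not the paper's implementation: the 32³ grid size, the RGB-plus-occupancy channel layout, and the `neural_voxel_renderer` interface are all hypothetical.

```python
import numpy as np

# A toy colored voxel grid of shape (D, H, W, 4) holding RGB + occupancy.
# This is a stand-in for the coarse voxelized scene the network consumes;
# the resolution and channel layout are illustrative assumptions.
D = H = W = 32
scene = np.zeros((D, H, W, 4), dtype=np.float32)

# Place a red cube "object" in the grid.
scene[8:16, 8:16, 8:16, :3] = [1.0, 0.0, 0.0]  # RGB color
scene[8:16, 8:16, 8:16, 3] = 1.0               # occupancy

def translate_object(grid: np.ndarray, offset: tuple) -> np.ndarray:
    """Shift all voxels by an integer offset along (D, H, W): one example
    of the geometric edits (move/rotate/scale) named in the abstract."""
    return np.roll(grid, shift=offset, axis=(0, 1, 2))

# Edit the scene: move the cube 6 voxels along the depth axis. The
# re-rendering network would then map the edited grid to an image, so the
# edit is faithfully reflected in the output.
edited = translate_object(scene, (6, 0, 0))

# Hypothetical renderer call (not the paper's actual API): the network would
# take the voxel grid plus camera and light parameters and return an RGB image.
# image = neural_voxel_renderer(edited, camera=cam, light_position=light)
```

Appearance edits fit the same pattern: overwriting the RGB channels of the occupied voxels changes an object's color, and passing a different `light_position` to the renderer would move the light, with the network responsible for producing consistent shading in the output image.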
