NVIDIA Research Casts New Light on Scenes With AI-Powered Rendering for Physical AI Development
DiffusionRenderer introduces a neural rendering technique that can be applied to content generation and editing for creative fields — and to synthetic data generation for autonomous vehicles and robotics development.
NVIDIA Research has developed an AI light switch for videos that can turn daytime scenes into nightscapes, transform sunny afternoons into cloudy days and soften harsh fluorescent lighting into natural illumination.
Called DiffusionRenderer, it’s a new technique for neural rendering — a process that uses AI to approximate how light behaves in the real world. It brings together two traditionally distinct processes — inverse rendering and forward rendering — in a unified neural rendering engine that outperforms state-of-the-art methods.
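The two-stage idea can be sketched in a few lines of code. This is a purely illustrative toy, not the paper's method: simple Lambertian shading stands in for the neural networks, and every function name and buffer here is hypothetical. The shape of the pipeline is what matters: an inverse-rendering step estimates per-pixel scene properties from the input video, and a forward-rendering step re-shades those properties under new lighting.

```python
import numpy as np

def inverse_render(video):
    """Hypothetical stand-in for neural inverse rendering: estimate
    per-pixel scene properties (albedo, surface normals) from frames.
    DiffusionRenderer would use a trained video diffusion model here."""
    albedo = np.clip(video * 0.8, 0.0, 1.0)       # toy albedo estimate
    normals = np.zeros_like(video)
    normals[..., 2] = 1.0                         # assume flat, camera-facing surfaces
    return {"albedo": albedo, "normals": normals}

def forward_render(gbuffers, light_dir):
    """Hypothetical stand-in for neural forward rendering: re-shade the
    estimated scene under a new light direction (simple Lambertian term)."""
    shading = np.clip(
        np.tensordot(gbuffers["normals"], light_dir, axes=([-1], [0])),
        0.0, 1.0,
    )
    return gbuffers["albedo"] * shading[..., None]

# Relight a toy 2-frame, 4x4 RGB "video" with an overhead light.
video = np.full((2, 4, 4, 3), 0.5)
gbuffers = inverse_render(video)
relit = forward_render(gbuffers, light_dir=np.array([0.0, 0.0, 1.0]))
print(relit.shape)  # (2, 4, 4, 3) -- same video, new lighting
```

Swapping `light_dir` (or, in the real system, a full environment map) changes the illumination of the output without re-capturing the scene, which is the editing capability the article describes.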
DiffusionRenderer provides a framework for video lighting control, editing and synthetic data augmentation, making it a powerful tool for creative industries and physical AI development.
Creators in advertising, film and game development could use applications based on DiffusionRenderer to add, remove and edit lighting in real-world or AI-generated videos. Physical AI developers could use it to augment synthetic datasets with a greater diversity of lighting conditions to train models for robotics and autonomous vehicles (AVs).
DiffusionRenderer is one of over 60 NVIDIA papers accepted to the Computer Vision and Pattern Recognition (CVPR) conference, taking place June 11-15 in Nashville, Tennessee.