3D-aware Generative Adversarial Networks (GANs) have made remarkable advances in generating multi-view-consistent images and 3D geometries from collections of 2D images via neural volume rendering. Despite this progress, a significant challenge remains: the substantial memory and computational cost of the dense sampling that volume rendering requires. This limitation has compelled 3D GANs to resort to patch-based training or low-resolution rendering with post-hoc super-resolution, sacrificing multi-view consistency and the quality of the resolved geometry.
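To make the cost concrete: in neural volume rendering, each pixel's color is a weighted sum over densely placed samples along its camera ray, and every sample is a network query. Below is a minimal NumPy sketch of this standard quadrature (function and variable names are illustrative, not from the paper); the per-frame cost of H x W rays times N samples per ray is what forces 3D GANs toward the workarounds above.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Standard volume-rendering quadrature along a single camera ray.

    densities: (N,)  non-negative sigma at N depth samples
    colors:    (N,3) RGB predicted at each sample
    deltas:    (N,)  spacing between adjacent samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)                      # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # transmittance up to each sample
    weights = trans * alphas                                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                  # composited pixel color

rng = np.random.default_rng(0)
N = 96
pixel = render_ray(rng.random(N), rng.random((N, 3)), np.full(N, 4.0 / N))

# The per-frame cost is H * W rays times N network queries per ray:
# at 512 x 512 with N = 96, that is roughly 25 million queries.
H, W = 512, 512
print(f"network queries per frame: {H * W * N:,}")
```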
Current 3D generative models, which combine neural fields or feature grids with neural volume rendering, suffer from high memory and computational costs. Approaches that render at low resolution compromise 3D consistency and geometry quality, while sparse representations limit scene diversity. Patch-based training improves image quality but restricts receptive fields. Recent diffusion models address conditional tasks but require multi-view images, which are computationally expensive to obtain. Different geometry representations, such as radiance fields and implicit surfaces, each present trade-offs. Methods for accelerating neural volume rendering are similarly diverse; the proposed scene-conditional proposal network is distinguished by its generalizability across scenes.
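A proposal network sidesteps dense sampling by predicting where along each ray the surface is likely to lie, then drawing only a few fine samples from that predicted distribution. The sketch below shows the generic inverse-transform sampling step that such learned samplers rely on; it is an illustrative approximation of the idea, not the paper's implementation.

```python
import numpy as np

def sample_from_pdf(bin_edges, pdf, n_samples, rng):
    """Inverse-transform sampling of ray depths from a discrete proposal PDF.

    bin_edges: (M+1,) depth-bin boundaries along the ray
    pdf:       (M,)   non-negative proposal weights per bin
    """
    pdf = pdf / pdf.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.uniform(0.0, 1.0, n_samples)
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, len(pdf) - 1)
    # linear interpolation within the selected bin
    denom = np.maximum(cdf[idx + 1] - cdf[idx], 1e-8)
    t = (u - cdf[idx]) / denom
    return bin_edges[idx] + t * (bin_edges[idx + 1] - bin_edges[idx])

rng = np.random.default_rng(0)
edges = np.linspace(2.0, 6.0, 65)                     # 64 coarse depth bins along a ray
pdf = np.exp(-0.5 * ((edges[:-1] - 4.0) / 0.2) ** 2)  # proposal peaked near a surface
depths = sample_from_pdf(edges, pdf, 16, rng)         # 16 fine samples instead of hundreds
```

Because the proposal concentrates probability mass near the surface, a handful of fine samples captures nearly all of the rendering weight that dense uniform sampling would have found.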
A team of researchers at NVIDIA and the University of California, San Diego, has proposed a method for achieving high-fidelity geometry and rendering in 3D GANs. They use an SDF-based NeRF parametrization and employ learning-based samplers to accelerate high-resolution neural rendering. The approach incorporates a low-resolution probe, a high-resolution CNN proposal network, and robust sampling for generating detailed images. Regularizations ensure stable training, and a novel technique filters the predicted probability density functions (PDFs) for improved proposal estimation. The method demonstrates state-of-the-art 3D geometric quality on the FFHQ and AFHQ datasets, establishing a new benchmark for unsupervised learning of 3D shapes with 3D GANs.
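For context on the SDF-based parametrization: frameworks in the VolSDF family convert a signed distance into a volume density that transitions sharply across the zero level set, which is what makes a few well-placed samples sufficient. The sketch below assumes a VolSDF-style Laplace-CDF mapping with illustrative hyperparameters; the paper's exact parametrization and settings may differ.

```python
import numpy as np

def sdf_to_density(sdf, alpha=100.0, beta=0.01):
    """VolSDF-style conversion of signed distance to volume density.

    Negative SDF (inside the surface) maps toward density alpha; the
    Laplace CDF with scale beta controls how sharply density changes
    at the zero level set. alpha and beta here are illustrative
    values, not the paper's settings.
    """
    s = -sdf / beta
    psi = np.where(s <= 0, 0.5 * np.exp(s), 1.0 - 0.5 * np.exp(-s))
    return alpha * psi

# Density climbs from ~0 outside to ~alpha inside across the surface,
# so rendering weights concentrate tightly at the zero crossing; this
# is what pairs naturally with a learned proposal sampler.
print(sdf_to_density(np.array([0.1, 0.01, 0.0, -0.01, -0.1])))
```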
Despite these advances in 3D geometry generation, the proposed method exhibits limitations, such as artifacts (e.g., dents) in the presence of specularities and difficulty handling transparent objects like lenses. Its susceptibility to frontal bias and inaccurate labels, especially in facial side views, points to the need for improved training strategies, potentially drawing on large-scale Internet data and advanced regularization techniques.
The work opens new possibilities for generating high-quality 3D models and synthetic data that capture in-the-wild variations, enabling applications such as conditional view synthesis. To address the limitations above, the team envisions incorporating more advanced material formulations and surface-normal regularization, and recommends exploring more diverse training datasets and sophisticated regularization methods to mitigate the bias observed in facial side views.