Readers are probably familiar with photogrammetry, a method of creating 3D geometry from a series of 2D photos of an object or scene. You need a lot of pictures to pull it off, hundreds or even thousands, all taken from slightly different perspectives. Unfortunately, the technique struggles where elements overlap and occlude one another, and glossy or reflective surfaces that change appearance from photo to photo can also cause problems.
But NVIDIA’s new research marries artificial intelligence to photogrammetry in what its developers call an instant neural radiance field (NeRF). According to NVIDIA, not only does the process require far fewer images, but the AI also handles the pain points of traditional photogrammetry better: it fills in the blanks and uses reflections to create more realistic 3D scenes that reconstruct how glossy elements looked in their original environment.
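To give a sense of what a radiance field is doing under the hood: instead of stitching geometry from photos directly, a NeRF learns a function that maps a point in space to a color and a density, and renders a pixel by compositing samples along a camera ray. The sketch below shows just that compositing step with made-up sample values; a real NeRF would query a trained neural network for the colors and densities, and the function and variable names here are illustrative, not NVIDIA's actual API.

```python
import numpy as np

# Minimal sketch of NeRF-style volume rendering along a single ray.
# A real NeRF queries a neural network for (color, density) at each
# sample point; here we use hand-picked values to show only the math.

def composite(colors, densities, deltas):
    """Alpha-composite samples along a ray (the NeRF rendering equation).

    colors:    (N, 3) RGB color at each sample point
    densities: (N,)   volume density (sigma) at each sample point
    deltas:    (N,)   distance between consecutive samples
    """
    # Opacity contributed by each sample segment.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Per-sample contribution weights, then the final pixel color.
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# Example ray: empty space for two samples, then a dense red surface.
colors = np.array([[1.0, 0.0, 0.0]] * 4)
densities = np.array([0.0, 0.0, 5.0, 5.0])
deltas = np.full(4, 0.5)
pixel = composite(colors, densities, deltas)  # mostly red, as expected
```

Because empty regions contribute zero opacity, the dense samples dominate the pixel, which is roughly how a NeRF can "see through" gaps and still render a coherent surface from sparse coverage.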
If you’ve got a CUDA-compatible NVIDIA graphics card in your machine, you might want to give the technique a shot right now. The post-break tutorial video will take you through the setup and some basic topics, showing how a 3D reconstruction can be gradually refined in just a few minutes and then explored like a scene in a game engine. The Instant NeRF tools include camera-path keyframing for exporting animations at higher quality than the real-time preview. The technique seems better suited to on-screen viewing and animation than to producing models for 3D printing, although both are possible.
Don’t have the latest and greatest NVIDIA silicon? Don’t worry, you can still create some impressive 3D scans using “old school” photogrammetry – all you need is a camera and a motorized turntable.