Tom Janssens
Jan 13, 2022 · 2 min read
Updated: Jun 26, 2023
Every Thursday at 11:55 AM CET, we publish a link to a short, bite-sized online article, video or website that we think you should see.
Last week we explained how the makers of the Matrix movie managed to capture a 3D scene from different angles using a huge number of cameras, a big green screen and some clever computing power, thus working around the limitation that classic Structure-from-Motion techniques only support static scenes. Their clever hack required a lot of hardware, so it is out of reach for consumers.
Recently a paper was published that might make it possible in the future to create a similar effect with just a regular smartphone: HyperNeRF.
A NeRF (Neural Radiance Field) is a neural network that can predict what a location in a specific static 3D scene looks like from a certain viewing angle. You can find a good explanation here. Unfortunately this only works on static scenes, so once you have a dynamic scene, NeRFs fail, as do most other Structure-from-Motion techniques.
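To make the core idea concrete, here is a toy sketch of the kind of function a NeRF learns: something that maps a 3D point plus a viewing direction to a colour and a density. This is not the paper's actual architecture; the "network" below is just random weights, purely to show the inputs and outputs involved.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 16))   # 5 inputs: x, y, z, theta, phi
W2 = rng.normal(size=(16, 4))   # 4 outputs: r, g, b, density

def nerf_query(point, view_dir):
    """Toy stand-in for a NeRF: (x, y, z) + viewing angles -> (r, g, b, sigma)."""
    x = np.concatenate([point, view_dir])   # shape (5,)
    h = np.tanh(x @ W1)                     # one tiny hidden layer
    out = h @ W2
    rgb = 1 / (1 + np.exp(-out[:3]))        # colours squashed into [0, 1]
    sigma = np.log1p(np.exp(out[3]))        # non-negative density
    return rgb, sigma

rgb, sigma = nerf_query(np.array([0.1, 0.2, 0.3]), np.array([0.0, 1.0]))
```

A real NeRF trains these weights from photos of the scene and then renders novel views by querying many such points along each camera ray; the point here is only the shape of the mapping: a fixed scene baked into a function of position and viewing angle.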
HyperNeRF found a way to work around this by adding two extra dimensions that also capture the movement in the scene. (Hyperspace is a term for a space with more than three dimensions, hence the name HyperNeRF.)
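Continuing the toy sketch above (again with random weights, not the paper's real model), the change is that each query point is lifted into a higher-dimensional "hyperspace" position before lookup: the same 3D point queried with different ambient coordinates can now look different, which is how movement gets represented.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(7, 16))   # 7 inputs: x, y, z, theta, phi, w1, w2
W2 = rng.normal(size=(16, 4))   # 4 outputs: r, g, b, density

def hypernerf_query(point, view_dir, ambient):
    """Toy HyperNeRF-style query: the 3D point plus two extra 'hyperspace'
    coordinates (w1, w2) that stand in for the state of the motion."""
    x = np.concatenate([point, view_dir, ambient])  # shape (7,)
    h = np.tanh(x @ W1)
    out = h @ W2
    rgb = 1 / (1 + np.exp(-out[:3]))
    sigma = np.log1p(np.exp(out[3]))
    return rgb, sigma

# Same point, same camera, two different ambient coordinates:
# two different moments of the movement.
point = np.array([0.1, 0.2, 0.3])
view = np.array([0.0, 1.0])
rgb_a, _ = hypernerf_query(point, view, np.array([0.0, 0.0]))
rgb_b, _ = hypernerf_query(point, view, np.array([0.5, -0.5]))
```

Because only the two extra coordinates change between the two queries, any difference in the returned colour comes from the "hyperspace" dimensions, which is the trick that lets one static-style network cover a dynamic scene.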
While this technique certainly has its limits today, it looks quite promising for the future: imagine having animated 3D scans available at the click of a button on your phone. You could watch a football match from whatever viewpoint you prefer, or watch a movie from another angle...
Make sure to check out the paper's website as well: it makes the explanation accessible and has an interactive demo that shows you what the two extra dimensions in hyperspace represent.