Every Thursday at 11:55 AM CET, we publish a link to a short, bite-sized online article, video, or website that we think you should see.
This week's weekly lab link
People love creating 3D models of physical objects from photographs; this technique is called photogrammetry.
Several techniques exist to create 3D scenes from images or video. Most of them work in the same way: they derive the 3D camera position from each image, and then assign depth probabilities to what has been filmed.
Afterwards, a combination of the images, the camera positions, and the depth probabilities is used to reconstruct a 3D representation. Sometimes the depth probabilities are not estimated but measured, using a camera with a depth sensor.
This create-3D-from-video technique is called structure from motion (SfM).
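As a rough illustration of the reconstruction step described above, here is a minimal numpy sketch that triangulates a single 3D point from two views using the direct linear transform (DLT). The camera matrices `P1`/`P2` and the `triangulate` helper are made up for this example; in a real SfM pipeline, those camera matrices would first have to be estimated from the footage itself.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from two 2D observations (DLT).

    P1, P2: 3x4 camera projection matrices (assumed known here).
    x1, x2: (u, v) image coordinates of the same point in each view.
    """
    # Each observation contributes two linear constraints on the 3D point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # back from homogeneous coordinates

# Two toy cameras: one at the world origin, one shifted along the x-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.5, 0.2, 4.0, 1.0])      # ground-truth 3D point
x1 = (P1 @ point)[:2] / (P1 @ point)[2]     # its projection in camera 1
x2 = (P2 @ point)[:2] / (P2 @ point)[2]     # its projection in camera 2

print(triangulate(P1, P2, x1, x2))          # recovers [0.5, 0.2, 4.0]
```

With noise-free projections the DLT recovers the point exactly; with real footage, you would solve this in a least-squares sense over many matched feature points.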
The biggest disadvantage of SfM is that it only works with static scenes: once the filmed object moves, the results quickly become inconsistent. So how was the famous bullet-dodging scene in The Matrix filmed?
In the bullet-dodging scene, time slows down while the camera circles around Neo. As mentioned above, SfM can't be used here because it doesn't support moving scenes.
So how did they do it? With a clever hack: they used a huge array of cameras integrated into a circular green screen and fired all of them at the same time. Afterwards, a computer interpolated between the different camera captures, resulting in a scene where time freezes while the camera keeps moving around.
So in reality, the camera wasn't moving: your view jumps from camera to camera, and intermediate images are morphed/interpolated.
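The camera-to-camera blending can be sketched in a few lines of numpy. This is just a linear cross-dissolve between two toy frames (`cam_a`, `cam_b`, and `interpolate_frames` are hypothetical names for this example); the real effect also warps pixels along feature correspondences before blending, but the linear mix already shows the idea.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, t):
    """Blend two neighbouring camera captures.

    t=0 returns frame_a, t=1 returns frame_b, values in between
    give an intermediate view (a simple cross-dissolve).
    """
    assert 0.0 <= t <= 1.0
    return (1.0 - t) * frame_a + t * frame_b

# Two toy 2x2 grayscale "captures" from adjacent cameras in the rig.
cam_a = np.array([[0.0, 0.0], [1.0, 1.0]])
cam_b = np.array([[1.0, 1.0], [0.0, 0.0]])

# Render five views to sweep from camera A to camera B while time is frozen.
sweep = [interpolate_frames(cam_a, cam_b, t) for t in np.linspace(0, 1, 5)]
print(sweep[2])  # the halfway view: every pixel is 0.5
```

Stringing such in-between views together is what makes the viewpoint appear to glide smoothly through the frozen scene.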
But don't take my word for it: just check the video!
In next week's #WeeklyLabLink, I'll link to a novel, promising technique that lets you recreate 3D images from different points of view at an arbitrary point in time, so the array of cameras would no longer be needed.