
Drones for 3D models of dynamic events

Reference number 2018-01756
Coordinator SPACEMETRIC AB
Funding from Vinnova SEK 1 374 644
Project duration May 2018 - June 2019
Status Completed
Venture Drones
Call Drones of the future - Drones for citizens and community

Purpose and goal

The Spacetime project has resulted in software that generates real-time 4D models from two video streams captured by two drones. By 4D model we mean a time-dependent, continuously changing 3D model that includes not only static objects such as buildings and trees, but also moving objects and events such as people, animals, vehicles, water, fire and smoke. The project has also resulted in a provisional patent application, as we believe we are the first in the world to generate 3D models of moving objects from two or more moving platforms (drones).
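Concretely, a 4D model of this kind can be thought of as a time-indexed sequence of 3D models. The sketch below shows one minimal way to represent such a container; the names and structure are assumptions for illustration, not the project's actual data format.

from dataclasses import dataclass
import numpy as np

@dataclass
class Frame3D:
    timestamp: float    # seconds into the recording
    points: np.ndarray  # N x 3 point cloud (static and moving objects)
    colors: np.ndarray  # N x 3 RGB values sampled from the drone videos

# A 4D model: the 3D scene as a function of time.
Model4D = list[Frame3D]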

Expected results and effects

As described under Purpose and goal, the project delivered the Spacetime software for real-time 4D modelling from two-drone video footage, together with a provisional patent application; we believe we are the first in the world to generate 3D models of moving objects from two or more moving platforms (drones). In addition, Spacetime transforms the two videos into a 3D video that can be viewed in stereo with 3D glasses, as illustrated by the sketch below.
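The stereo-video output can be illustrated with a minimal sketch that fuses a synchronised frame pair into a red-cyan anaglyph frame viewable with inexpensive 3D glasses. The actual Spacetime rendering pipeline is not published; the file names and the assumption of pre-synchronised, equally sized frames are illustrative only.

import cv2
import numpy as np

def anaglyph(left_bgr: np.ndarray, right_bgr: np.ndarray) -> np.ndarray:
    """Red channel from the left view, green/blue from the right view."""
    out = right_bgr.copy()
    out[:, :, 2] = left_bgr[:, :, 2]  # OpenCV stores images as BGR
    return out

# Hypothetical usage with two synchronised drone videos:
cap_l = cv2.VideoCapture("drone_left.mp4")   # assumed file names
cap_r = cv2.VideoCapture("drone_right.mp4")
ok_l, frame_l = cap_l.read()
ok_r, frame_r = cap_r.read()
if ok_l and ok_r:
    cv2.imwrite("stereo_frame.png", anaglyph(frame_l, frame_r))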

Planned approach and implementation

Three flights were carried out with two drones recording video. The Spacetime software was then developed: 4D models are calculated in real time, with GPU programming used for video decoding, noise filtering, edge extraction and image pyramid generation. Synchronised image pairs are matched and oriented for viewing with 3D glasses, and one 3D point cloud per image pair is created and displayed in or near real time. While the video plays, a sparse point cloud is computed to determine how each image pair should be transformed for 3D viewing; when the video is paused, a denser point cloud is created for interactive navigation. A sketch of the per-pair step follows below.
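The details of the Spacetime implementation are not published, but a minimal sketch of the per-frame-pair step, producing one sparse 3D point cloud from a synchronised image pair via feature matching, relative orientation and triangulation, might look as follows. The shared camera matrix K and all function names are assumptions for illustration.

import cv2
import numpy as np

def sparse_cloud(img_l: np.ndarray, img_r: np.ndarray, K: np.ndarray):
    """One sparse 3D point cloud (up to scale) from a synchronised pair.

    img_l, img_r: grayscale frames from the two drones at the same instant.
    K: 3x3 camera intrinsic matrix, assumed known and shared here.
    """
    # Detect and match features (ORB, which also uses image pyramids,
    # echoing the pyramid/edge-extraction steps mentioned above).
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(img_l, None)
    kp_r, des_r = orb.detectAndCompute(img_r, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])

    # Relative orientation of the two moving cameras from the matches.
    E, inliers = cv2.findEssentialMat(pts_l, pts_r, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_l, pts_r, K, mask=inliers)

    # Triangulate the inlier matches into 3D points.
    P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_r = K @ np.hstack([R, t])
    good = inliers.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P_l, P_r, pts_l[good].T, pts_r[good].T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 point cloud for this time step

Repeating this for each synchronised frame pair yields the time sequence of point clouds that makes up the 4D model; a denser variant computed on a paused frame would correspond to the interactive-navigation mode described above.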

The project description has been provided by the project members themselves, and the text has not been reviewed by our editors.

Last updated 20 November 2018

