
Cloud UAV Photogrammetry Processing

Online photogrammetry processing systems such as Autodesk’s 123D Catch (now part of a whole suite of software covering everything from 3D capture and 3D model creation through to 3D printing) have been available for a number of years on platforms including iOS, Android and Windows. They are, however, aimed at the amateur, with limited settings and low-quality results.

Recent developments in cloud photogrammetry processing can potentially save a great deal of time and money when processing photographs taken on site: these services offer the capabilities of commercial desktop software while producing high-quality results.

Commercial Solutions

DroneMapper is one of a number of ventures aimed at processing UAV (Unmanned Aerial Vehicle) photographs for a range of industries, including archaeology. Rather than using existing photogrammetry solutions, they have developed their own custom in-house photogrammetry software package.

Images, or a RAR archive of images, can be uploaded to their server through a web interface, an FTP (File Transfer Protocol) interface or a Dropbox account. Their current processing costs are between $20 and $100 USD.
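As an illustration, an FTP upload of a folder of site photographs can be scripted. This is a minimal sketch using Python's standard `ftplib`; the hostname, credentials and remote directory are placeholders, not DroneMapper's actual details.

```python
# Hypothetical sketch: uploading a folder of UAV photographs to a
# processing service over FTP. Host, user, password and remote_dir
# are placeholders -- substitute the details your provider gives you.
import os
from ftplib import FTP

def collect_images(folder):
    """Return the JPEG files in a folder, sorted for a predictable upload order."""
    return sorted(f for f in os.listdir(folder)
                  if f.lower().endswith((".jpg", ".jpeg")))

def upload_images(folder, host, user, password, remote_dir="incoming"):
    """Upload every JPEG in `folder` to the service's FTP drop directory."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        for name in collect_images(folder):
            with open(os.path.join(folder, name), "rb") as fh:
                ftp.storbinary("STOR " + name, fh)
```

In practice this could be run from a laptop in the field as soon as a flight is complete, so processing starts before anyone leaves the site.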

The REDCatch company, meanwhile, offers processing of ground-based and object photography as well as UAV photographs, with costs ranging from €290 to over €1,000.

Open Source Solutions

An open source alternative to this is OpenDroneMap. This system uses a number of previously developed SfM (Structure from Motion) tools to automate the processing of photographs into 3D models, orthophotos and Digital Elevation Models (DEMs) for GIS (Geographic Information Systems) applications. Although free for non-commercial purposes, a licence needs to be purchased for commercial use.

OpenDroneMap running on Ubuntu Linux


The code can be downloaded from GitHub and includes detailed written instructions and YouTube videos showing how to install it on Ubuntu Linux. This means it can easily be installed on Ubuntu instances running on many internet hosting services.
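Once installed, a processing run can be driven from a script. The sketch below is a hypothetical wrapper: the script name and the `--project-path` flag are assumptions about the command-line entry point, so check the README of the version you install for the exact invocation.

```python
# Hypothetical sketch of launching an OpenDroneMap run from Python.
# The script path and the --project-path flag are assumptions; consult
# the project's own installation instructions for the real invocation.
import subprocess

def run_odm(project_path, odm_script="./run.sh"):
    """Launch an OpenDroneMap run on a project folder; return the exit code."""
    result = subprocess.run([odm_script, "--project-path", project_path])
    return result.returncode
```

Wrapping the run like this makes it straightforward to trigger processing automatically on a hosted server as soon as an upload completes.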

It uses both CMVS (Clustering Views for Multi-view Stereo) and PMVS (Patch-based Multi-view Stereo), developed by Yasutaka Furukawa and Jean Ponce, as well as Bundler, a Structure from Motion (SfM) package for unordered image collections developed by Noah Snavely.

The VisualSFM GUI, developed by Changchang Wu, combines the same tools into a single computer program.
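The division of labour among these components can be sketched in outline. This is a conceptual summary only, not a runnable interface to the real tools, which are external programs.

```python
# Conceptual outline of how the SfM components above are chained:
# Bundler recovers camera poses and a sparse point cloud, CMVS clusters
# the registered images, and PMVS densifies each cluster. The actual
# tools are separate binaries invoked in this order by pipelines such
# as OpenDroneMap and VisualSFM.
PIPELINE = [
    ("Bundler", "sparse reconstruction: camera poses and sparse points"),
    ("CMVS", "cluster the registered images into manageable subsets"),
    ("PMVS", "dense patch-based multi-view stereo on each cluster"),
]

def pipeline_summary():
    """Return a human-readable summary of the processing stages in order."""
    return ["%d. %s: %s" % (i + 1, tool, role)
            for i, (tool, role) in enumerate(PIPELINE)]
```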

As we can see, there is great potential in the cloud processing of photogrammetry models, whether through commercial companies or open source software. It can remove the need for expensive photogrammetry software and the expertise to use it. Photographs can be uploaded as they are taken, or shortly afterwards, and processing begun before the person recording leaves the site, saving the time it would otherwise take to return to the office and download the photographs.

One current limitation of the open source system is that it is aimed solely at vertical (aerial) mapping, where there is no need to mask the photographs. Masking is, however, required when photogrammetrically recording standing structures and objects, where parts of each photograph need to be masked out in order to get the best results.
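To illustrate what masking means in practice, here is a minimal, dependency-free sketch: pixels flagged False in a mask are blanked out so that feature matching ignores the background. Real masking tools operate on actual image files rather than the nested lists of (R, G, B) tuples used here for simplicity.

```python
# Illustrative sketch of photo masking for close-range photogrammetry.
# Pixels where the mask is False are replaced with a fill colour so the
# SfM feature matcher cannot lock onto background detail. Images are
# plain nested lists of (R, G, B) tuples to keep the example stdlib-only.
def apply_mask(image, mask, fill=(0, 0, 0)):
    """Return a copy of `image` with pixels blacked out where `mask` is False."""
    return [[px if keep else fill
             for px, keep in zip(row, mask_row)]
            for row, mask_row in zip(image, mask)]
```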

Agarwal, Sameer, Noah Snavely, Steven M. Seitz, and Richard Szeliski. “Bundle adjustment in the large.” In Computer Vision–ECCV 2010, pp. 29-42. Springer Berlin Heidelberg, 2010.

Agarwal, Sameer, Yasutaka Furukawa, Noah Snavely, Ian Simon, Brian Curless, Steven M. Seitz, and Richard Szeliski. “Building Rome in a day.” Communications of the ACM 54, no. 10 (2011): 105-112.

Furukawa, Yasutaka, Brian Curless, Steven M. Seitz, and Richard Szeliski. “Towards internet-scale multi-view stereo.” In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 1434-1441. IEEE, 2010.

Furukawa, Yasutaka, and Jean Ponce. “Accurate, dense, and robust multiview stereopsis.” Pattern Analysis and Machine Intelligence, IEEE Transactions on 32, no. 8 (2010): 1362-1376.

Wu, Changchang, Sameer Agarwal, Brian Curless, and Steven M. Seitz. “Multicore bundle adjustment.” In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 3057-3064. IEEE, 2011.

Wu, Changchang. “Towards linear-time incremental structure from motion.” In 3D Vision-3DV 2013, 2013 International Conference on, pp. 127-134. IEEE, 2013.


Microsoft HoloLens

The HoloLens was originally announced on 21 January 2015.

But it was the system’s appearance at the E3 expo in America that demonstrated its abilities, including playing Minecraft in an innovative way.

Although similar in appearance to virtual reality goggles, the system works in a completely different way, projecting virtual components (holograms) into a view of the real world; this is called augmented reality.

It is believed that the HoloLens will cost between £300 and £600, and it is unlikely to be available before 2016.

This system has great potential in a number of areas within archaeology and cultural heritage, including virtual museums, where 3D recreations of artefacts could be displayed next to the surviving fragments, and site tours or visits to sites of historical interest, where a 3D reconstruction of the site could be overlaid on the excavated remains.

Those who have used it have suggested that it has a limited field of view for the immersive elements, and that it is quite expensive.