Tag Archives: drone

Using existing mapping data to control UAV mapping flights – Part 1 – Preliminary Ideas and Experimentation

An intrinsic problem with photogrammetry is its requirement to keep the camera facing the subject matter. A much higher quality and more accurate 3D model is produced when the camera is kept perpendicular to the subject than when photographs are taken at an oblique angle. This is especially true of buildings with flat facades (as has already been discussed in another blog).

Work has been done on using computer vision to automate control of the camera position so that it follows targets selected by the pilot. Although this has potential for some recording methods, such as site tours (discussed in another blog), it does not aid in the recording of complex topography or architecture. There is, however, potential for recording architectural elements using computer vision technologies; this will be discussed in a later blog.

Other work is being done on using a low-detail 3D model of a building to aid in controlling a UAV flying around it, but this is aimed more at collision avoidance than at quality recording.

In the future I also plan to look at the potential of pre-scanning a building with an aerial LiDAR scanner mounted on a drone before recording with a UAV.

Potential solution

The camera gimbal of a UAV can be controlled both remotely and from the UAV's autopilot, which could be used to keep the camera always facing the subject matter; without pertinent information, however, this would have to be done manually. With wireless camera technology it is possible to remotely view what the camera is recording and control the movement of the gimbal as required, but this needs a second person to operate the camera while the UAV is being flown, which would be difficult to implement effectively and costly in a commercial environment.

It would, however, seem possible to use existing 3D data of an area to control the flight of a UAV, governing both its altitude and the angle of the camera gimbal. I have already discussed the use of DroneKit Python to create a UAV mapping flight; this can also be used to control the angle of the camera gimbal.
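As a first sketch of that idea, the mapping from LiDAR-derived slope to gimbal angle is simple arithmetic. The function below uses the gimbal convention discussed later in this post (0 degrees = forward, 90 degrees = straight down); note that DroneKit's own `Gimbal.rotate()` uses a different convention (0 = level, -90 = straight down), so a sign change would be needed there.

```python
def gimbal_angle_from_slope(slope_deg):
    """Gimbal angle (this post's convention: 0 = forward, 90 = straight
    down) that keeps the camera roughly perpendicular to terrain of a
    given slope (0 = flat ground, 90 = vertical face)."""
    if not 0 <= slope_deg <= 90:
        raise ValueError("slope must be between 0 and 90 degrees")
    # Flat ground (slope 0) needs a straight-down camera (90);
    # a vertical facade (slope 90) needs a forward-facing camera (0).
    return 90 - slope_deg
```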

Existing Data

There are a number of existing sources of data that can be used to aid in creating a mapping flight.

Within the UK, LiDAR data is freely available at different spatial resolutions: much of the country is covered down to 1 m, while some areas are available down to 0.25 m.

Processed in GIS (Geographic Information System) software, this resource provides all of the information required to create a flight path over the area under study and to control the angle of the camera gimbal so that the area is recorded to a higher quality than before.

A digital elevation model (DEM) created using photogrammetry from existing overlapping aerial photographs can also be employed once it is georeferenced to its correct location. This resource may provide a higher spatial resolution than the LiDAR data, and so be a better basis for the creation of the flight path, but the landscape and structures may have changed since the photographs were taken, causing problems (this can, of course, be a problem with the LiDAR data as well).

Co-ordinate system problems

One complication with using LiDAR data to control the UAV is that it is in a different co-ordinate system from the GPS of the UAV (OSGB rather than WGS84). This can be solved by translating one set of data into the co-ordinate system of the other. As the number of points in the mission path will be far smaller than the number of LiDAR points, it makes sense to convert the GPS data to OSGB; but this also requires converting it back after the flight path has been created, adding a certain amount of inaccuracy, as a conversion is never 100% accurate.
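In Python the conversion can be handled by the pyproj library (assuming it is installed): EPSG:4326 is WGS84 and EPSG:27700 is the OSGB British National Grid. A round trip illustrates the point about conversion accuracy:

```python
from pyproj import Transformer

# WGS84 (EPSG:4326) -> British National Grid / OSGB (EPSG:27700) and back.
to_osgb = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)
to_wgs84 = Transformer.from_crs("EPSG:27700", "EPSG:4326", always_xy=True)

lon, lat = -1.47, 51.21  # an illustrative point in southern England
easting, northing = to_osgb.transform(lon, lat)
lon2, lat2 = to_wgs84.transform(easting, northing)

# The round trip reproduces the input to well below GPS accuracy, though
# the underlying datum shift is itself only accurate to centimetre level.
assert abs(lon - lon2) < 1e-6 and abs(lat - lat2) < 1e-6
```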

Required Data

Three different pieces of data need to be derived from the LiDAR data which are required for the UAV mapping flight:

  • Altitude.
  • Slope.
  • Aspect.

The Altitude is contained within each point of the LiDAR data and is used when displaying the data in GIS software.

The Slope of the topography/buildings is measured in increments up to 90 degrees, with 0 degrees being flat ground and 90 degrees a vertical face.

The Aspect is the compass direction in which a slope faces, measured from 1 to 360 degrees.
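All three values can be derived directly from a DEM grid. As an illustration (not the GIS workflow described below), slope and aspect can be computed with NumPy, assuming a grid whose first axis increases northwards and square cells in metres:

```python
import numpy as np

def slope_aspect(dem, cell_size):
    """Slope (deg, 0 flat .. 90 vertical) and aspect (deg clockwise from
    north) from a DEM. Assumes axis 0 increases northwards and axis 1
    increases eastwards, with square cells of `cell_size` metres."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)  # gradients along north, east
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Aspect is the compass direction the surface faces (downslope
    # direction); undefined on flat ground, where atan2 returns 0.
    aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360
    return slope, aspect

# A plane rising 1 m per metre to the north: 45 degree slope facing south.
y = np.arange(5.0)
dem = np.tile(y[:, None], (1, 5))
s, a = slope_aspect(dem, 1.0)
```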

 


Slope angles

Although it would be possible to create software that extracts the data from the LiDAR file while creating a flight path, this is not currently an option. The flight path is instead created in a piece of software such as the open-source 'Mission Planner' system, in which an area is chosen together with other variables and an optimal flight path is generated. This flight path file can then be saved; it contains the X and Y co-ordinates of each point of the mission.

UAV Control

At its simplest the flight path can be created with the altitude and slope derived from the LiDAR being used to control both the UAV altitude and camera gimbal angle. This would work well for sloping topography but would be more complicated for areas with sharp breaks in slope (such as buildings).

Altitude Control

The altitude will need to be carefully controlled to make sure that the quality of the imagery is consistent across the whole area under study. At its simplest this is easy to do using the altitude values within the LiDAR data, together with obstacle-avoidance sensors to aid safety.

A problem arises when recording something that is near-vertical or completely vertical: rather than maintaining a set altitude, the UAV needs to maintain a set horizontal distance. This may be possible by creating a buffer in the data around steeply sloping areas.


Problem with vertical offset

Camera Gimbal Control

Most low-cost UAV systems come with a 2-axis gimbal: the camera is stabilised so that it always stays in a horizontal plane, and its rotation downwards can also be controlled.

Gimbal angle

The angle of the gimbal runs from 0 degrees for a forward-pointing position to 90 degrees for a downward-facing position. This is how it is controlled within DroneKit.

As seen earlier, the slope is likewise calculated between 0 and 90 degrees.

There are two intrinsic problems with this method:

  1. The slope only runs between 0 and 90 degrees, so there is no aspect information within it. If the drone camera is to be controlled to record a building as it flies over, it needs to know which way the building is facing, as a 45-degree slope on the left is not the same as a 45-degree slope on the right. This can be solved by combining the slope and aspect information to give more detailed resulting data.
  2. Most standard gimbals are designed to only point forwards and downwards. This means that the UAV has to turn around to record the back of the building, or it needs to fly the path in reverse. The other solution is to use a UAV with a camera that can point through 360 degrees.
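Combining the two ideas, a small helper can decide both the gimbal angle and whether a 180-degree yaw is needed, given the slope, the aspect and the current flight heading. This is a sketch of the logic only; the angle conventions follow this post (gimbal 0 = forward, 90 = down; aspect clockwise from north):

```python
def camera_plan(slope_deg, aspect_deg, uav_heading_deg):
    """Return (gimbal_angle, turn_180) for one flight-path point.

    turn_180 is True when the surface faces away from the current flight
    heading (e.g. the far pitch of a roof), so a forward-only gimbal
    cannot see it without yawing the UAV around.
    """
    # A surface is visible from the front when its aspect points back
    # towards the oncoming UAV, i.e. differs from the heading by ~180 deg.
    rel = (aspect_deg - uav_heading_deg + 180) % 360 - 180  # -180..180
    facing_camera = abs(rel) >= 90
    gimbal = 90 - slope_deg  # flat -> 90 (down), vertical face -> 0
    return gimbal, not facing_camera
```

For example, flying north (heading 0) over a 45-degree south-facing roof needs no turn, while the north-facing pitch of the same roof does.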

GIS Processing

A certain amount of processing is required within GIS software to derive the required data from the LiDAR data and combine it with the required mapping flight path. For this ArcGIS has been used, both due to its availability at university and my own familiarity with it.

LiDAR

As the LiDAR data covers a specific square, it makes sense to use raster data rather than the points and lines of vector data, as this retains the accuracy of the data. The LiDAR data can simply be loaded into the GIS software as a raster.

Within GIS software the Aspect and Slope can be calculated and a raster created showing the results.

This can be done using the Spatial Analyst or 3D Analyst Toolboxes to provide Slope and Aspect rasters.

The data for these can be incorporated into attribute tables, which can be exported as text files. It is possible to combine all of the data into one attribute table containing the altitude, slope and aspect.

Although it is possible to export this whole raster file, including all of the data, it is not currently possible to automatically derive the data in software using the flight path, so a flight path has to be loaded into the GIS software.

Flight Path

The flight path file created in the Mission Planner software needs to be loaded into a feature class in the GIS software. This is done by loading the point data for the flight path, which consists of the beginning and end points of each of the back-and-forth passes across the area to be recorded.

We next need to recreate the flight path using 'Point to Line'.

Even though we have recreated the lines, deriving enough data from them is not possible, as the flight path is designed to fly back and forth at a set altitude. For this reason we need to create a number of extra points. This can be done using 'Construct Points', which creates points at set intervals along a line. The interval can be linked to the resolution of the data being used, so for this LiDAR data the points can be set at intervals of 1 m.
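Outside of GIS software the same densification can be done in a few lines of Python. This sketch mimics 'Construct Points' by interpolating evenly spaced vertices along each leg of the path (co-ordinates assumed to be in metres, e.g. OSGB):

```python
import math

def densify(points, spacing=1.0):
    """Insert vertices along a polyline at roughly a fixed spacing
    (same units as the co-ordinates), keeping the original vertices.

    points: list of (x, y) flight-path vertices, in order.
    """
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, int(round(seg / spacing)))
        for i in range(1, steps + 1):
            t = i / steps  # fraction of the way along this segment
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out
```

A 10 m leg at 1 m spacing yields 11 points including both ends.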

Once this has been done, 'Extract Multi Values to Points' can be run on the three sets of data to create a table containing all of the required data for each point on the flight path we have created.
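The equivalent of 'Extract Multi Values to Points' is a nearest-cell lookup. A minimal version, assuming the rasters share one grid with row 0 at the northern edge, might look like this:

```python
import numpy as np

def sample_rasters(points, rasters, origin, cell_size):
    """Nearest-cell lookup of raster values at each flight-path point.

    points: list of (x, y) in the rasters' co-ordinate system (e.g. OSGB).
    rasters: dict name -> 2D array on the same grid; row 0 is the
             northern edge (the usual raster convention).
    origin: (x, y) of the grid's top-left corner; cells are square.
    Returns one dict of sampled values per point.
    """
    x0, y0 = origin
    table = []
    for x, y in points:
        col = int((x - x0) / cell_size)
        row = int((y0 - y) / cell_size)  # y decreases down the rows
        table.append({name: float(grid[row, col])
                      for name, grid in rasters.items()})
    return table
```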

UAV Mission Creation

Now that we have all of the required input data for the UAV mapping flight we need to create the mission within Dronekit Python.

For the first level of experimentation we can simply load the point data file into Python, then create a number of points for the UAV to fly to, each giving the X and Y co-ordinates and the required altitude. At the same time we can also program in the angle for the camera gimbal. It may be best to have the UAV hover at each position for a second or two so that we know how the recording is going.
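Before involving DroneKit itself, the table can be turned into waypoint tuples. The field names below are assumptions about how the exported table is structured, and a real mission would also need to reconcile the LiDAR ground heights with the autopilot's altitude reference:

```python
def build_waypoints(rows, agl=20.0):
    """Turn the GIS table into (lat, lon, target_alt, gimbal_angle)
    tuples ready to feed to DroneKit's goto/gimbal commands.

    rows: dicts with 'lat', 'lon', 'altitude' (LiDAR ground height, m)
          and 'slope' (deg). `agl` is the desired height above ground.
    Gimbal angles use this post's convention (0 forward, 90 down).
    Illustrative only: the field names are assumptions.
    """
    waypoints = []
    for r in rows:
        target_alt = r["altitude"] + agl  # follow the terrain surface
        gimbal = 90 - r["slope"]          # flat -> 90 (down), wall -> 0
        waypoints.append((r["lat"], r["lon"], target_alt, gimbal))
    return waypoints
```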

As already mentioned, if we are only using a 2-axis gimbal we are going to have to have the UAV turn through 180 degrees to record the backs of buildings and slopes facing away from the camera. We should be able to do this by altering the UAV yaw. The Python script will need to read the aspect angle and change how it creates the flight path depending on the aspect of the slope/building.

Future Directions

ArcGIS allows the use of Python to run the tools in its toolboxes, so it seems possible to create a Python script which would automatically create a file with all of the information required from input files of LiDAR data and a flight path.
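A sketch of such a script is below. It assumes an ArcGIS installation with the Spatial Analyst extension licensed; the workspace path and file names are placeholders:

```python
# Requires ArcGIS with the Spatial Analyst extension; tool and parameter
# names follow arcpy conventions but this sketch is untested.
import arcpy
from arcpy.sa import Slope, Aspect, ExtractMultiValuesToPoints

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\uav"  # assumed workspace

dem = "lidar_dtm.tif"                 # the LiDAR-derived DEM raster

# Derive the slope (in degrees) and aspect rasters from the DEM
Slope(dem, "DEGREE").save("slope.tif")
Aspect(dem).save("aspect.tif")

# Sample all three rasters at the densified flight-path points,
# writing one attribute field per raster
ExtractMultiValuesToPoints("flight_points.shp",
                           [[dem, "altitude"],
                            ["slope.tif", "slope"],
                            ["aspect.tif", "aspect"]])
```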

As QGIS also allows the use of Python, it would seem possible to create the required file within this open-source solution as well.

 


UAV Building Facade Recording – Part 1 – Preliminary Ideas and Experimentation

The recording of buildings is an important area in Cultural Heritage, whether for condition surveys or to record something that is about to be destroyed.

Traditional methods rely upon survey equipment such as Total Stations to take a number of points on the façade, but this results in only points and lines with no great surface detail.

Other, more detailed survey techniques such as laser scanning and photogrammetry have also been employed. But laser scanning is expensive, and both techniques are generally ground-based, missing detail of the façade that is not visible from that position. Scaffolding or a cherry picker can be used to record the whole of the building, but again this adds to the cost of the recording.

Photogrammetry is a low-cost method of producing high-quality results, but it relies upon having the camera parallel to the building to produce the best results, as capturing photographs from an angle introduces inaccuracies into the recording, as well as leaving more detail at the bottom of the resulting 3D model than at the top.

The UAV would seem to provide an ideal platform to carry a camera parallel to the building, recording photographs with the required photogrammetric overlap. With its autopilot it would seem possible to automate the recording process, allowing the mapping of the façade in the same way that the UAV can map the ground.

There are of course a number of problems that need to be overcome.

Building Façade Recording

Manual

Building façade recording can be done manually with a UAV, but the larger and more complicated the building façade, the more difficult it is to do this accurately, as the pilot needs to control the UAV precisely in three dimensions as well as controlling its speed.

Although the results for an experimental UAV mission are acceptable the difficulty of maintaining a manual position can be seen in the image below.

Automatic

In order to automate the process we need to determine what parameters are required to record a building façade using photogrammetry.

These can be seen below.


Building facade recording parameters

First experimentation was done by taking the co-ordinates of the two ends of an example wall from Google Earth (the south-facing wall of the lay brothers' quarters at Waverley Abbey in Surrey was used). These co-ordinates can be used to determine the bearing that the wall lies upon and its width. Using the camera parameters and the required level of detail, the necessary distance from the wall for the flight can be calculated using trigonometry. Trigonometry is used again to calculate the offset positions for the left and right extents of the flight.
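The same calculations are easy to reproduce in Python. The wall bearing and length follow from standard great-circle formulas, and the stand-off distance from the pinhole-camera relation between ground sample distance (GSD), focal length and sensor size. The example numbers are illustrative, not the Waverley Abbey values:

```python
import math

def bearing_and_length(lat1, lon1, lat2, lon2):
    """Initial bearing (deg from north) and length (m) of a wall defined
    by the WGS84 co-ordinates of its two ends."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Initial great-circle bearing
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360
    # Haversine distance (mean Earth radius 6371 km)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    length = 2 * 6371000 * math.asin(math.sqrt(a))
    return bearing, length

def standoff_distance(gsd_m, focal_mm, image_width_px, sensor_width_mm):
    """Camera-to-wall distance giving the requested ground sample
    distance, from the pinhole-camera similar-triangles relation."""
    return gsd_m * focal_mm * image_width_px / sensor_width_mm
```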

 

The image overlap can be used to determine the number of photographs required in the horizontal and vertical, and hence the change of altitude required for each flight pass of the building.
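As a sketch, the number of passes and the altitude step follow from the vertical image footprint at the working distance and the chosen overlap fraction (camera numbers here are illustrative):

```python
import math

def flight_passes(wall_height_m, distance_m, focal_mm, sensor_height_mm,
                  overlap=0.8):
    """Number of horizontal passes and the altitude step between them.

    The vertical footprint of one photo at the working distance follows
    from similar triangles; consecutive passes are offset so images
    overlap by the photogrammetric fraction `overlap`.
    """
    footprint = distance_m * sensor_height_mm / focal_mm
    step = footprint * (1 - overlap)
    passes = max(1, math.ceil(wall_height_m / step))
    return passes, step
```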


Calculate altitude

Although it is planned to have the ability for the UAV to hover and take photographs, it is much easier to have it take photographs as it flies across the building façade. This requires the additional calculation and control of optimum flight speed and shutter speed to take photographs which are not adversely affected by motion blur.
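The shutter-speed constraint reduces to: the distance the UAV moves during the exposure must stay below some fraction of one pixel's ground sample distance. A minimal version:

```python
def max_shutter_time(speed_mps, gsd_m, blur_fraction=0.5):
    """Longest exposure (s) that keeps motion blur below a fraction of
    one pixel's ground sample distance while flying past the facade."""
    return blur_fraction * gsd_m / speed_mps

# e.g. 2 m/s at 1 cm GSD, allowing half a pixel of blur:
# 0.5 * 0.01 / 2 = 0.0025 s, i.e. a shutter speed of 1/400 s or faster
```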

Shutter speed formula

Shutter speed calculations

These preliminary calculations were done in Microsoft Excel.

DroneKit

The drone manufacturer 3DR provides a series of software development kits (SDKs) for writing applications to control your UAV using one of the open-source autopilot systems they support.

DroneKit Python uses the Python programming language and provides a number of examples to help with programming the flight of a UAV; these include flying from co-ordinate to co-ordinate up to complete missions. Together with this there is an API (application program interface) reference which provides all of the Python commands that can be used to control the UAV.

Python

Python is a fairly easy-to-learn programming language, and as DroneKit already requires it to be installed and set up, it makes sense to use the same language to calculate the required parameters for the flight path. This was done with the aid of a number of online resources. A graphical user interface (GUI) was created using the Tkinter Python package and used to enter the data. The Python code performs the calculations, then exports a file which combines these calculations with the DroneKit code for controlling the autopilot. When run, the final file will control the UAV flight.
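A minimal sketch of such a Tkinter input form is below; the field names are illustrative rather than the ones used in the actual script:

```python
import tkinter as tk

def build_form(root, fields):
    """Create one labelled Entry per parameter and return the Entry
    widgets keyed by field name."""
    entries = {}
    for row, name in enumerate(fields):
        tk.Label(root, text=name).grid(row=row, column=0, sticky="w")
        entry = tk.Entry(root)
        entry.grid(row=row, column=1)
        entries[name] = entry
    return entries

if __name__ == "__main__":  # only runs where a display is available
    root = tk.Tk()
    root.title("Facade flight parameters")
    build_form(root, ["Wall end 1 (lat,lon)", "Wall end 2 (lat,lon)",
                      "Focal length (mm)", "Sensor width (mm)",
                      "Required GSD (m)"])
    root.mainloop()
```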


Python GUI

Virtual Drone

Experimentation doesn’t need to be done with a live UAV; it can be done with a virtual one using a number of pieces of open-source software, including Mission Planner, ArduCopter, MAVProxy and SITL (Software In The Loop).
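A virtual test environment can be assembled roughly as follows (package names as published on PyPI; the home co-ordinates are the example values used in the DroneKit documentation):

```shell
# Install the simulator and supporting tools
pip install dronekit dronekit-sitl MAVProxy

# Start a simulated copter; SITL listens on TCP port 5760 by default
dronekit-sitl copter --home=-35.363261,149.165230,584,353 &

# Bridge it with MAVProxy so a ground station and a DroneKit script
# can connect at the same time on the two UDP outputs
mavproxy.py --master tcp:127.0.0.1:5760 --out 127.0.0.1:14550 --out 127.0.0.1:14551
```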


Virtual Drone

Next Steps

The next step is experimentation with a UAV using this hardware and software, to test whether GPS can be used in close proximity to a structure.

The limited accuracy of a standard UAV GPS, which is only within a range of metres, also complicates this method of controlling the flight. This either needs to be solved with a more accurate GPS (although proximity to the building may block the signal), sensors that measure distances, or the use of computer vision technologies to control the UAV position. After all, the UAV currently only needs to fly between two set points at set altitudes above the ground.

DJI Phantom 4

The DJI Phantom 4 is the new model in the popular Phantom range of quadcopters; it has a number of improvements over previous models.


DJI Phantom 4

Comparison of DJI Phantom 4 and 3

  • Model: Phantom 4 Aircraft vs Phantom 3 Professional or Advanced Aircraft
  • Battery: 4S 15.2V 5350mAh Intelligent Flight Battery vs 4S 15.2V 4480mAh Intelligent Flight Battery
  • Max Flight Time: 28 mins vs about 23 mins
  • Vision Positioning System: 10m vs 3m
  • Obstacle Sensing System: Optical Sensor, 0.7–15m vs N/A
  • Intelligent Flight Modes: Follow Me, Point of Interest, Waypoints, Course Lock, Home Lock, ActiveTrack and TapFly vs Follow Me, Point of Interest, Waypoints, Course Lock and Home Lock

Using the TapFly mode you can tap a position on the screen in the app to fly to that location.

One of its main improvements is the introduction of obstacle avoidance technology (Sense and Avoid) using cameras mounted above the legs on the front of the Phantom 4.

DJI Phantom 4 – Obstacle Avoidance

The system, and the subsequent technologies, rely on a companion computer within the drone, attached to the various sensors, which uses computer vision algorithms to detect obstacles in the drone’s path. Once it has detected an obstacle it will either hover or fly around it.


DJI Phantom 4 – Companion Computer

It also comes with an improved Vision Positioning System, for position hold without the aid of GPS, which raises the positioning altitude from 3m to up to 10m.


DJI Phantom 4 – Vision Positioning

A final important new technology is ActiveTrack: a subject can be selected in the app and, once again using computer vision technologies, the Phantom 4 will follow it even when it turns.


DJI Phantom 4 – Active Track

The DJI Phantom 4 is available for £1,229.00 and will be on general release from the 23rd of March. As such it will be the first commercially available drone with obstacle avoidance technology.

Benefits

The Phantom 4 provides a number of cutting edge technologies on a low cost platform. The benefit of ActiveTrack has already been discussed in a previous blog – UAVs for site tour recording – Part 1 – Theory while the potential of the sense and avoid and vision positioning system technologies will be discussed in a future blog on building recording.

Drawbacks

The main drawback of the system is that the camera is not of the same quality as the Zenmuse X5, which is available for the DJI Inspire 1 Pro/Raw. But even that camera does not match the specifications of many standard DSLR or mirrorless cameras, providing only 16MP.

3D Printing and the UAV

3D printing provides a cheap method of creating objects from 3D computer model files. This, together with recent developments in the field, has great potential for the future of the Unmanned Aerial Vehicle (UAV) industry.

3D Printing Parts
Many ready-built UAVs can be purchased off the shelf, configured to work with a number of different cameras, but DIY systems can require parts that are not available from traditional sources. This is where the Maker community comes in, whether providing 3D models on sites such as Thingiverse to be 3D printed yourself or at one of a number of 3D printing shops, or providing ready-printed objects over the internet.

Among the objects that are useful to be 3D printed for UAVs are camera specific mounts and mounts for radio antennas. Complete 3D printed UAV frames are also possible.

Recent Developments
Until recently the materials that could be printed were limited, essentially to thermoplastics and UV resins suitable for the UAV body. Recent developments have allowed the printing of everything from metal to human tissue, organs and even food, opening up whole new potential areas of use.

One example is the research by Dr. Jennifer A. Lewis, a founder of the Voxel8 company and Harvard University professor, which has led to 3D printers being able to print circuits such as the Voxel8 3D Printer developed by her company.


Conductive Ink Printing

The Voxel8 3D Printer ships towards the end of the year.

Future developments planned by Voxel8 include the development of inks that are capable of printing resistors, sensors and even lithium ion battery cells.

In collaboration with Autodesk they have developed the Project Wire software which allows everything from design through to machine control of electrical circuits.


Autodesk Project Wire

Potential
3D Printers have already been used for printing UAV components, but these recent developments open up the possibility of 3D printing almost complete UAVs in the near future. This would allow for UAVs specific to a task to be designed and printed on demand without the requirement of expensive manufacturing practices.

It would also link in with the idea of drones owning themselves, discussed in a previous blog, with the drones being able to print replacement or upgrade components straight from a 3D printer.

Drones Owning Themselves

On BBC Radio 4’s programme FutureProofing on the 16th of September, the software developer Mike Hearn discussed the potential of cars and drones owning themselves. His ideas build upon collaborative open ventures such as Apache and Linux.

Cars
The cars could act as autonomous one-machine businesses which would charge people for rides; from the profits they could buy fuel, repair themselves and even buy upgrades. They would begin as a new car from a factory but would then become self-sustaining and self-financing, with even the ability to purchase a new upgraded car from the factory. Hearn suggests that if autonomous vehicles owned themselves they would provide cheaper fares than those owned by major corporations.

This links in with the ambitious plans in the Finnish capital Helsinki to provide a ‘comprehensive, point-to-point “mobility on demand” system’ allowing people to purchase transportation options directly from a phone app, linking in with the availability of everything from driverless cars and taxis to buses, bikes and ferries on the required route. This could potentially do away with the requirement of car ownership within the city by 2025 by beating it on cost and convenience.

More details of the car aspect of the idea can be found in an article at the BBC News Website.

Drones
He also discussed the potential for the development of delivery services for packages and parcels by drones, where the drones would sell their services to people or companies and use the money earned to maintain themselves. If demand in an area was reduced they could move to an area with more demand.

Amazon are currently developing their Amazon Prime Air drone package delivery service.


While the first drone delivery took place in the USA in July.

Potential
With the growing importance of drones within archaeological recording, this has great potential to make it easier and cheaper for companies to employ this kind of technology without the significant outlay currently required. Drones with a number of different recording packages could be set up in useful locations around the country. They could then be employed by a company or individual for a purpose and transported to the site by the individual who pays for their services. The drone would be paid, and would use the finances to pay for charging, repairs and upgrades.

This could obviously go one step further with the introduction of driverless vehicles, with the drone being based in a vehicle which deploys it to the required location and contains the software required for processing the recorded data as well as its control interface.

Limitations
The ideas of Mike Hearn are only a “thought experiment” and he is not involved in their development, although he is closely involved with the Bitcoin virtual currency which could be used as a method for the drone to pay for itself.

Autonomous Systems Launch Event – University of Southampton

On the 20th of March I was present at the Launch Event for the Autonomous Systems USRG (University Strategic Research Group) at the University of Southampton.

It included a number of 3-minute presentations, some of which were very pertinent to the autonomous recording of Archaeology and Cultural Heritage.

Control and Implementation of Autonomous Quadcopter in GPS-denied Environments – Chang Liu

Chang Liu is a PhD student in the Department of Engineering.

He has been working on using optical flow technology and ultrasonic sensors to control the velocity of UAVs with his own autopilot system in environments where GPS cannot be used.

He is currently perfecting a system using a monocular uEye camera and a quad-core Linux ODROID computer running ROS (Robot Operating System), using SLAM (Simultaneous Localization And Mapping) algorithms to enable the single camera to identify natural features and act as a more advanced position-hold technology.


Chang Liu’s Autonomous Quadcopter


Chang Liu’s Analysis Software

 

Autonomous UAVs for Search and Rescue and Disaster Response

Dr. Luke Teacy of the Department of Electronics and Computer Science (ECS) discussed the use of autonomous UAVs in search and rescue and disaster response through co-ordination between multiple platforms. Their low cost and ease of deployment make them ideal for the purpose. A camera-equipped UAV can search for someone in the wilderness using computer vision to spot the person. He is using observation modelling to see how the view of a person is affected by distance and how to maximise the information required to find them. He also discussed the issues of how to control UAVs and allocate them to tasks using the Monte Carlo tree search algorithm, as well as path planning.

Human Agent Collaboration for Multi-UAV Coordination

Dr. Sarvapali Ramchurn of the Department of Electronics and Computer Science (ECS) discussed the MOSAIC (Multiagent Collectives for Sensing Autonomy Intelligence and Control) Project. His work involves the allocation of tasks to UAVs, which may have different capabilities, by a human agent. Once a task is set, the nearest drone moves to complete it. Teaming the UAVs up to accomplish tasks maximises efficiency: if a UAV fails a new one takes its place, and when new tasks are allocated a UAV is reallocated to them. He discussed using the Max-Sum algorithm to coordinate tasks between the UAVs autonomously.

An intelligent, heuristic path planner for multiple agent unmanned air systems

Chris Crispin is a PhD student in Complex Systems Simulation and part of the ASTRA environmental monitoring project, which involves a group of unmanned vehicles co-ordinating with each other in mapping. Feasible optimal flight paths are designated and searched along. Once the UAVs begin flying, areas are assigned a level of uncertainty by a central computer, which determines whether a UAV is sent to them: the higher the uncertainty, the more likely a UAV will be dispatched to map the area.

The UAVs use an ODROID-C1 quad-core Linux computer with a PX4 autopilot, while the control computer is an ODROID-XU3. The system uses the JSBSim open-source flight dynamics model (FDM).

https://sotonastra.wordpress.com/

Archaeology and autonomous systems

Dr. Fraser Sturt of the Department of Archaeology discussed the various potential applications of autonomous systems in archaeology. This included survey and site identification by high-resolution photographic and topographical mapping. He also discussed the benefits of multi-spectral imaging in seeing adobe (mud brick) structures, and how the results have shown that, rather than collapsing, the Nasca civilisation had moved to the coast. Next he discussed the potential of GPR (Ground Penetrating Radar) being carried on UAVs. Finally, he discussed the fact that there are approximately 3 million shipwrecks worldwide which need to be studied or made stable.

3DRobotics Dronekit

3DRobotics have announced the release of DroneKit which offers an Open Source Software Development Kit (SDK) and web Application Program Interface (API) for developing drone apps. It works on systems powered by the APM flight code such as the ArduPilot, APM and Pixhawk autopilot systems, all supplied by 3DRobotics.

It allows the creation of custom purpose built UAV (Unmanned Aerial Vehicle) control apps without having to redesign the control system software.

The apps can be developed on three different platforms:

  1. Mobile apps with DroneKit Android.
  2. Web-based apps with DroneKit Cloud.
  3. Computer apps with DroneKit Python.

It enables the user to:

  • Control the flight path with waypoints.
  • Control a spline flight path with fine control over the vehicle velocity and position.
  • Set the UAV to follow a GPS target (Follow Me).
  • Control the camera and gimbal by setting Regions Of Interest (ROI) points which the camera locks on to.
  • Access full telemetry from the UAV using 3DR Radio, Bluetooth, Wi-Fi, or over the internet.
  • Play back and analyse the log of any mission.

The advantages of DroneKit are:

  • It is truly open, unlike the similar DJI SDK, with no tiered levels of access.
  • Once an app has been created the interface is always the same across different computing platforms.
  • It can be used with planes, copters and rovers.
  • It works on laptop computers as well as mobile devices, and vehicle data can even be accessed via the web.

DroneKit already powers a number of flight control programs:

  • The Tower (formerly Droidplanner) flight planning mobile app for Android was built on DroneKit for Android.

Tower (DroidPlanner 3)

  • Droneshare is a global social network for drone pilots that allows them to view and share missions; it is built on DroneKit web services.
  • Google’s Project Tango indoor navigation is built on the Pixhawk and APM autopilot systems and the Tower flight planning app.
  • The IMSI/Design TurboSite aerial reporting app for construction allows the setting of flight waypoints at GPS locations and the capturing of photographs, videos, dictations, text notes and “punch list” action items. Photographs can be annotated while the UAV is still in flight using markup and measurement tools.