Tag Archives: camera

News – UK Drone Show 2016

New Drones

DJI

The DJI stand held two of their new models.

The Mavic Pro is a portable system with collapsible arms allowing it to fit into a small backpack. Its FlightAutonomy technology allows obstacle avoidance and precision hovering, while ActiveTrack allows the drone to follow the subject with a number of different shooting modes. It comes with a 3-axis gimbal and 4K camera.

It is available for pre-order for £1,099.

The Phantom 4 Pro is an upgrade of the Phantom 4. It improves on a number of areas of the previous drone:

  • An improved camera with a 1-inch 20-megapixel sensor, up from the 1/2.3-inch 12.3-megapixel sensor on the Phantom 4.
  • Stereo vision sensors on the rear of the drone, in addition to the front-facing ones on the original.
  • New infrared sensors on the left and right of the drone.
DJI Phantom 4 Pro

The Phantom 4 Pro costs £1,589, while a version where the remote controller has an integrated screen costs £1,819.

Yuneec

A new combined thermal and RGB camera for the Typhoon H drone was available.

The camera costs £1,799.

New Technologies

A number of technologies under development were on view in the Innovation Zone.

Tetradrone

An interesting concept combining drone and submersible: the body is watertight, and changing the type of propeller allows the craft to either fly or travel underwater.

www.tetradrones.co.uk

http://jgarnham94.wixsite.com/tetra

https://www.facebook.com/TetraDrones/?hc_ref=SEARCH

Tetra Drone

Available for funding soon on Kickstarter.

Seadrone

An innovative modular underwater drone by van Dijk FEM engineering B.V. with a camera and a crab module for removing material from the sea bed.

http://seadrone.nl/

https://www.facebook.com/SeaDroneNL

SeaDrone

Droneball

Another innovative technology was the DroneBall, a drone fully enclosed in a stabilizing cage that protects both the drone and its surroundings from damage; it can either fly or roll along the ground.

DroneBall

The DroneBall is being launched on Indiegogo on the 8th of December.

https://www.indiegogo.com/projects/droneball-the-bouncing-crash-resistant-drone-drones/coming_soon

Using existing mapping data to control UAV mapping flights – Part 1 – Preliminary Ideas and Experimentation

An intrinsic problem with photogrammetry is the requirement to keep the camera facing the subject matter. A much higher quality and more accurate 3D model is produced this way than by taking photographs at an oblique angle. This is especially true of buildings with flat facades (this has already been discussed in another blog).

Work has been done using computer vision to automate control of the camera position so that it follows targets selected by the pilot. Although this has potential for some recording methods, such as site tours (as discussed in another blog), it does not aid in the recording of complex topography or architecture, although there is potential for recording architectural elements using computer vision technologies (this will be discussed in a later blog).

Other work is being done in using a low detail 3D model of a building to aid in the control of a UAV flying around it, but these are more aimed at collision avoidance than quality recording.

In the future I plan to look at the potential of pre-scanning a building with a LiDAR scanner mounted on a drone before recording it with a UAV.

Potential solution

The camera gimbal of a UAV can be controlled both remotely and from the autopilot of the UAV, which could be used to keep the camera always facing the subject matter; without pertinent information, however, this would have to be done manually. With wireless camera technology it is possible to remotely view what the camera is recording and so control the movement of the gimbal when required, but this would require a second person to control the camera while the UAV is being flown, and would be difficult to implement effectively and costly in a commercial environment.

But it would seem possible to use existing 3D data of an area to control the flight of a UAV, controlling both the altitude and the angle of the camera gimbal. I have already discussed the use of DroneKit Python to create a UAV mapping flight; this can also be used to control the angle of the camera gimbal.

Existing Data

There are a number of existing sources of data that can be used to aid in creating a mapping flight.

Within the UK, LiDAR data is freely available at different spatial resolutions; much of the country is covered at 1 m resolution, while other areas are available down to 0.25 m.

Through processing in GIS (Geographic Information System) software, this resource provides all of the information required to create a flight path over the area under study and to control the angle of the camera gimbal, so that the area is recorded to a higher quality than before.

A digital elevation model (DEM) created using photogrammetry from existing overlapping aerial photographs can also be employed once it is georeferenced to its correct location. This resource may provide a higher spatial resolution than the LiDAR data, and so be a better basis for the flight path, but the landscape and structures may have changed since the photographs were taken, causing problems (this can, of course, be a problem with the LiDAR data as well).

Co-ordinate system problems

One complication with using LiDAR data to control the UAV is that it is in a different co-ordinate system from the GPS of the UAV (OSGB rather than WGS84). This can be solved by translating one set of data into the co-ordinate system of the other. As the number of points in the mission path will be far fewer than in the LiDAR data, it makes sense to convert the GPS data to OSGB, but this also requires converting it back after the flight path has been created, adding a certain amount of inaccuracy, as a conversion is never 100% accurate.
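As a sketch of this translation step, the EPSG codes for the two systems (4326 for WGS84, 27700 for OSGB / British National Grid) can be used with a library such as pyproj (an assumed dependency, not something specified in the workflow above):

```python
# Sketch of the WGS84 <-> OSGB translation using pyproj (an assumed
# dependency): EPSG:4326 is WGS84 as used by the UAV's GPS, EPSG:27700
# is OSGB / British National Grid as used by the LiDAR data.
from pyproj import Transformer

to_osgb = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)
to_wgs84 = Transformer.from_crs("EPSG:27700", "EPSG:4326", always_xy=True)

def wgs84_to_osgb(lon, lat):
    """GPS waypoint to OSGB easting/northing in metres."""
    return to_osgb.transform(lon, lat)

def osgb_to_wgs84(easting, northing):
    """Convert back once the flight path has been created."""
    return to_wgs84.transform(easting, northing)

# Round-tripping a point in southern England shows the conversion
# error is small but, as noted above, never exactly zero.
e, n = wgs84_to_osgb(-1.854, 51.428)
lon, lat = osgb_to_wgs84(e, n)
```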

Required Data

Three different pieces of data need to be derived from the LiDAR data which are required for the UAV mapping flight:

  • Altitude.
  • Slope.
  • Aspect.

The Altitude is contained within each point of the LiDAR data and is used when displaying the data in GIS software.

The Slope of the topography/buildings is measured in degrees up to 90, with 0 degrees being flat and 90 degrees being a vertical face.

The Aspect is the direction in which a slope faces, measured from 1 to 360 degrees.
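For illustration, the slope and aspect described above can be derived directly from a gridded DEM; a minimal numpy sketch (assuming a 1 m grid with northing increasing along the first axis, and not the exact algorithm of any particular GIS package) might be:

```python
import numpy as np

def slope_aspect(dem, cell_size=1.0):
    """Derive slope (0-90 degrees) and aspect (0-360 degrees, clockwise
    from north) from a DEM grid, in the manner of GIS Slope/Aspect
    tools. Assumes northing increases along the first axis."""
    dzdn, dzde = np.gradient(dem, cell_size)   # rate of change north-south, east-west
    slope = np.degrees(np.arctan(np.hypot(dzde, dzdn)))
    # Aspect is the downslope direction, i.e. opposite the gradient.
    aspect = np.degrees(np.arctan2(-dzde, -dzdn)) % 360.0
    return slope, aspect

# A plane rising 1 m per metre towards the east slopes at 45 degrees
# and faces west (aspect 270).
dem = np.tile(np.arange(5.0), (5, 1))
slope, aspect = slope_aspect(dem)
```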

 

Slope angles

Although it would be possible to create software that extracts the data from the LiDAR file while creating a flight path, this is not currently an option. The flight path is currently created in a piece of software such as the open-source 'Mission Planner' system, in which an area is chosen together with other variables and an optimal flight path is created. This flight path file can then be saved; it contains the X and Y co-ordinates of each point of the mission.
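Mission Planner saves such files in the plain-text 'QGC WPL 110' format, so reading the co-ordinates back out is straightforward; a minimal sketch (field layout as in the MAVLink waypoint file format):

```python
# Sketch: reading waypoint co-ordinates from a saved Mission Planner
# file. Missions are stored in the tab-separated "QGC WPL 110" format:
# index, current, frame, command, param1-4, latitude, longitude,
# altitude, autocontinue.
def read_waypoints(text):
    lines = text.strip().splitlines()
    assert lines[0].startswith("QGC WPL"), "not a Mission Planner file"
    waypoints = []
    for line in lines[1:]:
        fields = line.split("\t")
        lat, lon, alt = (float(f) for f in fields[8:11])
        waypoints.append((lat, lon, alt))
    return waypoints

# A two-waypoint example mission (co-ordinates are illustrative).
sample = """QGC WPL 110
0\t1\t0\t16\t0\t0\t0\t0\t51.4280\t-1.8540\t50.0\t1
1\t0\t3\t16\t0\t0\t0\t0\t51.4285\t-1.8540\t50.0\t1"""
wps = read_waypoints(sample)
```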

UAV Control

At its simplest the flight path can be created with the altitude and slope derived from the LiDAR being used to control both the UAV altitude and camera gimbal angle. This would work well for sloping topography but would be more complicated for areas with sharp breaks in slope (such as buildings).

Altitude Control

The altitude will need to be carefully controlled to make sure that the quality of the imagery is consistent across the whole area under study. At its simplest this is easy to do using the altitude data within the LiDAR data, together with obstacle avoidance sensors to aid with safety.

The problem arises when needing to record something near or completely vertical. Rather than requiring a set altitude the UAV needs to maintain a set distance horizontally. This may be possible by creating a buffer in the data around steeply sloping areas.

Problem with vertical offset

Camera Gimbal Control

Most low cost UAV systems come with a 2-axis gimbal; this means that the camera is stabilized so that it always stays in a horizontal plane, but also that its tilt can be controlled downwards.

Gimbal angle

The angle of the gimbal runs from 0 degrees for a forward-pointing position to 90 degrees for a downward-facing position. This is how it is controlled within DroneKit.

As seen earlier, slope is likewise calculated between 0 and 90 degrees.

There are two intrinsic problems with this method:

  1. The slope only goes between 0 and 90 degrees, so there is no aspect data within it. If the drone camera is to be controlled to record the building as it flies over, it needs to know which way the building is pointing, as the 45 degrees on the left is not the same as the 45 degrees on the right. This could be solved by combining the information from the slope and aspect to give more detailed resulting data.
  2. Most standard gimbals are designed to only point forwards and downwards. This means that the UAV has to turn around to record the back side of the building, or it needs to fly the path in reverse. The other solution is to use a UAV with a camera that can pan through 360 degrees.
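The combination suggested in point 1 can be sketched as a small helper; the pitch convention (0 forward, 90 straight down) follows the gimbal description above, and the function name is hypothetical:

```python
# Hypothetical helper combining slope and aspect into a gimbal pitch
# and UAV heading so the camera faces the surface square-on. Pitch
# follows the convention above: 0 = forward, 90 = straight down.
def camera_pose(slope_deg, aspect_deg):
    gimbal_pitch = 90.0 - slope_deg              # flat ground -> look straight down
    uav_heading = (aspect_deg + 180.0) % 360.0   # turn to face into the slope
    return gimbal_pitch, uav_heading

# A 45 degree slope facing west (aspect 270) needs the camera pitched
# 45 degrees down with the UAV heading east (90).
pose = camera_pose(45.0, 270.0)
```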

GIS Processing

A certain amount of processing is required within GIS software to get the required data from the LiDAR data and combine it with the required mapping flight path. For this ArcGIS has been used both due to its availability at university and my own familiarity with it.

Lidar

Considering the LiDAR data covers a specific square, it makes sense to use raster data rather than the points and lines of vector data, as this retains the accuracy of the data. The LiDAR data can simply be loaded into the GIS software as a raster.

Within GIS software the Aspect and Slope can be calculated and a raster created showing the results.

This can be done using the Spatial Analyst or 3D Analyst Toolboxes to provide Slope and Aspect rasters.

The data for these can be incorporated into Attribute Tables which can be exported into text files. It is possible to combine all of the data into one attribute table containing the Altitude, Slope and Aspect.

Although it is possible to export this whole raster file including all of the data, it is not currently possible to automatically derive the data in software using the flight path, so the flight path has to be loaded into the GIS software.

Flight Path

The flight path file we created in the Mission Planner software needs to be loaded into a feature class in the GIS software. This can be done by loading the point data for the flight path into the software; these are the beginning and end points of each of the back and forth passes across the area to be recorded.

We next need to recreate the flight path using ‘point to line’.

Even though we have recreated the lines, deriving enough data from them is not possible as the flight path is designed to fly back and forth at a set altitude. For this reason we need to create a number of extra points. This can be done using ‘Construct points’ where points can be created at set intervals along a line. This can be linked to the level of data that is being used, so for this LiDAR data the points can be set at intervals of 1m.
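The 'Construct points' step can be mirrored in plain Python for illustration; this hypothetical helper places points at a set interval along one leg of the flight path (planar OSGB co-ordinates assumed):

```python
import math

def densify(p1, p2, interval=1.0):
    """Create points at a set interval along the straight line from p1
    to p2 (planar co-ordinates), mirroring ArcGIS 'Construct Points'.
    The end point is always included."""
    (x1, y1), (x2, y2) = p1, p2
    length = math.hypot(x2 - x1, y2 - y1)
    n = int(length // interval)
    points = [(x1 + (x2 - x1) * i * interval / length,
               y1 + (y2 - y1) * i * interval / length) for i in range(n + 1)]
    if points[-1] != (x2, y2):
        points.append((x2, y2))
    return points

# A 5 m east-west leg at 1 m spacing yields six points.
pts = densify((400000.0, 170000.0), (400005.0, 170000.0))
```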

Once this has been done, 'Extract multi-values to points' can be run on the 3 sets of data to create a table containing all of the required data for each point on the flight path we have created.

UAV Mission Creation

Now that we have all of the required input data for the UAV mapping flight we need to create the mission within Dronekit Python.

For the first level of experimentation we can just load the point data file into Python and then create a number of points for the UAV to fly to, giving the X and Y co-ordinates and the required altitude. At the same time we can also program in the angle for the camera gimbal. It may be best to have the UAV hover at each position for a second or two so that we know how the recording is going.

As already mentioned, if we are only using a 2-axis gimbal we will have to have the UAV turn through 180 degrees to record the back sides of buildings and slopes facing away from the camera. We should be able to do this by altering the UAV yaw. We will need to have the Python script read the aspect angle and change how it creates the flight path depending on the aspect of the slope/building.
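A sketch of this first level of experimentation, using standard MAVLink command numbers (16 NAV_WAYPOINT, 115 CONDITION_YAW, 205 DO_MOUNT_CONTROL); in a real mission each item would be uploaded through DroneKit's vehicle.commands, and the flying height and mount-pitch sign convention here are assumptions:

```python
# Sketch: turning the GIS point table into mission items. Command IDs
# are standard MAVLink: 16 = NAV_WAYPOINT, 115 = CONDITION_YAW,
# 205 = DO_MOUNT_CONTROL. Here the items are simply collected in a
# list; the flying height above ground and the mount-pitch sign are
# illustrative assumptions.
def build_mission(points, agl=30.0):
    """points: (lat, lon, ground_alt, slope, aspect) tuples for each
    flight-path point; returns (command_id, params...) mission items."""
    items = []
    for lat, lon, ground_alt, slope, aspect in points:
        heading = (aspect + 180.0) % 360.0   # yaw to face into the slope
        pitch = 90.0 - slope                 # 90 = straight down, 0 = forward
        items.append((115, heading))                     # CONDITION_YAW
        items.append((205, -pitch))                      # DO_MOUNT_CONTROL pitch
        items.append((16, lat, lon, ground_alt + agl))   # NAV_WAYPOINT
    return items

# One point on a 45 degree slope facing west (aspect 270), ground at 120 m.
mission = build_mission([(51.428, -1.854, 120.0, 45.0, 270.0)])
```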

Future Directions

ArcGIS allows the use of Python to run tools in its toolbox, so it seems possible to create a Python script which would automatically create a file with all of the information required from input files of LiDAR data and a flight path.

As QGIS also allows the use of Python, it should be possible to create the required file within this open-source solution as well.

 

News – 3DR and Sony UMC-R10C

3DR have announced the integration of the new Sony UMC-R10C Lens Style Camera into their Solo UAV platform, including a custom gimbal for the camera. It will replace the current GoPro camera with one of DSLR quality, capable of taking 20 MP+ photographs.

The camera appears to be similar to the Sony ILCE-QX1 discussed in a previous blog – Mirror-less Cameras and UAVs. Although that technology had great potential for use in UAVs, as it does not have the body and weight of a normal camera, it had the serious limitation of not having a full manual mode. Hopefully this will be remedied in the new model.

Sony UMC-R10C

The UMC-R10C is going to be released by 3DR as part of a complete mapping and processing solution called SiteScan.

It is not currently known whether the camera and gimbal will be available separately.

The camera will be unveiled at the NAB (National Association of Broadcasters) Show in Las Vegas in April, while the package combined with SiteScan is expected to ship in June.

High Dynamic Range Photography/Photogrammetry – Part 1

High Dynamic Range (HDR) Photography

High Dynamic Range is a popular photographic technique used to produce more realistic or artistic images. It attempts to replicate what the human eye can see: the dynamic range of a camera is limited, and it is unable to record the lightest and darkest elements in a single photograph. This can be remedied by taking a number of photographs with varying shutter speed/aperture combinations and combining them using specialist software to produce a photograph with a greater dynamic range than can be recorded in a single exposure.

Many new digital cameras can produce HDR photographs using Auto Exposure Bracketing (AEB), special HDR settings (which may process the images for you, resulting in an HDR photo on the camera but losing the originals) or manual setup.

Limitations of archaeological and cultural heritage photography

An intrinsic problem with taking photographs in archaeological and cultural heritage contexts is lighting; both too much and too little lighting are factors that hamper recording images that include as much detail as possible.

In the case of archaeological excavations, attempts have been made to limit the problems in section photography by either reflecting more light in using white card or casting a shadow with a tarpaulin. Both of these techniques work but require time and manpower.

The same problem is encountered in building recording where in an outside environment strong lighting can cause both bleached out areas and heavy shadows.

Netley Abbey, Hampshire – East Window. Photograph demonstrating the problem of strong oblique lighting causing both too much and too little light in the same image.

While lighting through windows can cause similar problems on the inside of buildings.

Netley Abbey, Hampshire – Sacristy/Library. Photograph demonstrating the effect of excessive light coming through a window causing the dual problems of both too much light near to the window and too little light in other areas.

In order to reveal elements in dark shadow a high-exposure camera setting is required, while bright bleached-out areas are only revealed with low exposures. These elements, together with well-lit areas, cannot be revealed in one single photograph; this is where High Dynamic Range photography comes in.

HDR Photography in Archaeology

HDR photography was introduced to archaeology by David Wheatley in 2010; he provided examples of its use in improving the standard recording methods of excavations and cave sites, and even in processing archived analogue archaeological photographs. Sadly its use was not embraced by the community, probably due to technological limitations at the time and the inherent conservatism of the industry and of museum archives, which were yet to embrace digital photographic technology. Technology and the industry have now caught up with his ideas, with digital cameras being present on most if not all excavations, while other scholars have begun to bring the technology to 3D recording using photogrammetry.

HDR Field Archaeology Photography

Photography is one of the primary recording techniques within field archaeology and has been since the introduction of the discipline, but conservatism within field archaeology has meant that it was only fairly recently that digital photography became the primary recording technology.

Digital cameras have a number of benefits within archaeology:

  • The ability to take numerous photographs on one memory card.
  • No need to pay to process films.
  • No need to digitize the photographs.
  • Where once excavators may have been told to limit the number of photographs taken on an excavation to keep the processing costs down, digital media allows almost limitless photographs to be taken.
  • Photographs can be as easy as point and click with the camera controlling all of the settings.

But they also have drawbacks:

  • Where once archaeologists knew how to use an analogue camera to take bracketed shots, the automatic setting on digital cameras is commonly the only one used, as it produces results at a required level of quality; this means that the archaeologist may not know how to properly operate the camera.
  • Although almost limitless photographs can be taken, limits should be included as the archive may still need to be sorted through.
  • The requirements for digital storage can be complicated and costly.

Although not ideal, a number of modern cameras now come with an HDR setting, which in many cases can be adjusted to the required level of bracketing; however, only the merged photograph is saved, losing the possibility of later re-processing the photographs with different settings.

Field Archaeology Archive Photographs

One benefit of traditional bracketing of analogue photographs for archaeological excavations is that they provide an ideal resource for conducting HDR processing. These archives have multiple photographs at different exposure levels which can be digitized and processed to provide better results than the originals and be re-entered into the archive with the digitized originals.

HDR using archive slides from excavations at the Cove, Avebury (Wheatley 2010)

HDR Building Photography

Building recording is an area that can be significantly enhanced by the use of HDR. It is difficult to provide adequate lighting in many cases, meaning that some areas are brightly illuminated while others are dimly lit, losing information in both cases.

Processing bracketed images into an HDR image provides a greatly enhanced image.

HDR Photogrammetry

Recent developments in camera technology, HDR software and photogrammetry software have allowed the introduction of HDR Photogrammetry. Thanks to the additional information present in the photographs models of higher detail and accuracy can be created in non-optimal lighting conditions.

As well as the ability to use tone mapped images produced from HDR images the Agisoft PhotoScan Photogrammetry software can also process .exr file format High Dynamic Range images into 3D models.

HDR Object Photogrammetry

One area under study is its use in photographing objects. The benefits are determined by the type of material used, some are greatly enhanced by HDR while others are little altered.

Image matching result from images with different HDR processing: a) No HDR; b) tone mapped images from HDR processing (Guidi et al 2014)

HDR Building Photogrammetry

We have already seen the benefits of HDR Photography in building recording and this can continue with photogrammetry.

Photogrammetry point cloud of the east window of Netley Abbey, Hampshire, showing how the raking sunlight on the left-hand side of the window has bleached out the photographs and lost detail

Both the increased level of quality of the photograph and the higher amount of detail present in the 3D model can easily be seen in the HDR photogrammetry model.

Software Solutions

A number of software solutions are available for the processing of HDR photographs, ranging from high-end photographic software such as Adobe Photoshop and Lightroom, through HDR-specific packages, to open-source solutions. HDRsoft's Photomatix comes in a number of versions, which include plugins for different software packages such as Adobe Lightroom, Photoshop Elements, Photoshop and Apple Aperture. Low-cost solutions such as Fusion HDR and free open-source solutions such as Luminance HDR are also available.

In order to be viewable on low-contrast monitors and paper, the images need to go through a process called tone mapping, which replicates the appearance of the high dynamic range photograph on these media.

Downloaded images can be batch processed in software such as Photomatix, setting how many images need to be merged together, with a number of preset or custom settings allowing the images to be processed exactly as required. These pieces of software can also compensate for slight movement between the recording of the multiple images. The resulting images can then be saved in either the .hdr (Radiance) or .exr (OpenEXR) file format, both of which record the HDR information.
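The merging step can be illustrated with a toy exposure-fusion sketch (a simple well-exposedness weighting in the spirit of Mertens-style fusion, not the actual algorithm used by Photomatix or any of the packages above):

```python
import numpy as np

def fuse_exposures(images):
    """Toy exposure fusion: weight each pixel of each bracketed image
    (values 0-1) by how close it is to mid-grey, so under- and
    over-exposed pixels contribute least, then take the weighted mean."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-8
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Three brackets of one pixel: underexposed, well exposed, blown out.
# The fused value stays close to the well-exposed bracket.
fused = fuse_exposures([np.array([0.02]), np.array([0.45]), np.array([0.98])])
```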

Batch processing of images within Photomatix

Benefits
HDR photography can record more information in both photographs and photogrammetry models. By using open-source HDR software it can be free. Many cameras allow multiple bracketed photographs to be taken automatically, adding only a few seconds to the recording process.

It is also possible in some of the software to open a folder full of images and have the software batch process it without any user intervention once the preset settings have been loaded.

Drawbacks
Among the drawbacks is the fact that, as the camera is taking multiple photographs, it is difficult to stabilize the camera by hand; otherwise there will be movement between the photographs. Although movement between bracketed shots can be corrected in software, the automatic HDR setting on the camera will probably result in a blurry image.

UAV HDR Photogrammetry

UAV HDR Photogrammetry is an area I will be studying in the future. It has great potential for recording but will require a careful balance of UAV hovering, a steady gimbal, fast shutter speed and an adequate depth of field. It will be discussed in a future blog.

Sources
Guidi, G., S. Gonizzi, and L. L. Micoli. “Image pre-processing for optimizing automated photogrammetry performances.” ISPRS Annals of The Photogrammetry, Remote Sensing and Spatial Information Sciences 2.5 (2014): 145-152.

Kontogianni, G., and A. Georgopoulos. “Investigating the effect of HDR images for the 3D documentation of cultural heritage.” World Cultural Heritage Conference 2014 – Euromed 2014 – International Conference on Cultural Heritage Documentation, Preservation and Protection. (2014)

Ntregka, A., A. Georgopoulos, and M. Santana Quintero. “Photogrammetric Exploitation of HDR Images for Cultural Heritage Documentation.” ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 5 (2013): W1.

Wheatley, D. “High dynamic range imaging for archaeological recording.” Journal of Archaeological Method and Theory 18, no. 3 (2011): 256-271.

Brushless Gimbals – Part 1 – Introduction

Camera Gimbals
Camera gimbals are used for many different purposes in many different industries, including stabilizing cameras for TV/cinema. Their development can be traced from the introduction of the Steadicam in the 1970s. This allowed the stabilized movement of a camera, revolutionizing filming by removing the need for a wheeled camera dolly running on expensive/time-consuming tracks or levelled boards. Although the system is not motorized, it introduced the principle of a stabilized camera.

Recent technological developments have allowed the construction of lightweight/low cost motorized gimbal systems which can be carried by UAVs.

Gimbals and Archaeological/Heritage recording
With the development of the UAV came the development of lightweight camera gimbals enabling it to carry stabilized cameras.

The gimbal has become an important element in UAV photographic/video recording, from taking vertical photographs for mapping purposes to cinematic style flypasts/throughs of buildings.

Mapping can be undertaken with cameras attached to the UAV with a static mount, but this removes the ability to use the camera for other recording methods without landing the UAV and changing the mount.

3D Printed UAV Downward Facing Mapping Mount

Although this series of blogs will concentrate on UAV camera gimbals, much of what is discussed is transferable to other recording platforms/techniques.

There are also other recording systems that use gimbals which could aid in recording; including handheld GoPro systems such as the EasyGimbal Kickstarter Project.

EasyGimbal handheld GoPro Gimbal

Some of these systems, such as the FY G4 handheld gimbal, can be attached to extension poles, allowing low altitude aerial photography/video to be undertaken using a handheld remote control to rotate the gimbal.

FY Reach extension pole with FY G4 3-axis handheld gimbal

Gimbals can also provide a stabilized camera platform on rovers, such as the Flyonix Buggy Cam, allowing the recording of ceilings and tunnels.

Flyonix Buggy Cam

UAV camera gimbals

UAV camera gimbals are designed to:

1. Remove camera vibration using the anti-vibration rubber balls within the gimbal frame.
2. Stabilize the camera as the UAV moves, keeping it level and pointing in the required direction.
3. Allow the movement of the camera to point at the subject matter while flying the UAV, sometimes in completely different directions.

Types of gimbal

Gimbals come in two different types:

1. The two-axis gimbal.
2. The three-axis gimbal.

Two-axis gimbals are designed for UAVs where there is no requirement to pan the camera from left to right, such as those with fixed landing gear, which precludes the panning of cameras, whether physically or visually.

Zenmuse H3-2D 2-Axis Gimbal on a DJI Phantom 2

There are many different gimbals for many different cameras, from GoPros through mirrorless cameras to digital SLR cameras.

Some of these can be purchased already constructed and calibrated out of the box, such as the Zenmuse gimbals supplied by DJI Innovations. Others come ready-installed on a UAV, while the one I will be discussing is a DIY kit which needs to be built and set up.

The price difference between buying a ready made solution and building your own from a kit to carry the same camera can be quite significant:

Model | Camera | Price
DYS BLG3SN 3-Axis Brushless Gimbal with BaseCam SimpleBGC 32-bit controller | Sony NEX size camera | £299.94
Zenmuse Z15 | Sony NEX 5 and 7 | £1,915.00

Components
The brushless gimbal is made up of a number of different components:

  • Gimbal frame
  • Gimbal controller
  • IMU (Inertial Measurement Units)
  • Brushless Motors
  • Battery
  • Camera
Gimbal Frame

Gimbal frames are designed for different types of cameras. The gimbal frame I am using for this project is the DYS BLG3SN 3-Axis Brushless Gimbal Frame kit with three BGM4108-130 brushless motors, for the Sony NEX type of camera. I will be using a Sony α5000 mirrorless camera, which is almost identical to the NEX series cameras.

DYS 3 Axis Brushless Gimbal

Gimbal Controller

In order to control the gimbal a gimbal controller board is required; there are a number available on the market. The Zenmuse gimbals supplied by DJI Innovations are designed to connect directly to the DJI UAV, while other solutions require a separate board.

The gimbal controller board I am using is the BaseCam SimpleBGC 32-bit board, which is designed for 3-axis gimbals. The cheaper and simpler BaseCam SimpleBGC (formerly called AlexMos), although designed for 2-axis gimbals, can be upgraded to support 3-axis gimbals with the addition of an extension board. The 32-bit board is a lot easier to use as well as being more up-to-date, and so was chosen for this first gimbal construction experiment.

Basecam SimpleBGC 32 Bit Gimbal Controller with IMU attached

IMU (Inertial Measurement Unit)

Another important element is the IMU; in the case of this 3-axis gimbal two are required. One is connected to the main frame of the camera gimbal while the other is connected to the camera mount. These tell the gimbal controller which direction the gimbal/camera is pointing, and the controller can then drive the motors to point the camera in the required direction.

IMU attached to gimbal frame

Brushless Motors

The importance of brushless motors in the development of lightweight/high-powered UAV systems has already been discussed in another blog.

Those in gimbals are slightly different: rather than being designed to spin quickly, they are designed to hold the camera in position with enough torque to stop it moving, and to rotate to level the camera when required.

In the case of a 3-axis gimbal, one motor is required for each of the 3 axes.

Brushless gimbal motor

Although originally it was necessary to rewind motors designed for rotor blades with thinner wire to increase the resistance and torque, it is now possible to buy ready-made motors for the purpose. These come in different sizes depending on the size of the camera they are required to stabilize.

Calibration
In order to use the gimbal it needs to be calibrated. This is done using the open-source SimpleBGC software, which is installed either as a Windows program or an Android app; the gimbal is calibrated via the USB port on the gimbal controller board.

Detailed instructions on how to do this can be found in many places including YouTube videos.

In the case of a 3-axis gimbal two IMUs need to be calibrated, one for the camera and the other for the gimbal frame.

SimpleBGC Gimbal Calibration Software

A triple axis camera spirit level can be used to accurately calibrate the two IMUs.

Camera Triple Axis Spirit Level

A number of other settings can be altered in order that the gimbal works as required.

3 Axis Brushless Gimbal for Sony NEX size cameras

Once the gimbal controller has been calibrated, the camera will remain in place as the gimbal is moved around it. This works by calibrating the IMUs to a nominal position; the IMUs determine the actual position of the gimbal and the motors are driven to correct it, with less voltage being sent to the motors the closer the gimbal is to the nominal position.
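This behaviour, with the drive falling towards zero as the gimbal nears its nominal position, is essentially proportional control; a minimal sketch (the gain value is illustrative, and real controllers such as the SimpleBGC use full PID loops):

```python
def correction_step(actual_deg, nominal_deg, gain=0.5):
    """One controller update: the drive sent to a gimbal motor is
    proportional to the angular error, so it falls towards zero as the
    gimbal approaches its nominal position."""
    return gain * (nominal_deg - actual_deg)

# Simulate the gimbal being knocked 20 degrees off and settling back
# towards the calibrated nominal position over repeated updates.
angle = 20.0
for _ in range(20):
    angle += correction_step(angle, 0.0)
```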

Sources
http://www.simplebgc.com/eng/

http://www.simplebgc.org/

http://www.unmannedtechshop.co.uk/3-axis-brushless-gimbal-sony-nex-size-camera/

http://www.unmannedtech.co.uk/manualsguides/blg3sn-brushless-gimbal-assembly-guide

http://www.dronetrest.com/t/how-to-connect-and-setup-alexmos-3-axis-brushless-gimbal-controller/53

http://www.dronetrest.com/t/balancing-your-brushless-gimbal/55

https://en.wikipedia.org/wiki/Gimbal

Mirror-less Cameras and UAVs

UAV (Unmanned Aerial Vehicle) photography and photogrammetry have long been a balance between weight and the quality of the camera equipment carried.

Cameras

Low-cost cameras such as the GoPro can be carried on almost all UAVs because they are small and lightweight, but those benefits are also drawbacks: limited-size fish-eye lenses and small image sensors reduce the quality of the photographs they take, and the lack of control over many of the camera settings is a further limitation.

High-quality DSLR (Digital Single Lens Reflex) cameras have superior lenses and image sensors, together with extensive control of camera settings, meaning they take much better photographs. But they can only be carried by much higher-power, higher-cost octocopter and hexacopter systems.

One solution is the lightweight point-and-shoot/compact camera used in some mapping solutions, such as those provided by 3DRobotics (Canon PowerShot S100). Although these cameras provide better quality than the GoPro, and may be all that is required for mapping exercises, they still lack the optics and higher-megapixel sensors that matter much more when recording complicated structures and in photogrammetry work.

Changes in the camera industry, driven by competition from the phone industry, have accelerated development of a different solution: the MILC (Mirrorless Interchangeable-Lens Camera), also known as the DSLM (Digital Single Lens Mirrorless) camera. These cameras replace the mirror reflex optical viewfinder of a DSLR, and its associated weight, with an LCD screen or with an app on a mobile device that controls the camera. As a result they can carry high-quality interchangeable lenses without the weight associated with DSLR cameras. The system comes in two forms: the first resembles a standard digital SLR camera, while the second resembles just a lens, with all control provided by an app on a mobile device.

Camera Comparison

| Camera | Type | Megapixels | Weight | Cost |
| --- | --- | --- | --- | --- |
| Canon EOS 5D Mark III | Digital SLR | 22.3 | Approx 950g | £2,544 |
| Nikon D5300 | Digital SLR | 24.2 | Approx 840g | £549.99 |
| Sony A5000 | DSLM | 20.1 | Approx 388g | £250 |
| Sony ILCE-QX1 | Lens-Style Camera | 20.1 | Approx 332g | £250 |
| Canon PowerShot S100 | Compact Camera | 12.1 | Approx 198g | £195 |
| GoPro Hero3+ Black | Sports Camera | 12 | 74g (136g with housing) | £349.99 |
Canon EOS 5D Mark III

Nikon D5300

α5000 E-mount Camera

ILCE-QX1 Lens-Style Camera

3DRobotics UAV Mapping Solutions, discussed in another blog entry, carry the Canon PowerShot S100 digital compact camera.

Canon PowerShot S100

GoPro Hero3+ Black

UAVs

UAVs come in a number of different configurations, increasing in price with greater complexity and the ability to carry heavier loads.

UAV Comparison

| UAV | Type | Payload Capacity | Price (Without Gimbal) |
| --- | --- | --- | --- |
| 3D Robotics Iris+ | Quadcopter | 400g | £599 |
| 3D Robotics X8+ | Octocopter | 800g – 1kg with reduced flight time | £880 |
| Spreading Wings S900 | Hexacopter | 4.7 – 8.2kg | £1,291 – £1,540 |
| DJI Spreading Wings S1000+ | Octocopter | 11kg | £1,750 – £2,057 |
3D Robotics Iris+ Quadcopter

3D Robotics X8+ Octocopter

Gimbals

Gimbals are an important element in stabilizing cameras during photography and video recording, as well as providing a motorized means of moving the camera to a desired angle during flight. They can add significantly to both the weight and the price of a UAV solution, depending on the camera equipment being carried.

Gimbal Price Comparison

| Gimbal | Camera | Weight (Camera excluded) | Cost |
| --- | --- | --- | --- |
| DJI Zenmuse H4-3D | GoPro | 168g | £249 |
| DYS 3 axis brushless gimbal | Sony NEX size camera | 388g | £231.95 – £299.94 |
| DJI Zenmuse Z15-A7 | Sony α7s and α7r | 1.3kg | £1,915 |
| DJI Zenmuse Z15-5D III (HD) | Canon EOS 5D DSLR | 1.53kg | £2,831 |

Solutions

The 3DRobotics Iris+ Quadcopter has a payload capacity of 400g, which leaves a rather small 12g for a mount to attach a Sony A5000 DSLM, or 68g for a Sony QX1 Lens-Style Camera, although the system can be flown above its rated payload at the cost of reduced flight time. A downward-facing 3D-printed Sony A5000 mapping mount, weighing 36g, is available for both the Iris+ Quadcopter and the X8+ Octocopter.

Although the X8+ is an octocopter by definition, it avoids the intrinsic size, weight and cost problems of eight separate arms by mounting two rotors on each of its four arms, one pointing up and the other down. With a maximum payload of 1kg it can carry a Sony A5000 DSLM camera (388g) together with a gimbal such as the DYS 3 Axis Brushless Gimbal for Sony NEX size cameras (609g) to support and move it; the gimbal is designed for the NEX range of cameras, but these are almost identical to the A5000 in design. A lighter mount could also be used instead.
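The payload arithmetic behind the two pairings above can be checked with a small helper, using the approximate weights and capacities quoted in the comparison tables:

```python
# Sketch of the payload budgeting discussed above, using the approximate
# weights and capacities from the comparison tables.
def payload_margin(capacity_g, *item_weights_g):
    """Grams of capacity left after loading the listed items."""
    return capacity_g - sum(item_weights_g)

# Iris+ (400 g capacity) with a Sony A5000 body (388 g): little room for a mount.
print(payload_margin(400, 388))        # 12 g left
# X8+ (1000 g capacity) with the A5000 plus the DYS NEX-size gimbal (609 g).
print(payload_margin(1000, 388, 609))  # 3 g left - right at the limit
```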

3 Axis Brushless Gimbal for Sony NEX size cameras

Conclusions

The mirror-less camera would seem to solve the problem of how to carry a high-specification camera, capable of capturing high-quality images, on a fairly low-cost UAV.

Sources

http://en.wikipedia.org/wiki/Mirrorless_interchangeable-lens_camera

http://www.dummies.com/how-to/content/gopro-cameras-understand-the-cameras-limitations.html

http://www.japantimes.co.jp/news/2013/12/30/business/mirrorless-cameras-offer-glimmer-of-hope-to-makers/

Quadcopter vs Hexacopter vs Octocopter: The Pros and Cons

Autonomous Systems Launch Event – University of Southampton

On the 20th of March I was present at the Launch Event for the Autonomous Systems USRG (University Strategic Research Group) at the University of Southampton.

It included a number of 3-minute presentations, some of which were very pertinent to the autonomous recording of archaeology and cultural heritage.

Control and Implementation of Autonomous Quadcopter in GPS-denied Environments – Chang Liu

Chang Liu is a PhD student in the Department of Engineering.

He has been using optical-flow technology and ultrasonic sensors to control the velocity of UAVs with his own autopilot system, in environments where GPS cannot be used.

He is currently perfecting a system using a monocular uEye camera and a quad-core Linux ODROID computer running ROS (Robot Operating System), with SLAM (Simultaneous Localization And Mapping) algorithms enabling the single camera to identify natural features and act as a more advanced position-hold technology.


Chang Liu’s Autonomous Quadcopter


Chang Liu’s Analysis Software


Autonomous UAVs for Search and Rescue and Disaster Response

Dr. Luke Teacy of the Department of Electronics and Computer Science (ECS) discussed the use of autonomous UAVs in search and rescue and disaster response, coordinating between multiple platforms. Their low cost and ease of deployment make them ideal for the purpose. A camera-equipped UAV can search for someone in the wilderness using computer vision to spot the person. He is using observation modelling to see how the view of a person is affected by distance, and how to maximise the information needed to find them. He also discussed how to control UAVs and allocate them to tasks using the Monte Carlo tree search algorithm, as well as path planning.
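The observation-modelling idea can be sketched very simply: the probability that computer vision spots a person falls off with distance from the camera. The exponential form and the 50 m scale below are illustrative assumptions, not Dr Teacy's actual model.

```python
import math

# Sketch of a distance-dependent observation model: detection becomes
# less likely the further the person is from the camera. The exponential
# shape and scale are illustrative assumptions only.
def detection_probability(distance_m, scale_m=50.0):
    return math.exp(-distance_m / scale_m)

print(detection_probability(0))               # 1.0 - directly below the UAV
print(round(detection_probability(150), 2))   # about 0.05 - unlikely at 150 m
```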

Human Agent Collaboration for Multi-UAV Coordination

Dr. Sarvapali Ramchurn of the Department of Electronics and Computer Science (ECS) discussed the MOSAIC (Multiagent Collectives for Sensing Autonomy Intelligence and Control) project. His work involves a human agent allocating tasks to UAVs, which may have different capabilities. Once a task is set, the nearest drone moves to complete it; teaming the UAVs up to accomplish tasks maximises efficiency. If a UAV fails, a new one takes its place, and when new tasks are allocated a UAV is reassigned to them. He discussed using the Max-Sum algorithm to coordinate tasks between the UAVs autonomously.
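The "nearest drone takes the task" step can be sketched as a greedy assignment; this is a simplified stand-in for the Max-Sum coordination described above, with made-up positions:

```python
import math

# Greedy sketch of nearest-drone task allocation (a simplification of
# the Max-Sum coordination described above). Positions are illustrative.
def assign_nearest(task_xy, drone_positions):
    """Return the index of the drone closest to the task location."""
    return min(range(len(drone_positions)),
               key=lambda i: math.dist(task_xy, drone_positions[i]))

drones = [(0.0, 0.0), (10.0, 0.0), (5.0, 5.0)]
print(assign_nearest((9.0, 1.0), drones))  # drone 1 is nearest the task
```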

An intelligent, heuristic path planner for multiple agent unmanned air systems

Chris Crispin is a PhD student in Complex Systems Simulation and part of the ASTRA environmental monitoring project, in which a group of unmanned vehicles coordinate with each other in mapping. Feasible, optimal flight paths are designated and searched along. Once the UAVs are flying, a central computer assigns each area a level of uncertainty, which determines whether a UAV is sent there: the higher the uncertainty, the more likely a UAV will be dispatched to map it.
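The dispatch rule described above can be sketched as picking the map cell with the highest uncertainty score; the cells and scores below are illustrative, not ASTRA's actual representation:

```python
# Sketch of uncertainty-driven dispatch: the central computer keeps an
# uncertainty score per map cell and sends the next free UAV to the most
# uncertain one. Cell names and scores are illustrative.
def next_survey_target(uncertainty_by_cell):
    """Pick the cell whose map estimate is least certain."""
    return max(uncertainty_by_cell, key=uncertainty_by_cell.get)

cells = {"A1": 0.2, "A2": 0.9, "B1": 0.55}
print(next_survey_target(cells))  # "A2" has the highest uncertainty
```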

The UAVs use an ODROID-C1 quad-core Linux computer with a PX4 autopilot, while the control computer is an ODROID-XU3. The system uses the JSBSim open-source flight dynamics model (FDM).

https://sotonastra.wordpress.com/

Archaeology and autonomous systems

Dr. Fraser Sturt of the Department of Archaeology discussed the various potential applications of autonomous systems in archaeology, including survey and site identification through high-resolution photographic and topographic mapping. He also discussed the benefits of multispectral imaging in seeing adobe (mud-brick) structures, and how the results have shown that rather than collapsing, the Nasca civilisation moved to the coast. Next he discussed the potential of carrying GPR (Ground Penetrating Radar) on UAVs, and finally the fact that there are approximately 3 million shipwrecks worldwide which need to be studied and made stable.