The recording of buildings is an important area of Cultural Heritage, whether for condition surveys or to record something that is about to be destroyed.
Traditional methods rely upon survey equipment such as Total Stations to take a number of points on the façade, but this results in only points and lines with no great surface detail.
Other, more detailed survey techniques such as laser scanning and photogrammetry have also been employed, but laser scanning is expensive and both techniques are generally ground based, missing detail of the façade that is not visible from that position. Scaffolding or a cherry picker can be used to record the whole of the building, but again this adds to the cost of the recording.
Photogrammetry is a low-cost method of producing high-quality results, but it relies upon having the camera parallel to the building: capturing photographs from an angle introduces inaccuracies into the recording, and leaves more detail at the bottom of the resulting 3D model than at the top.
The UAV would seem to provide an ideal platform to carry a camera parallel to the building, recording photographs with the required photogrammetric overlap. With its autopilot it should also be possible to automate the recording process, allowing the mapping of the façade in the same way that a UAV can map the ground.
There are of course a number of problems that need to be overcome.
Building Façade Recording
Manual
Building façade recording can be done manually with a UAV, but the larger and more complicated the building façade, the more difficult it is to do this accurately, as the pilot needs to control the UAV accurately in three dimensions as well as controlling its speed.
Although the results of an experimental UAV mission are acceptable, the difficulty of maintaining a position manually can be seen in the image below.
Bedham Mission Church – UAV Photogrammetry Model
Bedham Mission Church – UAV Camera Recording Positions
Automatic
In order to automate the process you need to determine what parameters are required to record a building façade using photogrammetry.
These can be seen below.
Building facade recording parameters
First experimentation was done by taking the co-ordinates of the two ends of an example wall from Google Earth (the south-facing wall of the lay brothers’ quarters at Waverley Abbey in Surrey was used). These co-ordinates can then be used to determine the bearing that the wall lies upon and its width. Using the camera parameters and the required level of detail, the distance from the wall for the flight can be calculated using trigonometry. Trigonometry is used again to calculate the offset positions for the left and right extents of the flight.
Trigonometry used for calculations
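As a sketch of these calculations in Python (the flat-earth approximation and the field-of-view figure are illustrative assumptions, not values from the project):

```python
import math

def wall_geometry(lat1, lon1, lat2, lon2):
    """Bearing (degrees) and length (metres) of a wall from the
    co-ordinates of its two ends, using a flat-earth approximation
    that is adequate over a single building facade."""
    R = 6371000.0  # mean Earth radius in metres
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360
    length = R * math.hypot(dlat, dlon)
    return bearing, length

def standoff_distance(strip_height_m, vertical_fov_deg):
    """Distance from the wall at which the camera's vertical field
    of view just covers a strip of the given height."""
    return (strip_height_m / 2) / math.tan(math.radians(vertical_fov_deg / 2))
```

A wall running due east, for example, comes out at a bearing of 90 degrees, and a camera with a 90-degree vertical field of view must stand 1 m back to cover a 2 m strip.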
The image overlap can be used to determine the number of photographs required in the horizontal and vertical, and hence the change of altitude that is required for each flight pass of the building.
Calculate altitude
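The overlap arithmetic can be sketched as below; the field-of-view and overlap defaults are placeholder values for illustration, not the project's settings:

```python
import math

def flight_grid(wall_width_m, wall_height_m, distance_m,
                h_fov_deg=60.0, v_fov_deg=47.0,
                overlap=0.8, sidelap=0.6):
    """Photographs per pass, number of passes, and the altitude
    change between passes, derived from the image footprint on the
    facade and the required overlaps."""
    footprint_w = 2 * distance_m * math.tan(math.radians(h_fov_deg / 2))
    footprint_h = 2 * distance_m * math.tan(math.radians(v_fov_deg / 2))
    step_x = footprint_w * (1 - overlap)   # spacing between photographs
    step_z = footprint_h * (1 - sidelap)   # altitude change per pass
    photos_per_pass = math.ceil(wall_width_m / step_x) + 1
    passes = math.ceil(wall_height_m / step_z) + 1
    return photos_per_pass, passes, step_z
```

For a 30 m by 10 m wall photographed from 10 m away, these defaults give 14 photographs per pass over 4 passes, climbing about 3.5 m between passes.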
Although it is planned to give the UAV the ability to hover and take photographs, it is much easier to have it take photographs as it flies across the building façade. This requires additional calculation and control of the optimum flight speed and shutter speed, so that the photographs are not adversely affected by motion blur.
Shutter speed formula
Shutter speed calculations
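As a rough sketch, if the size that one image pixel covers on the façade is known, the fastest acceptable flight speed for a given shutter speed could be estimated as:

```python
def max_flight_speed(gsd_m, shutter_s, max_blur_px=1.0):
    """Fastest flight speed (m/s) at which the image shifts by no
    more than max_blur_px pixels during the exposure; gsd_m is the
    size on the facade of one image pixel, in metres."""
    return max_blur_px * gsd_m / shutter_s

# e.g. a 2 mm pixel footprint and a 1/1000 s shutter
speed = max_flight_speed(0.002, 1 / 1000)
```

With those figures the UAV could fly at up to 2 m/s before blur exceeds one pixel; the blur allowance is an illustrative assumption.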
These preliminary calculations were done in Microsoft Excel.
DroneKit
The drone manufacturer 3DR provides a series of software development kits (SDKs) for writing applications to control your UAV using one of the open-source autopilot systems they support.
DroneKit Python uses the Python programming language and provides a number of examples to help with programming the flight of a UAV; these include flying from co-ordinate to co-ordinate up to complete missions. Together with this there is an API (application program interface) reference which provides all of the Python commands that can be used to control the UAV.
Python
Python is a fairly easy programming language to learn, and as DroneKit already requires it to be installed and set up, it makes sense to use the same language to calculate the required parameters for the flight path. This was done with the aid of a number of online resources. A graphical user interface (GUI), created using the Tkinter Python package, was used to enter the data. The Python code performs the calculations, then exports a file which combines these calculations with the DroneKit code for controlling the autopilot. When run, the final file will control the UAV flight.
Python GUI
Virtual Drone
Experimentation doesn’t need to be done with a live UAV; it can actually be done with a virtual one using a number of pieces of open-source software, including Mission Planner, ArduCopter, MAVProxy and SITL (Software In The Loop).
Virtual Drone
Next Steps
The next step is experimentation with a UAV using this hardware and software, to test whether GPS can be used in close proximity to a structure.
The limited accuracy of standard UAV GPS, which is only to within a range of metres, also complicates the use of this method of controlling the flight. This needs to be solved either with a more accurate GPS (although the proximity to the building may block the signal), with sensors that measure distances, or with computer vision technologies to control the UAV position. After all, the UAV currently only needs to fly between two set points at a series of set altitudes above the ground.
The HoloLens was originally announced on 21 January 2015.
But it was the system's appearance at the E3 expo in America that demonstrated its abilities, including playing Minecraft in an innovative way.
Although similar in appearance to Virtual Reality goggles, the system works in a completely different way, projecting virtual components (holograms) into a view of the real world; this is called Augmented Reality.
It is believed that the HoloLens will cost between £300 and £600. It is not likely to be available before 2016.
Potential
This system has great potential for a number of areas within Archaeology and Cultural Heritage, including virtual museums, where 3D recreations of artefacts are visible to the viewer next to the surviving fragments, or site tours and visits to sites of historical interest, where a 3D reconstruction of the site is visible on top of the excavated remains.
Limitations
Those who have used it have suggested that it has a limited field of view for the immersive elements, and it is quite expensive.
Photogrammetry can provide a low-cost method of recording archaeological excavations and Cultural Heritage. But how can parts of the 3D models created be selected and annotated with information about which context or architectural element they represent, and other important details? The answer is to annotate parts of a point cloud or mesh with metadata, but what is needed to do this, and what capabilities currently exist? And what can be done with this information once it has been created?
In archaeological excavations, the excavation could be reviewed by displaying contexts by their metadata, or by the date they were excavated, showing the progress of the excavation. In Cultural Heritage, 3D models recorded at different times could be viewed by date, in an interface similar to that used in Google Earth, allowing comparison of the preservation of areas of a monument.
Metadata
Metadata by definition is data about data, in respect to photogrammetry it adds additional information to the photographs and 3D models.
Metadata has a number of important uses relevant to 3D files within Archaeology and Cultural Heritage:
The first is to document the source files, the files created and the processes used to create those files. This records the methods used and helps determine best practice.
The second, linked to the first, is the ability to use metadata to search a repository of 3D models.
A third is the annotation of 3D models. This will allow point clouds and meshes to be annotated and this information used in the display of aspects of the model when required.
Photographic Metadata
Photographic metadata is the first step in the recording of the photogrammetry process, it comes in more than one type.
EXIF
EXIF (Exchangeable image file format) Metadata can be saved in JPEG and TIFF file formats (although there are differences) and the various camera RAW formats. It carries the information recorded by the camera while taking each photograph.
With advances in camera geolocation thanks to built-in GPS receivers in cameras this data is now also included within the EXIF metadata.
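For example, the GPS position in EXIF is stored as degree, minute and second values plus a hemisphere reference; converting it to a signed decimal degree value is straightforward:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert the EXIF GPSLatitude/GPSLongitude values
    (degrees, minutes, seconds) plus the 'N'/'S'/'E'/'W'
    reference into a signed decimal degree value."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ('S', 'W') else value
```

So 51° 30' 0" N becomes 51.5, while western and southern positions come out negative.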
IPTC (International Press Telecommunications Council)
IPTC IIM (Information Interchange Model) metadata was designed by the IPTC for the interchange of news data; it adds information to that stored in EXIF, such as copyright and creator details.
XMP
XMP (Extensible Metadata Platform) was developed by Adobe and introduced in 2001; its merger with IPTC has allowed the embedding of metadata in many more file formats.
Because XMP is written in XML (Extensible Markup Language), custom metadata types can easily be created, storing information pertinent to an excavation or to an object of Cultural Heritage significance.
In the case of an Archaeological Excavation the information contained within a site photographic record could be contained within each photographic image file, with information such as:
Photograph Number.
Company name.
Site Name.
Site Code.
Context Number.
Photographic scales used.
What direction the photo was taken.
This would mean that even if the photographs were separated from the rest of the archive, their provenance could easily be rediscovered. The metadata stored in the files should be linked to one of the standards that have been devised for this kind of description, including that provided by the ADS (Archaeology Data Service).
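The fields above could be carried as a custom XMP-style packet. A minimal sketch, using Python's standard XML tools; the "arch" namespace URI and field names are made-up examples, and a real project would define and document its own schema:

```python
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
# This namespace URI is an invented example for illustration only.
ARCH_NS = "http://example.org/ns/site-photo-record/1.0/"

def site_record_xmp(record):
    """Build an XMP-style RDF Description carrying the site
    photographic record fields as custom properties."""
    ET.register_namespace("rdf", RDF_NS)
    ET.register_namespace("arch", ARCH_NS)
    rdf = ET.Element("{%s}RDF" % RDF_NS)
    desc = ET.SubElement(rdf, "{%s}Description" % RDF_NS)
    for name, value in record.items():
        ET.SubElement(desc, "{%s}%s" % (ARCH_NS, name)).text = str(value)
    return ET.tostring(rdf, encoding="unicode")

packet = site_record_xmp({
    "PhotographNumber": 42,
    "SiteName": "Example Site",
    "SiteCode": "EX15",
    "ContextNumber": 103,
    "Scales": "2 x 1m",
    "Direction": "NE",
})
```

The resulting packet travels inside the image file, so the record survives separation from the rest of the archive.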
3D Model Metadata
With the growing importance of the 3D model within Archaeology and Cultural Heritage, work has been conducted on the creation of standardised metadata for recording both these models and their creation process. Although there are now a number of solutions, none has been wholly accepted, as many projects have developed their own schemas according to the nature of the 3D objects that they focus on. There has, however, been a great deal of work on mapping between the various schemas in an attempt to make the data interoperable.
Metadata is very important for 3D models as it allows the steps taken to produce the finished model to be easily retraced and to enhance them to a required end result. Changing only one of the parameters recorded could alter the final result.
ADS (Archaeology Data Service)
The ADS has published a number of Guides to Good Practice which includes suggestions for metadata standards associated with Close-Range Photogrammetry and UAV Survey, these include:
Project-level metadata.
Image-level metadata.
Camera-level metadata.
Survey-level metadata.
Processing metadata.
Reference/Datum metadata.
Model metadata.
3D-COFORM
The 3D-COFORM consortium of museums, universities and heritage bodies has done a large amount of work on the standardisation of metadata for the documentation of 3D data in Archaeology and Cultural Heritage, its integration with other data sources and its dissemination. They have also worked on a storage solution for complete archives of data, as well as software for processing 3D data from capture, through processing, to storage and dissemination.
CIDOC-CRM (Conceptual Reference Model)
The CIDOC-CRM ontology (a conceptual model of the structure of the data) provides a framework for describing the relationships and concepts used in the documentation of Cultural Heritage.
Simpler schemas have been created that map to the CIDOC-CRM schema allowing interaction between parts of it but without all of the complexity of the whole model.
CRMdig
CRMdig is an ontology and RDF (Resource Description Framework) Schema (basic RDF elements describing the ontology). RDF is a type of XML metadata.
It encodes the metadata describing the provenance (methods of production) of the various stages of creation of 2D, 3D and animated models. It is an extension of CIDOC-CRM as part of the 3D-ICONS Project. Both the CIDOC-CRM ontology and the CRMdig extension have been used in the Greek national project 3D-SYSTEK which is concerned with recording the provenance of 3D models.
CRMarchaeo
CRMarchaeo is another ontology and RDF Schema, designed to encode metadata about the archaeological excavation process. It records stratigraphic information on each excavation, maximising interpretation capabilities, helping determine whether further excavation is required, and allowing the re-evaluation of results, comparison with previous excavations and comprehensive statistical studies.
It has also been incorporated in to the ARIADNE Project which was designed to bring together existing digital datasets under a common interface allowing integrated access by researchers to the archives.
CARARE
The CARARE schema is based upon English Heritage's MIDAS Heritage (UK Historic Environment Data Standard), the British cultural heritage standard for recording archaeological sites, battlefields, buildings, parks and gardens, shipwrecks, other areas of interest, and artefacts. CARARE concentrates only on monuments and information pertaining to them. It is also based upon the POLIS DTD (Document Type Definition) for monument inventories, and is designed to create metadata interoperable with the Europeana internet portal.
The CARARE schema wraps one or many CARARE records. The CARARE start element wraps the heritage record which includes:
The Heritage asset includes the metadata about the monument.
The Digital resource includes the metadata about a digital resource.
The Collection information includes the collection-level description.
The Activity includes metadata about an event or activity.
STARC (Science and Technology in Archaeology Research Centre)
The STARC schema is concerned with the documentation of archaeological assets including artefacts, architecture and archaeological sites. It stores its information in a local STARC repository.
It is based upon the LIDO and CARARE schemas and is compliant with CIDOC-CRM. The LIDO (Lightweight Information Describing Objects) schema is concerned with the recording of objects.
The schema has four main wrappers under the global wrapper of PROJECT:
Project Information – with the sub-wrappers of administrative information, cultural heritage asset, digital resource provenance, activities.
Cultural Heritage Asset – with sub-level wrappers of general information, Cultural Heritage subset.
Digital Resource Provenance – with sub-level wrappers of acquisition, processing and publication.
Activities – which describes all of the activities related to the digital object such as acquisition, survey, reconstruction.
3D model metadata/annotation problems
There are similar problems with 3D model metadata as with photographic metadata: metadata that isn't embedded in the pertinent file can become separated from it, losing all of the information about the file. The photographic industry has, to a certain extent, supported the standardisation of metadata and its inclusion within files, partially driven by the requirements of the many industries involved. The Cultural Heritage requirement to fully document the recording process and securely digitally archive the results, however, is not shared by many other industries. So although the creation of metadata schemas to record the information is at an advanced stage, there is little provision for embedding metadata within 3D file formats, although there is provision for including 3D models and metadata within repositories. Much metadata for 3D models is therefore held in external files.
There are, however, some 3D formats that were created with the intention of becoming a standard for 3D files. One of these, COLLADA, was begun by Sony Computer Entertainment as a way to exchange digital assets. It has an XML database schema which allows files to be exchanged between programs without loss of data, and, because it is an ASCII (human-readable) file format, metadata can easily be integrated within the file. An extension to the format allows the definition of 3D areas.
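As a sketch of how an annotation might ride along in COLLADA's <extra>/<technique> extension elements, which accept arbitrary XML; the profile name and child element names below are invented for illustration:

```python
import xml.etree.ElementTree as ET

def annotation_extra(node_id, label, description):
    """Build a COLLADA <extra> element carrying an annotation for a
    named node.  COLLADA permits arbitrary XML under
    <extra>/<technique>; the 'site_annotation' profile and the
    child element names are made-up examples."""
    extra = ET.Element("extra")
    tech = ET.SubElement(extra, "technique", profile="site_annotation")
    ann = ET.SubElement(tech, "annotation", target=node_id)
    ET.SubElement(ann, "label").text = label
    ET.SubElement(ann, "description").text = description
    return ET.tostring(extra, encoding="unicode")

xml_fragment = annotation_extra("wall_03", "ashlar", "Dressed stone facing")
```

Because the fragment is plain XML, it can sit inside the COLLADA document itself, keeping the annotation with the model rather than in an external file.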
The X3D file format is another XML based storage solution developed from the Virtual Reality Modelling Language (VRML), it can store 2D and 3D graphics, CAD data, animation, spatialized audio and video and cameras. The Web3D consortium has developed a Javascript interface which allows X3D to be supported natively within an HTML 5 web page without the need of a plugin.
The metadata schemas for the creation of metadata for 3D models have led to the development of more than one solution for annotating 3D models.
There are two main strategies towards the annotation of 3D files:
Persistent annotation is where the annotation is stored within a database.
Transient annotation is where the annotation is stored within the file using such formats as VRML/X3D and COLLADA.
There are also a number of requirements for an interface which can annotate point clouds and meshes:
An ability to alter the viewpoint to allow a good view of the region required.
An ability to select the region to be annotated.
An ability to create a text entry for the annotation, and this will also need to be in a metadata schema appropriate for the model.
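A minimal sketch of the record such an interface would need to capture is given below; the field names are illustrative, not taken from any particular schema:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A minimal annotation record covering the three requirements:
    a selected region, the free-text entry, and fields from an
    appropriate metadata schema (illustrative names only)."""
    point_indices: list                 # indices of the selected points
    text: str                           # free-text annotation entry
    schema_fields: dict = field(default_factory=dict)

ann = Annotation([10, 11, 42], "Possible door jamb",
                 {"ContextNumber": 117, "Period": "Medieval"})
```

Whether such a record is then written to a database (persistent) or embedded in the model file (transient) is a separate storage decision.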
3D model repositories and annotation
The requirement for digital archives for the 3D models has led to one solution allowing both metadata and annotation to be undertaken.
This solution is favoured by the museum community, which already has large amounts of stored information requiring integration; they have created online repositories of 3D models which allow multiple users to annotate the models. Because of the budgetary cost of annotation, they have also allowed the general public to annotate the models.
The information in these repositories needs to adhere to the metadata standards already discussed allowing searching and retrieval when required.
3D-COFORM
The 3D-COFORM consortium have developed a collection of tools aimed at advancing 3D digitisation and documentation and archiving of the products; with a complete pipeline from recording through processing to archiving and dissemination.
The Meshlab software allows the processing of meshes and their visualisation.
The Arc 3D Webservice allows the upload of images and the creation of a 3D model from them.
The Ingestion Tool enables the interactive creation of metadata using appropriate forms at each stage of a 3D reconstruction process, this metadata is then integrated with the 3D-COFORM repository. It also aids users in gaining familiarity with the CIDOC-CRM.
The Repository Infrastructure is a digital storage solution for 3D models and metadata, it is searchable using text-based queries, shape and material. It supports the storage of 3D objects within its Object Repository (OR) and metadata and annotation within its Metadata Repository (MR). The Object Repository is implemented on top of parts of the open source GLOBUS Grid Software Environment, while the Metadata Repository was created using a Sesame RDF Repository. The system is based upon CIDOC-CRM and CRMdig.
A Legacy Mapping Tool allows mapping of standard relational database management systems to the 3D-COFORM Repository Infrastructure allowing integration of these data sources.
The Integrated Viewer/Browser (IVB) attaches to the Repository Infrastructure enriching the information stored by adding information to the object, including relationships between different objects, browsing the collection and viewing the 3D artefacts. It also allows the segmentation and definition of areas of the objects which are then stored back into the Repository Infrastructure. The tool has three main sections which display the 3D view, annotation tools and metadata.
The Annotation of 3D models can be done with this tool once they have been uploaded to the Repository Infrastructure, with their metadata created in the Ingestion Tool. An area is selected using either a sphere, cylinder or segment; it is then annotated with either a comment or relationship between multiple areas.
Once an area is annotated this is propagated to all representations of the 3D object within the Repository Infrastructure. In order to do this the METS metadata schema (descriptive, administrative, and structural metadata regarding objects) is used to wrap 3D files in the COLLADA file format. This is useful as it could record the condition of sculptural elements over time, and the different models could be viewed side by side in the application.
Queries can include searches for all 3D models of a certain date or related to a certain recording exercise.
This work was continued by some of the researchers in the 3D-SYSTEK (3DS) Project; their system has the ReposIt and BrowseIt tools. ReposIt includes simple tabbed input forms recording what the data is, how it was produced and which input/setup was used, together with the automatic generation of metadata from validated form fields.
The BrowseIt tool allows searching, browsing, annotating and the downloading of 3D models.
It requires no further installation as it uses the abilities of HTML5 combined with Java Servlets technology to build the user interface.
The Site Explorer tool allows multiple objects to be retrieved from a repository and displayed together in an interactive 3D environment, these can include digital terrain models, metadata and photographs.
3DSA
The 3DSA (3D Semantic Annotation) is a web-based semantic tagging and annotation service for 3D cultural heritage objects.
It is written in HTML5 and WebGL, allowing it to run natively within a web browser without the need to install a plugin.
The system is based upon the Open Annotation Collaboration (OAC) model, which is concerned with scholars annotating a 3D model as an aid to memory, to add commentary or to classify, and is aimed at the cheaper provision of metadata for museums via social tagging. This has been extended using X3D fragment identifiers, and has far-reaching implications for annotation earlier in the recording chain.
The application allows the selection of points, surface regions or sub-parts of 3D objects in .ply (Polygon File Format) files. The X3D file format is used for the storage and identification of the 3D segments, which are given a unique URI (Uniform Resource Identifier) when they are created.
The annotations created are stored in a Sesame RDF Repository, linked via X3D fragment identifiers and published online with HTTP URIs; these link to the 3D digital objects, which are stored within an online Fedora Repository. The creation, updating, deleting and querying of annotations is provided by the Danno API (Application Programming Interface).
An annotation can be chosen from a list of popular tags from ontology or a new one can be created. It is possible to annotate multiple objects at the same time with the same annotation.
It has the ability to recognise the shape of the object from a database of 2D object shapes and determine what type of pot it is.
3D model annotation within 3D software
An ideal solution to the annotation of point clouds and meshes would be being able to do it within the photogrammetry software that creates them, but sadly this is not currently possible.
Photogrammetry Software
Photogrammetry software such as Agisoft PhotoScan and Pix4Dmapper use proprietary file formats; these closed formats are strictly tied to the software that creates them. Even when the 3D models are exported to another format, many of these are also closed formats, and any metadata written to one file format might be lost when changing to another.
The proprietary file formats within 3D software in general have been blamed for impeding the creation of a universally accepted open standard.
These programs have the ability to select parts of point clouds and meshes with a view to masking or removing them from the 3D model; this ability could be used to annotate those parts of the file as well.
Within the Pix4Dmapper software it is possible to select areas of the point cloud and create a Point Group, which can be turned on and off at will, but apart from naming the group there is no option to add metadata. The Point Groups can also be exported. The software also creates a project folder containing all of the photographs, the files from the various stages of processing, and a report on the whole process in .pdf and .html formats, although the 3D models created are in the proprietary file format.
3D PDF
As with photographic metadata, Adobe was at the centre of an attempt to create a standard with 3D PDF, which allows the annotation of 3D models using the U3D file format stored within the PDF. But it uses proprietary tools, the annotations can only be attached to a single point, and they cannot be read from outside the PDF document.
Meshlab
Meshlab is a program for working on unstructured triangular meshes and can form an important part in the open source processing of photogrammetry data.
As it is supported by the 3D-COFORM consortium, it would be the ideal place to annotate 3D models using the metadata standards developed by the consortium before they are uploaded to a repository for storage and dissemination.
3DUI Competition 2014
The requirement of the 2014 3DUI (3D User Interfaces) Contest at the IEEE (Institute of Electrical and Electronics Engineers) Symposium on 3D User Interfaces was to design a system to annotate 3D point clouds. It produced a number of novel solutions to the 3D point cloud annotation problem.
Slice-n-Swipe
The Slice-n-Swipe system from Virginia Tech uses a Leap Motion controller to track the position of one hand, while the rotation of the point cloud is controlled by the other hand using a 3D mouse. As the selection of points within a point cloud can be complicated, the system uses the concept of progressive refinement, where a large area is selected and then gradually refined down to the actual points required, with the camera view zooming into the reduced point cloud to allow more accuracy.
The system has a number of different techniques for selecting parts of a point cloud:
The first technique uses a chef's knife metaphor, with one finger being tracked by the Leap Motion controller as it slices through the point cloud; this takes into account the limitations of the Leap in tracking a whole hand. A more precise method is provided by a cut using a rubber-band metaphor, which gives a preview of the cut before it is finalised.
A second technique uses a bubble, controlled with two fingers, brushed across the surface of the point cloud, selecting the required points and swiping away what is not required; opening and closing the two fingers controls the size of the bubble.
The final technique uses a lasso tool to draw around the area required.
The selected points can then be annotated with text which will appear next to the point cloud, a selected group of points can have multiple annotations.
Bi-Manual Gesture Interaction for 3D Cloud Point Selection and Annotation using COTS
This system from the Escola Politécnica da Universidade de São Paulo uses a Leap Motion controller to track gestures which virtually fly a sphere around the point cloud within the open-source Blender software. Points are selected using the second hand, tilted approximately 45 degrees clockwise.
The point cloud is then annotated within Blender by grouping them.
The Point Walker Multi-label Approach
The system developed by the Instituto de Informática (INF), Universidade Federal do Rio Grande do Sul (UFRGS) involves walking through the point cloud. This is done with a weight platform on which the user walks in place, wearing a VR (Virtual Reality) headset which tracks their head movement.
It uses a smartphone as a pointing device to first coarsely select an area, then gesture control is used to squeeze the ellipsoidal selection area. Hierarchical label levels are navigated by leaning backwards and forwards on the weight platform and selected with the smartphone, while new labels are created by voice command.
Touching the Cloud: Bimanual Annotation of Immersive Point Clouds
The system developed by the Immersive Media Group, Würzburg, Germany consists of an Oculus Rift VR headset for visualisation and a PrimeSense Carmine 1.09 depth sensor which reads hand gestures using the 3Gear SDK (Software Development Kit). The point cloud is visualised using the Unity3D game engine.
The Point Cloud can be manipulated using gestures such as rotation, scale and translation.
The point cloud can be selected by touching it, with a customisable tolerance radius determining the number of points selected.
The point cloud can be annotated using speech recognition software or from a pre-defined list. The hierarchy of annotation is displayed next to the point cloud.
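The tolerance-radius selection can be sketched as a simple brute-force distance test; a real system would use a spatial index for large clouds:

```python
def select_within_radius(points, touch, radius):
    """Return the indices of all points lying within the tolerance
    radius of the touched position (brute-force distance check)."""
    r2 = radius * radius
    return [i for i, (x, y, z) in enumerate(points)
            if (x - touch[0]) ** 2 + (y - touch[1]) ** 2
               + (z - touch[2]) ** 2 <= r2]
```

Widening the radius pulls in more neighbouring points around the touched position, which is exactly the customisable tolerance the system describes.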
Go’Then’Tag (GTT)
The Go’Then’Tag tool-set by Holo3 and ICube combines a computer running software developed in Unity3D and MiddleVR on Windows 7 with a mobile device running Android.
The system automatically divides the 3D model into sub-parts based on the geometry of the point cloud using an algorithm; this can then be progressively refined. Points can be selected with a pencil selection tool using simple geometric shapes, the shapes being attached to a ray controlled by the 6-DOF (Degrees of Freedom) tracking of the mobile device. They can either be removed from a selection or added to a new group of points.
The touch screen of the device is used for defining and editing the hierarchy of tags. Through the visual interface, tags in the hierarchy can be turned off, modified or moved.
3D Point Cloud Object Detection
A completely different method of annotating areas of 3D point files is provided by the ability of some software to identify 3D objects within a point cloud.
A number of researchers have begun work on the automatic identification of the objects within Laser Scan and Lidar point clouds, including the detection of buildings, car models and railway infrastructure.
Building Information Modelling (BIM)
Another method of labelling point clouds is provided by Building Information Modelling, the 3D modelling of buildings using digital assets which can be altered at any point by user intervention or updated automatically; it is designed for the AEC (Architecture, Engineering and Construction) industry, allowing the modelling of architecture.
By using a set of standardised parametric objects (objects defined by parameters and the rules governing them) within the software, a 3D model can be created from a point cloud; this set of pre-defined objects can be modified, or added to with custom parametric objects, allowing complex industry-specific geometry to be created.
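As a toy illustration of the parametric idea, here is a stand-in "column shaft" object whose stored parameters generate its geometry; this is an analogy in Python, not GDL or any real BIM object:

```python
import math

def column_profile(radius, height, segments=16):
    """Generate the two vertex rings (base and top) of a plain
    cylindrical column shaft.  The stored parameters (radius,
    height, segments) fully determine the geometry, so changing a
    parameter regenerates the object, which is the essence of a
    parametric object."""
    ring = [(radius * math.cos(2 * math.pi * i / segments),
             radius * math.sin(2 * math.pi * i / segments))
            for i in range(segments)]
    base = [(x, y, 0.0) for x, y in ring]
    top = [(x, y, height) for x, y in ring]
    return base + top
```

A real BIM parametric object would also carry rules and metadata, but the principle is the same: the parameters drive the geometry, not the other way round.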
One of its great benefits for the Cultural Heritage Industry is its ability to map the interactive parametric objects directly onto a laser scan point cloud.
The industry leaders in BIM software are Autodesk Revit, Graphisoft ArchiCAD and Bentley Architecture.
Historic building information modelling (HBIM)
Historic Building Information Modelling (HBIM) is an expansion of BIM; it consists of a library of parametric architectural element objects based upon the work of the Roman architect Marcus Vitruvius Pollio and his interpreters, the works of the Renaissance architects, and 17th-, 18th- and 19th-century architectural pattern books.
The parametric objects were modelled within the Graphisoft ArchiCAD software using the Geometric Description Language (GDL); the parameters were scripted so as to create dynamic objects which can be used more than once. These objects can be stored in internal libraries or databases for later reuse or alteration.
The aims of HBIM are to enable automatic conservation documentation from laser scan data and image-based building surveys by creating engineering drawings and schedules (energy, cost, decay, etc.), allowing precise conservation of Cultural Heritage architecture. The 3D documentation extracts a great deal of information from the point cloud data, including orthographic projections and sections.
Work has already begun on the creation of libraries of objects for other architectural periods, such as the Romanesque. As well as recording these structures, this has the potential for the virtual reconstruction of buildings, using the parametric objects to replace what may have been lost from a structure. The objects were created from the point clouds of surviving examples.
Within the ArchiCAD software, custom Industry Foundation Classes (IFCs) can be set up to hold metadata about each object created from the point cloud. Custom Project Info fields can also be created, allowing metadata at the project level to be incorporated into the file.
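The two levels of metadata described above can be sketched as simple records. This is a minimal illustration only, loosely modelled on IFC property sets; every field name and value here is an assumption for demonstration, not the ArchiCAD or IFC API.

```python
# Object-level metadata, in the spirit of an IFC property set
# (field names are illustrative assumptions).
object_metadata = {
    "GlobalId": "col-017",
    "PropertySet": {
        "SourcePointCloud": "nave_scan_2014.e57",   # hypothetical file
        "ArchitecturalPeriod": "Romanesque",
        "Surveyor": "J. Smith",                     # hypothetical name
    },
}

# Project-level metadata, analogous to custom Project Info fields.
project_info = {
    "ProjectName": "Church Facade Survey",
    "SurveyDate": "2014-06-12",
    "Schema": "CIDOC CRM / CRMdig",
}

# At export time both levels travel together with the model file.
record = {"project": project_info, "objects": [object_metadata]}
print(record["objects"][0]["PropertySet"]["ArchitecturalPeriod"])  # Romanesque
```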
This technology can of course only be used for the recording of standing buildings or partial remains within an archaeological excavation.
Conclusion
Architecture
The BIM technology has the capability to identify architectural elements within point clouds of Cultural Heritage architecture, allowing the elements to be annotated and grouped together, with pertinent metadata from an architectural schema recorded in detail. The relevant parametric objects for the architectural style would need to be created using GDL. The point cloud could then be saved in a format such as COLLADA for upload to and inclusion within a repository.
There is also the possibility of storing the parametric objects within a repository and, in a similar technique to the 3DSA project, using the BIM parametric objects within it to semi-automatically identify architectural elements from the period assigned to each object.
This has great potential for the conservation of buildings where 3D models created at different times could be quickly compared by having annotations which replicate across the various models of the same architectural element.
Archaeological Excavation
With slight alterations it would be possible to integrate the CRMarchaeo and CRMdig schemas and record metadata for 3D models of archaeological contexts; 3D models of contexts could then be labelled with their context number at each level of processing.
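A minimal sketch of such context-level labelling, assuming nothing beyond the idea described above (the field names are illustrative, not actual CRMarchaeo/CRMdig terms): each 3D model carries its excavation context number and processing stage, with a simple provenance link in the CRMdig spirit.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContextModel:
    context_number: str                 # excavation context, e.g. "1024"
    processing_stage: str               # "raw", "cleaned", "meshed", ...
    model_file: str
    derived_from: Optional[str] = None  # provenance link (CRMdig-style)

stages = [
    ContextModel("1024", "raw", "ctx1024_raw.ply"),
    ContextModel("1024", "meshed", "ctx1024_mesh.obj",
                 derived_from="ctx1024_raw.ply"),
]

# Group every processing stage of a context under its context number,
# as a repository tool might when displaying models by context.
by_context = {}
for m in stages:
    by_context.setdefault(m.context_number, []).append(m)
print(len(by_context["1024"]))  # 2 stages recorded for context 1024
```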
The various 3D models could then be uploaded to a repository, where a tool such as Site Explorer could be used to display the 3D models and other data from the excavation by context number, as is currently possible by cultural occupation period. With the acceptance of a metadata standard, or a mapping between the various schemas within digital archives, it would be possible to display and query 3D models and other data from more than one archive at the same time, integrating diverse information. All of the information from the various phases of an excavation could then be viewed within one interface drawing its data from the repository.
Software
Photogrammetry software currently has the ability to select parts of a model, such as points or parts of a mesh; with an update, the software would be able to write metadata at the various stages of processing and export it, whether as an external file or within a file format such as COLLADA or X3D.
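COLLADA already provides a standard hook for carrying such tool-specific data: the `<extra>` element with a named `<technique>` profile. The sketch below builds a skeletal COLLADA document and embeds a processing-stage field in it; the profile name and metadata field are assumptions for illustration, not an established schema.

```python
import xml.etree.ElementTree as ET

NS = "http://www.collada.org/2005/11/COLLADASchema"
ET.register_namespace("", NS)  # serialise COLLADA as the default namespace

# Minimal COLLADA skeleton standing in for a photogrammetry export.
root = ET.Element(f"{{{NS}}}COLLADA", version="1.4.1")
ET.SubElement(root, f"{{{NS}}}asset")

# <extra>/<technique> is COLLADA's sanctioned extension point;
# "heritage_metadata" and "processing_stage" are illustrative names.
extra = ET.SubElement(root, f"{{{NS}}}extra")
tech = ET.SubElement(extra, f"{{{NS}}}technique", profile="heritage_metadata")
stage = ET.SubElement(tech, f"{{{NS}}}processing_stage")
stage.text = "dense_point_cloud"

xml_text = ET.tostring(root, encoding="unicode")
print("processing_stage" in xml_text)  # True
```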
It would be possible to design software which can read in a 3D file exported from the photogrammetry software (in COLLADA or X3D file formats) and, using a metadata schema, write all the pertinent information. A slightly altered 3D-COFORM ingestion tool could be designed which allows the upload of 3D models with existing metadata into a repository. Work done for the 3DUI competition has demonstrated that the selection and annotation of point clouds in stand-alone applications is possible with low-cost sensor and game-engine technology; the inclusion of metadata annotation and export would be the next step.
Repository
Although it would be more beneficial to annotate 3D models as they are produced, rather than after they are uploaded to a repository, this method does currently provide a working solution. It is also an important part of the archiving and dissemination of the information, and provides an interface to integrate information from excavations of a site decades apart with the latest 3D data from an excavation.
Final Words
The potential of 3D model annotation at the point and mesh levels is almost realised: it is already possible to annotate architectural models within the Integrated Viewer/Browser attached to a Repository Infrastructure. Other techniques such as BIM may add to these abilities and provide additional information for the archive.
Archaeological 3D context models could reach a similar stage with some slight alterations to metadata schemas and the Site Explorer tool, allowing links between 3D models and archaeological stratigraphy.
The 3DUI competition tools have demonstrated innovative methods of splitting and annotating point clouds which could be incorporated into the next generation of annotation tools.
Bibliography
Axaridou, Anastasia, Ioannis Chrysakis, Christos Georgis, Maria Theodoridou, Martin Doerr, Antonios Konstantaras, and Emmanuel Maravelakis. “3D-SYSTEK: Recording and exploiting the production workflow of 3D-models in Cultural Heritage.” In Information, Intelligence, Systems and Applications, IISA 2014, The 5th International Conference on, pp. 51-56. IEEE, 2014.
Bacim, Felipe, Mahdi Nabiyouni, and Doug A. Bowman. “Slice-n-Swipe: A free-hand gesture user interface for 3D point cloud annotation.” In 3D User Interfaces (3DUI), 2014 IEEE Symposium on, pp. 185-186. IEEE, 2014.
Boeykens, Stefan, and Elena Bogani. “Metadata for 3D models. How to search in 3D model repositories?” ICERI 2008 Proceedings (2008): 11p.
Cabral, Marcio, Andre Montes, Olavo Belloc, Rodrigo Ferraz, Fernando Teubl, Fabio Doreto, Roseli Lopes, and Marcelo Zuffo. “Bi-manual gesture interaction for 3D cloud point selection and annotation using COTS.” In 3D User Interfaces (3DUI), 2014 IEEE Symposium on, pp. 187-188. IEEE, 2014.
Chevrier, Christine. “Semiautomatic parametric modelling of the buildings on town scale models.” Journal on Computing and Cultural Heritage (JOCCH) 7, no. 4 (2015): 20.
Doerr, Martin. “The CIDOC CRM, an Ontological Approach to Schema Heterogeneity.” Semantic interoperability and integration 4391 (2005).
Doerr, Martin, Katerina Tzompanaki, Maria Theodoridou, Christos Georgis, A. Axaridou, and Sven Havemann. “A Repository for 3D Model Production and Interpretation in Culture and Beyond.” In VAST 2010.
Doerr, Martin, and Maria Theodoridou. “CRMdig: A Generic Digital Provenance Model for Scientific Observation.” In TaPP. 2011.
Felicetti, Achille, and Matteo Lorenzini. “Metadata and tools for integration and preservation of cultural heritage 3D information.” In Proceedings of the 23rd international CIPA symposium, Prague, pp. 12-16. 2011.
Hmida, Helmi Ben, Christophe Cruz, Frank Boochs, and Christophe Nicolle. “From 3D Point Clouds To Semantic Objects An Ontology-Based Detection Approach.” arXiv preprint arXiv:1301.4783 (2013).
Koller, David, Bernard Frischer, and Greg Humphreys. “Research challenges for digital archives of 3D cultural heritage models.” Journal on Computing and Cultural Heritage (JOCCH) 2, no. 3 (2009): 7.
Krammes, Hernandi, Marcio M. Silva, Theodoro Mota, Matheus T. Tura, Anderson Maciel, and Luciana Nedel. “The point walker multi-label approach.” In 3D User Interfaces (3DUI), 2014 IEEE Symposium on, pp. 189-190. IEEE, 2014.
Lubos, Paul, Rudiger Beimler, Markus Lammers, and Frank Steinicke. “Touching the Cloud: Bimanual annotation of immersive point clouds.” In 3D User Interfaces (3DUI), 2014 IEEE Symposium on, pp. 191-192. IEEE, 2014.
Maravelakis, E., A. Konstantaras, A. Kritsotaki, D. Angelakis, and M. Xinogalos. “Analysing user needs for a unified 3D metadata recording and exploitation of cultural heritage monuments system.” In Advances in Visual Computing, pp. 138-147. Springer Berlin Heidelberg, 2013.
Maravelakis, E., A. Konstantaras, K. Kabassi, I. Chrysakis, C. Georgis, and A. Axaridou. “3DSYSTEK web-based point cloud viewer.” In Information, Intelligence, Systems and Applications, IISA 2014, The 5th International Conference on, pp. 262-266. IEEE, 2014.
Murphy, Maurice, Eugene McGovern, and Sara Pavia. “Historic building information modelling (HBIM).” Structural Survey 27, no. 4 (2009): 311-327.
Murphy, Maurice, Eugene McGovern, and Sara Pavia. “Historic Building Information Modelling–Adding intelligence to laser and image based surveys of European classical architecture.” ISPRS journal of photogrammetry and remote sensing 76 (2013): 89-102.
Pătrăucean, Viorica, Iro Armeni, Mohammad Nahangi, Jamie Yeung, Ioannis Brilakis, and Carl Haas. “State of research in automatic as-built modelling.” Advanced Engineering Informatics 29, no. 2 (2015): 162-171.
Pitzalis, Denis, Franco Niccolucci, M. Theodoriou, and Martin Doerr. “LIDO and CRM dig from a 3D cultural heritage documentation perspective.” In Proceedings of the 11th International conference on Virtual Reality, Archaeology and Cultural Heritage, pp. 87-95. Eurographics Association, 2010.
Ronzino, Paola, Nicola Amico, and Franco Niccolucci. “Assessment and comparison of metadata schemas for architectural heritage.” Proc. of CIPA (2011).
Ronzino, P., S. Hermon, and F. Niccolucci. “A metadata schema for cultural heritage documentation.” V. Cappellini (ed.), Electronic Imaging & the Visual Arts: EVA (2012): 36-41.
Strohmeier, Felix, Jorge López de Vergara, Javier Aracil, Alfredo Salvador, József Stéger, István Csabai, Gábor Vattay et al. “D. 3.2: Final specification of the Unified Interface.”
Theodoridou, Maria, Yannis Tzitzikas, Martin Doerr, Yannis Marketakis, and Valantis Melessanakis. “Modeling and querying provenance by extending CIDOC CRM.” Distributed and Parallel Databases 27, no. 2 (2010): 169-210.
Veit, Manuel, and Antonio Capobianco. “Go’Then’Tag: A 3-D point cloud annotation technique.” In 3D User Interfaces (3DUI), 2014 IEEE Symposium on, pp. 193-194. IEEE, 2014.
Visintini, Domenico, Eliana Siotto, and Elena Menean. “3D modeling of the St. Anthony Abbot Church in S. Daniele del Friuli (I): From laser scanning and photogrammetry to Vrml/x3D model.” In Proceedings of 3rd ISPRS International Workshop,(3D-ARCh). 2009.
Yu, Chih-Hao. “Semantic annotation of 3D digital representation of cultural artefacts.” TCDL-Bulletin of IEEE Technical Committee on Digital Libraries 6, no. 2 (2010).
Yu, Chih-Hao, Tudor Groza, and Jane Hunter. “High speed capture, retrieval and rendering of segment-based annotations on 3D museum objects.” In Digital Libraries: For Cultural Heritage, Knowledge Dissemination, and Future Creation, pp. 5-15. Springer Berlin Heidelberg, 2011.
Google has announced that their Street View Trekker backpack will be available for organisations to borrow, including tourism boards, non-profit organisations, government agencies, universities and research groups. The Trekker backpack was designed to enable the recording of areas of the world that the Google Street View car cannot reach, so that the imagery can be incorporated into Street View within Google Maps.
The Street View Trekker backpack consists of a dome of 15 5-megapixel digital cameras which record images every 2.5 seconds as the wearer walks forward, two GPS receivers which log the location data, two SSDs (Solid State Drives) which store the data, and dual lithium batteries which allow 8 hours of recording. The images are processed into 360° panoramas when the system is returned to the office. The system weighs 42 pounds.
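The quoted figures give a sense of the capture volume per charge. The arithmetic below is purely illustrative, using only the numbers stated above:

```python
# Back-of-envelope capture volume for one 8-hour battery charge,
# based solely on the figures quoted in the text.
cameras = 15          # cameras in the dome
interval_s = 2.5      # seconds between capture events
battery_hours = 8

events = battery_hours * 3600 / interval_s   # capture events per charge
images = events * cameras                    # individual 5 MP frames
print(int(events), int(images))  # 11520 events, 172800 images
```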
Potential
Through a partnership with Google, heritage bodies have the ability to record walkthroughs of important monuments easily with advanced digital technology, which can then be incorporated into the free Google Maps system; as of December 2014, Historic Scotland has taken advantage of this, showcasing 16 of their properties.
The system provides quality site tours of important cultural heritage which can be viewed by anyone using the Street View system.
Limitations
As the system is ground based, only views from this angle will be recorded, meaning that information from other angles is lost.
The cameras are only 5-megapixel, which works well for the intended purpose of creating web-accessible 360° panoramas, but limits their usefulness for other techniques such as photogrammetry.