
Open software and standards in the realm of laser scanning technology

  • Francesco Pirotti
Open Access
Review

Abstract

This review aims at introducing laser scanning technology and providing an overview of the contribution of open source projects to supporting the utilization and analysis of laser scanning data. Lidar technology is pushing the mapping and surveying of topographic data to new frontiers. The open source community has supported this by providing libraries, standards, interfaces and modules, all the way to full software. Such open solutions provide scientists and end-users with valuable tools to access and work with lidar data, fostering new cutting-edge investigation and improvement of existing methods.

The first part of this work provides an introduction to laser scanning principles, with references for further reading. It is followed by sections reporting respectively on open standards and formats for lidar data, on tools, and finally on web-based solutions for accessing lidar data. It is not intended to provide a thorough review of the state of the art of lidar technology itself, but to give an overview of the open source toolkits available to the community to access, visualize, edit and process point clouds. A range of open source features for lidar data access and analysis is presented, giving an overview of what can be done with alternatives to commercial end-to-end solutions. Data standards and formats are also discussed, showing the challenges of storing and accessing massive point clouds.

The aim is to give scientists who have not yet worked with lidar data an overview of how this technology works and of the open source tools that can be a valid solution for their needs in analysing such data. Researchers already involved with lidar data will hopefully get ideas on integrating and improving their workflows through open source solutions.

Keywords

Laser scanning · Lidar · Big data · Open data · Libraries · Application programming interface · Web portals

Introduction

Laser scanning technology brought new perspectives to 3D geospatial analysis, from close-range fixed scanners for modelling objects (e.g. statues in cultural heritage or industry installations for 3D CAD models), to satellites equipped with full-waveform laser digitizers such as NASA’s two ICESat (Ice, Cloud, and Land Elevation Satellite) missions for ice volume estimation at the poles. Lidar – from light detection and ranging, by analogy with radar – is a measurement technology, usually associated with airborne applications [1]. There is no consensus on whether the term “lidar” should be capitalized; in this work the indications in [1] are followed, and the term is not capitalized. Laser scanning is a term sometimes used interchangeably with lidar, but it more correctly indicates the process of scanning objects or the earth surface with such technology, producing multiple measurements. Lidar technology measures travel distance from emitters (sources) to reflectors (targets) using time of flight (ToF), phase difference or triangulation. On-board detectors can discriminate light energy back-scattered from targets, and thus calculate distance using one of the three methods mentioned, within a defined accuracy range [2]. This technology is paired with navigation sensors, i.e. a global navigation satellite system (GNSS) receiver and an inertial navigation system (INS), which play a key role in providing respectively the position and direction of the emitted pulse in a common coordinate reference frame [3, 4]. Returns from emitted pulses allow mapping the 3D location of all targets that totally or partially reflect the pulse energy. Partial reflection is possible because the laser beam has a certain “thickness” depending on its divergence angle. Multiple reflections result in multiple points that are recorded and stored in a file. The final raw product is called a point cloud.
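As a minimal illustration of the ToF principle described above, the range to a target follows from halving the measured round-trip travel time of the pulse (function name and numbers are illustrative):

```python
# Time-of-flight (ToF) ranging: the sensor measures the round-trip travel
# time of a pulse and halves it to obtain the one-way distance.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_time_s: float) -> float:
    """Range to the target from the round-trip pulse travel time."""
    return C * round_trip_time_s / 2.0

# A pulse returning after ~6.67 microseconds corresponds to a ~1 km range.
print(round(tof_range(6.671e-6)))  # 1000
```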
The fact that recent lidar sensors can discriminate multiple returns from targets that cause partial reflection implies that a single emitted pulse can hit more than one target and thus provide a partial degree of penetration of vegetation. This unique capability is a very important asset of lidar technology, as it allows sampling terrain under vegetation cover and also getting information on vegetation structure.

Laser scanning technology

Laser scanners have different characteristics depending on their intended applications. Important differences are range, wavelength, precision, accuracy, pulse frequency and the capability of discriminating single/multiple returns up to the full waveform of the laser return signal. Sensors can measure from close range – below a metre, for modelling small objects – to long range – up to kilometres, for surveying parts of the earth surface and bigger objects. Precision – the similarity of repeated measures – and accuracy – how close to the real value the measure is – of the laser range measurements vary. Precision and accuracy of range measurements, and thus of point positions, can range from sub-millimetric for close-range scanners to centimetric for aerial and satellite sensors. This is a generic order of magnitude, not accounting for other, more significant, sources of error from components of the sensor, e.g. GNSS and INS accuracies, incidence angle, lever arm and boresight misalignment, to name a few. Accuracy of aerial lidar reaches ±0.5 m depending on the above factors [5]. Verification of the accuracy of the nadir-looking lidar sensor in the first ICESat mission has shown that data over Antarctica have an accuracy of ±0.14 m [6]. Papers on the error budget [7, 8] give more details, directly and in cited literature.

Lidar characteristics

A laser emits coherent light with a very small angle of divergence, thus allowing light to remain very focused over a long distance. Depending on the angle of divergence and the distance from the emitter, the laser beam will have a certain thickness, referred to by some authors [9] as the diffraction cone. The thickness implies a certain footprint size when projected on a surface. The footprint size ranges from centimetres to metres, and affects penetration capabilities, number of returns, accuracy and precision [10].
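The footprint size can be sketched with the small-angle approximation, diameter ≈ range × divergence angle; a simplification that ignores beam waist and incidence angle (function name and numbers are illustrative):

```python
def footprint_diameter(range_m: float, divergence_mrad: float) -> float:
    """Approximate laser footprint diameter on a surface normal to the beam,
    using the small-angle approximation D = R * gamma, with the full
    divergence angle gamma given in milliradians."""
    return range_m * divergence_mrad * 1e-3

# e.g. 0.5 mrad divergence at 1000 m range gives a half-metre footprint.
print(footprint_diameter(1000.0, 0.5))  # 0.5
```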

The wavelength of the emitted laser beam is usually in the bright green spectral region (~ 532 nm) or in the near-infrared region (~ 1064 nm) of the energy spectrum [11]. The fact that the two mentioned wavelengths are multiples is due to the frequency doubling of the solid-state laser diode. Other wavelength values are possible by using fibre lasers. Bathymetric lidar does not use near-infrared lasers because this wavelength gets almost completely absorbed in the first 10 cm of water [12]. As is also evident in Fig. 1 (bottom), wavelengths in the green part of the spectrum have a higher degree of water penetration. Bathymetric lidar can reach the seafloor up to 50 m below sea level in very clear conditions, and normally up to 12 m [15] considering average water turbidity. Multi-wavelength lidar sensors are now available, providing more spectral features and thus more information regarding the physical properties of the target’s material [16]. Each wavelength has its pros and cons related to eye-safety, target reflectance (Fig. 1), background radiation and atmospheric transmission [12]. The energy of the laser beam will be reflected and absorbed by a surface; the absorption/reflectance ratio depends on the material of the surface that interacts with the pulse (Fig. 1), on the texture of the surface and on the incidence angle [11]. This return energy is recorded by the sensor and stored in the data as intensity values. Intensity supports classification of points and visual interpretation of the point cloud.
Fig. 1

Reflectance and absorption of light as a function of wavelength. (Top) reflectance of different materials – vertical red lines are at 532 nm and 1064 nm wavelengths of solid state lasers – seawater and ice have very low reflectance values thus overlap at this scale. Data are taken from the open ASTER spectral library [13]; (bottom) light absorption by water from various literature – open data collected from resources in Scott Prahl’s web page [14]
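The relation between the two solid-state laser wavelengths mentioned above is a one-line computation: frequency doubling halves the wavelength (a sketch, assuming the 1064 nm near-infrared fundamental):

```python
# Frequency doubling in a solid-state laser: doubling the optical frequency
# halves the wavelength, turning the 1064 nm near-infrared emission into the
# 532 nm green line used e.g. by bathymetric lidar.
def doubled_wavelength(fundamental_nm: float) -> float:
    return fundamental_nm / 2.0

print(doubled_wavelength(1064.0))  # 532.0
```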

A sensor’s pulse repetition rate or pulse frequency is the number of laser pulses that can be emitted over time; the unit of measure is hertz (Hz), i.e. pulses per second. Some sensor models allow tuning the pulse rate, providing higher point density at the same relative flight height and speed, but resulting in a lower energy per pulse [17]. Lower energy can diminish the penetration capabilities over canopies, and is therefore taken into consideration when planning a survey, particularly over vegetation.
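A back-of-the-envelope sketch of this trade-off: nominal point density follows from spreading the pulses emitted per second over the area swept per second (assuming a single return per pulse and uniform coverage; numbers are illustrative):

```python
def nominal_point_density(pulse_rate_hz: float, speed_m_s: float,
                          swath_width_m: float) -> float:
    """Rough nominal point density (points per square metre) of an airborne
    strip: pulses emitted per second divided by area swept per second."""
    return pulse_rate_hz / (speed_m_s * swath_width_m)

# e.g. 400 kHz pulse rate, 60 m/s aircraft speed, 600 m swath width:
print(round(nominal_point_density(400_000, 60.0, 600.0), 1))  # 11.1
```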

Canopy penetration by laser is possible due to gaps in the canopy along the pulse direction allowing some pulses to reach the ground and back. It is worth noting that multi-return capability distinguishes lidar from dense image matching (DIM) in photogrammetric methods [18]. Both DIM and lidar deliver point clouds, but the former will not provide points below the canopy surface. The two technologies can complement one another for improved modelling as reported in [19]. For more details there is a rich literature describing sensors and measurement principles [4, 11, 20, 21]. The capability of discriminating multiple returns – thus multiple targets – from a single emitted pulse is a key aspect. It allows obtaining accurate digital terrain models (DTMs) below vegetation, as a result of returns from pulses penetrating canopies, that can be used for specific further applications like in [22, 23]. It also provides information for extracting digital surface models (DSMs) and vertical vegetation structure.
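The value of multiple returns for DSM/DTM extraction can be sketched with a toy gridding step: per cell, the highest return approximates the surface and the lowest the terrain (real ground filtering is far more sophisticated; names and numbers are illustrative):

```python
def dsm_dtm(points, cell=1.0):
    """Toy illustration of surface vs terrain models from a multi-return
    point cloud: per grid cell, the DSM keeps the highest return and the
    DTM the lowest (a crude stand-in for proper ground filtering).
    points: iterable of (x, y, z) tuples."""
    dsm, dtm = {}, {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        dsm[key] = max(dsm.get(key, z), z)
        dtm[key] = min(dtm.get(key, z), z)
    return dsm, dtm

# Two returns of one pulse over a tree: canopy top at 18 m, ground at 0.2 m.
dsm, dtm = dsm_dtm([(5.3, 2.1, 18.0), (5.3, 2.1, 0.2)])
print(dsm[(5, 2)], dtm[(5, 2)])  # 18.0 0.2
```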

Full-waveform

Some sensors digitize the full-waveform (FW), meaning that they record all of the reflected pulse energy as a function of time, by sampling it at regular time intervals – commonly using 1–2 ns sized bins [9]. This allows more sophisticated post-processing methods [9, 24, 25, 26]. Processing FW data can be summarized as determining peaks and their components (e.g. Gaussian components – width and amplitude – Fig. 2) to determine target position and other characteristics. For more details regarding mathematical approaches and methods, [9, 24, 28, 29] provide a thorough overview. The main advantage of processing FW is the ability to apply more sophisticated algorithms to the data, detecting more of the targets that caused the reflection of the laser beam (Fig. 2) and gaining more insight into the target’s characteristics, e.g. by analysing the width of the FW peaks (Fig. 2). Lidar sensors that only provide discrete returns do so by processing the FW on board during acquisition – e.g. using the constant fraction discriminator (CFD) method [30, 31]. This method is faster, but less accurate than post-processing the FW. The disadvantages of dealing with FW data are obvious: a much larger dataset, as the sensor has to store the full waveform for each emitted pulse, and more computing time for processing the data. In some scenarios FW processing increases the number of detected targets considerably, in particular in areas with high vegetation, where 50% more targets are detected compared to discrete return data [24].
Fig. 2

Full-waveform digitizer return energy schema and plot (left) and binned representation of FW energy over forest canopy (right), taken from [27]
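A naive version of the discrete-return extraction that on-board processing performs can be sketched as thresholded local-maximum detection over the binned waveform (real methods such as CFD or Gaussian decomposition are more robust; the synthetic waveform below is illustrative):

```python
def detect_peaks(waveform, threshold):
    """Naive discrete-return extraction from a digitized waveform:
    return the bin indices of local maxima above an amplitude threshold.
    Real FW processing instead fits e.g. Gaussian components to each echo."""
    peaks = []
    for i in range(1, len(waveform) - 1):
        if waveform[i] >= threshold and \
           waveform[i - 1] < waveform[i] >= waveform[i + 1]:
            peaks.append(i)
    return peaks

# Two echoes (canopy and ground) in a synthetic 1 ns binned waveform:
wf = [2, 3, 9, 14, 9, 4, 3, 5, 11, 6, 3, 2]
print(detect_peaks(wf, threshold=8))  # [3, 8]
```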

Platforms

Laser scanning technology is sometimes distinguished by the carrier of the sensor. Statically positioned terrestrial platforms, commonly referred to as terrestrial laser scanners (TLS), emit laser beams around a spherical geometry with the sensor as the centre. Complex structures require multiple scan stations, and thus a TLS survey requires co-registration of each scan to a common reference frame. Laser sensors can be mounted on moving platforms on wheels, e.g. cars and trucks, for mobile mapping. An operator can also be the mobile component and be equipped with a laser scanner, either hand-held or in a special backpack, allowing the operator to carry the sensor where the survey requires. Airborne carriers that can be equipped with laser sensors range from remotely piloted aircraft systems (RPAS) to aircraft and satellites, allowing surveys over a range of heights. Depending on platform and other characteristics, surveys can differ; Fig. 3 is an example showing a high-density point cloud created by a backpack laser scanner and a terrestrial laser scanner. Both surveys took roughly half a day of surveying and half a day of processing. The former provided denser and more complete coverage of the area, whereas the latter had more accurate geo-positioning of points [33].
Fig. 3

backpack laser scanner survey (top) and TLS survey (bottom) of the same area (courtesy of [32])

Products and applications

The raw product of a lidar survey is a file with unstructured point data referred to as a point cloud, i.e., a number of points identified by three-dimensional coordinates and other attributes. Unstructured data implies that neighbours are not implicit like in matrix-like structures such as imagery, adding complexity to processing. Each point represents geometric and spectral characteristics of the surface that causes the reflections of the laser beam. The data are usually stored in formats that can be thought of as tables with columns storing coordinates and attributes – more detail given in section 2. Due to the pulse repetition rate, millions of points can be surveyed in a few minutes, thus creating large volumes of data that can easily reach billions of points. Processing such amounts of unstructured data required new paradigms to be investigated, and the open source community provided its share of solutions.

Processing the point cloud provides derived products that add informative value to data. In close-range applications a common product is a meshed 3D representation, i.e. a model, that can be further analysed visually [34], integrated in a virtual environment for augmented reality or materialized through 3D printers. Terrestrial laser scanners are often used in industrial and urban environments, with point clouds then converted to CAD models, or more sophisticated building information models (BIM) [35], for further analysis by engineering and architecture professionals for multiple applications [36, 37]. In aerial surveys important derived products are DTMs, DSMs and features extracted by various classification methods. Classification can be used to extract building geometries (roof-tops, full block or footprint), infrastructures (e.g. roads, power lines, poles, road signs), tree models and any object that can be discriminated from others using descriptors that can be fed into a classification algorithm [38]. Interpretation of objects in the point cloud leads to further analyses in numerous fields that can be broadly categorized into urban and cultural heritage applications, environment, forestry and agriculture, hazard and risk estimation, cartography, and geomorphology. Classification plays a key role in these applications, as object detection is of foremost importance for understanding urban and natural environments. Recent advances in classification use machine learning and deep learning approaches and are improving the success rate of classifiers also in the realm of point clouds [36, 38, 39, 40]. The final products provide semantics to points [41, 42], a further layer of information (Fig. 4).
Fig. 4

two lidar derived products - (left) 3D model of a cultural heritage relief from close-range lidar with segmented parts [34]; (right) aerial lidar survey in an urban context showing results from classification into facades and building roof-tops, from [36, 39]

The role of open source

The geospatial open source community has worked on various aspects of lidar technology: the development of software, the definition of open formats and the creation of solutions for accessing and processing point cloud data. Geographic information system (GIS) operators normally work with vector or raster data in a 2D scenario (screen, printed maps); therefore tools to convert lidar data to such more approachable formats are important. Open algorithms and tools that allow such processing are available thanks to the effort of the OS community. The following sections provide an overview of the more commonly used solutions offered by the OS community. Data standards, file formats, libraries and application programming interfaces (APIs), all the way to complete software and point cloud data sharing solutions, are reported and discussed. What is provided is, of course, a snapshot at the time of writing, and some less visible OS projects have been left out; nevertheless, it is hoped that the role and importance of the OS effort is validly represented.

Motivated stakeholders have created different genres of social and technical aggregates dedicated to fostering collaboration in the realm of open source for geospatial applications, including laser scanning. The Open Geospatial Consortium has a dedicated Point Cloud Domain Working Group, which is actively organizing dedicated meetings for discussing encoding, optimization and standardization for managing, processing and sharing point cloud data. The Open Source Geospatial Foundation is another federation of peers that promotes collaboration in open geospatial technologies. It supports a number of projects and yearly conferences, including an international conference gathering academic and industrial profiles.

Data formats and open standards

Managing point clouds can be a challenging task if datasets become very large. When users go beyond working with a few small files, the choice of file formats and storage tools becomes crucial. Data can be stored in simple readable files, but can also be archived in dedicated hardware/software infrastructures with dedicated database management systems. Open solutions are available and are an option that can be considered.

File-based formats

A popular and simple way to exchange laser scanning data is via plain text (ASCII) files containing a table with at least XYZ coordinates; columns are separated by a character such as a pipe, comma, space or tab. This has two main drawbacks: it takes up much more disk space than binary formats, and only generic file compression can be applied.
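Reading such ASCII files is straightforward, which explains their popularity despite the drawbacks; a minimal parser sketch (column order and separator are assumptions, since ASCII exports are not standardized):

```python
def parse_xyz_line(line, sep=None):
    """Parse one record of an ASCII point file: the first three columns are
    assumed to be X, Y, Z; any remaining columns are kept as extra
    attributes. sep=None splits on any whitespace; pass ',' for CSV files."""
    fields = line.strip().split(sep)
    x, y, z = (float(v) for v in fields[:3])
    return (x, y, z), fields[3:]

xyz, extra = parse_xyz_line("730241.12 5145632.88 312.45 87 2")
print(xyz)    # (730241.12, 5145632.88, 312.45)
print(extra)  # ['87', '2']  e.g. intensity and return number
```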

A very common exchange format for lidar point cloud data is the LAS (LASer) file format, an open standard maintained by the American Society for Photogrammetry and Remote Sensing (ASPRS) [43]. Storing raw point cloud data can be simplified conceptually as storing a table where each row is a point in the cloud and the columns are attributes that include 3D coordinates (X, Y, Z), intensity, return number, number of returns, classification and others. The format is binary and stored in scan order before being further processed; processing such as merging scans can lose the scan order. The LAS format comes in different versions, at the time of writing from v.1.0 to v.1.4 [44]. LAS files can store only the information mentioned above, or additionally record other information for each point, i.e. GPS time, red-green-blue colours (RGB), near-infrared recorded intensity (NIR) and FW. The latest two versions (1.3 and 1.4) support storing FW data. The complexity of storing FW digitized data also led to the creation of specific formats with open licenses, such as PulseWaves [45] and the one used in the Sorted Pulse Data Library (SPDLib) [46, 47].

There are numerous other formats and standards that are used for specific applications or for optimizing access or storage space. The API web page of the Point Data Abstraction Library (PDAL) gives an overview of most existing formats [48] – see section “Libraries and APIs” for details on PDAL. Other formats are worth mentioning for their specific applications or common use. The entwine point tile (EPT) format is a “hierarchical octree-based point cloud format suitable for real-time rendering and lossless archival” [49], produced by the developers of the Entwine OS library for massive point data archiving and retrieval. It organizes tiles of points using JSON for storing metadata and indexing information. The E57 format is implemented by ASTM International, formerly known as the American Society for Testing and Materials, an international standards organization. E57 is not supported in PDAL at the time of writing, but is convertible using the APIs provided by ASTM and is supported by a long list of software – see [50, 51] for more information. Other data formats have been developed and tested for integration in software or for specific requirements. The FUSION software, for example, requires converting LAS files to its own native indexed binary format called LDA. The sorted pulse data (SPD) format was developed by [46] and is defined within an HDF5 file, providing a pulse-based format natively supporting full-waveform data and spatial indexing.
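As an example of how PDAL is typically driven, its processing pipelines are declared as JSON; below is a hedged sketch (file names are hypothetical) that reads a LAZ file, keeps only ground-classified points (ASPRS class 2) and writes a LAS file:

```python
import json

# Sketch of a PDAL pipeline: read a compressed LAZ file, keep only points
# classified as ground (ASPRS class 2), write a LAS file. PDAL consumes
# such pipelines as JSON, e.g. `pdal pipeline ground.json`.
pipeline = {
    "pipeline": [
        "input.laz",
        {"type": "filters.range", "limits": "Classification[2:2]"},
        {"type": "writers.las", "filename": "ground.las"},
    ]
}
print(json.dumps(pipeline, indent=2))
```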

Vendors of laser sensors also provide their own closed-source software that uses proprietary formats. In this case accessing and converting data to other formats can only be done either with the vendor’s software or with free readers/converters provided by vendors themselves, thus adding a degree of dependence and risking a data vendor lock-in.

Point cloud formats in relational databases

Relational databases have also implemented their own point cloud object types in order to support storing point clouds. Oracle® is mentioned here even if not open source: it has implemented the SDO_PC and SDO_PC_BLK object types in the Oracle Spatial extension to Oracle Enterprise Database. The pointcloud extension to the PostgreSQL relational database implements the PCPATCH object type, which organizes spatially contiguous points in clusters (patches), each stored in a row. It is an extension separate from the popular PostGIS spatial OS library, but can be integrated with it through the “pointcloud_postgis” extension. This helps indexing and accessing information.

Point clouds can easily occupy a large volume of computer memory and, when exceeding the available random-access memory, cannot be processed by loading all points at once. For this reason, managing massive point clouds is a challenging task. A benchmark over datasets ranging from a few million to 640 billion points was undertaken in [52] using Oracle, PostgreSQL, MonetDB and file-based solutions (LAStools – see the next section for details on file-based indexing). The benchmark applied different tasks, such as loading data, subsetting points inside polygons and different types of queries over points. The final conclusions in [52] reported that the best performance is very much dependent on the tasks required; Oracle Exadata, a commercial hardware/software solution, trivially gave the best overall results. PostgreSQL was also tested with points organized in patches as opposed to flat tables, and the former proved more efficient.

Compressed and indexed formats

Efforts have been made to improve archiving space and access time via formats and methods that support respectively compression and indexing. In the OS domain, the LASzip library provides a lossless compressor of the LAS format whose source code is released under the LGPL license. Compression can typically produce a file 12 times smaller than the original LAS file, with faster compression times than other commercial solutions [53]. Compressed files are identified with the LAZ extension. The code for reading and writing LAZ files is open, and the LAZ standard is open as well, thus avoiding lock-in to a single vendor that might change terms of agreement or other policies [54].

Indexing means structuring data and providing rules for fast retrieval of parts of the data. It is an implicit tool in relational databases, and is used for point data storage in relational databases (see previous section). An indexing approach that is seen every day and is easy to understand is Google’s map tiling at different scales for fast viewing of the earth surface via satellite or aerial imagery. Such an index is based on a quad-tree approach. It can be expanded to the third dimension by thinking of tiles becoming blocks, with points belonging to a specific block; such an index is based on an octree approach [55]. Octrees are often used for indexing laser scanning data and improving viewing of point clouds by selecting only points that fall within visible blocks of the octree. The LASindex module is an open-licensed module in the LAStools package that provides adaptive quad-tree based indexing of LAS files over the x and y coordinates of all points. Quad-tree cells store the list of points falling into each cell using intervals of the point indices. The resulting file is identified with the LAX extension, and is usually about 0.01% of the LAS file size.
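The quad-tree idea can be sketched in a few lines: each level halves the cell, and the sequence of quadrant digits forms the cell key used for lookup (a simplification of what spatial indices like LASindex do; function and parameters are illustrative):

```python
def quadtree_key(x, y, xmin, ymin, size, depth):
    """Return the quadrant digits (0-3) locating point (x, y) in a quadtree
    over the square [xmin, xmin+size) x [ymin, ymin+size), one digit per
    level. Deeper levels mean smaller cells and finer spatial indexing."""
    key = []
    for _ in range(depth):
        size /= 2.0
        qx, qy = int(x >= xmin + size), int(y >= ymin + size)
        key.append(qx + 2 * qy)
        xmin += qx * size
        ymin += qy * size
    return key

# A point in the upper-right region of a 100 m tile, indexed to 3 levels:
print(quadtree_key(80.0, 90.0, 0.0, 0.0, 100.0, 3))  # [3, 3, 2]
```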

Software for processing laser scanning data

OS software has given significant contributions in providing tools and algorithms for viewing and processing lidar data. From full-fledged software for end-users to libraries and APIs for developers, the OS community has provided a range of tools for processing point clouds. In the following sections some of the more popular software and libraries are reported, describing their role and applications. The objective is to provide the reader with some references to practical tools that can be used and expanded by forking the source code or by building new modules on top of existing capabilities.

R-CRAN modules

R is a popular OS statistical tool with extensive capabilities from contributed modules. Analysis of remote sensing data is available through dedicated modules that can link to other algorithms available in R, like machine learning for classification [56]. Looking through all of R’s modules, nine out of 13,755 packages contain the word “lidar” or “laser”. Seven are dedicated to vegetation or forestry analysis; the other two, the RCSF and viewshed3d packages, respectively filter ground points using the cloth simulation algorithm from [57] and extract 3D viewsheds from TLS data.

The rLiDAR package has the objective of “providing and testing a new framework for imputing individual tree attributes from field and lidar data in longleaf pine forests” [58, 59]. It is dedicated to forestry applications and allows detecting individual trees, defining their canopy volume and extracting a 3D stand model (see Fig. 5).
Fig. 5

Lidar forestry applications, from point cloud to stand model (courtesy of [60]). From left to right a 3D representation of points with colour scaled height above ground, 2D representation with tree stem position inferred from point cloud (black points) and 3D tree vector model derived from points for a virtual forest

lidR is a module also dedicated to lidar forestry applications [61, 62]. The latest version at the time of writing (2.1) provides methods for reading LAS and LAZ formats, for ground classification, tree segmentation and extraction of descriptive metrics for further analyses. The focus of lidR is the application of fast and efficient algorithms for tree detection and segmentation. Processing multiple files is supported by launching processes on a catalogue that manages multiple datasets. Multi-core processing is supported, allowing all available processor cores to be used. Another useful feature of lidR is an embedded spatial metric calculator, i.e. a user-defined function can be applied to points clustered in a user-defined voxel space.

LAStools

LAStools is a “hybrid” software, in the sense that some modules are fully open source while others are free with a point limitation (1–15 million points depending on the module); if the point limitation is exceeded, random noise is added to the product, e.g. points are shifted in space randomly or images get some pixels turned to black [63]. LAStools is modular in the sense that each tool is available as a standalone executable file. A graphical user interface (GUI) is available to call the modules over a single lidar file or a list of lidar files. Open source modules include tools and libraries for reading and writing all flavours of LAS versions and lidar file formats, including ASCII text files, along with compression, indexing (see section 2.3) and other features for working with laser scanning data. LAStools development has contributed to open standards (LAS, PulseWaves) and other derived open solutions such as the LAZ compressed format [64]. The modularity of the software makes it easy to script workflows that require many steps. Integration with other OS and commercial software and libraries provides more functionality. For example, modules can be called from ArcGIS and QGIS panels, thanks to plugins that bridge these with LAStools. Another example is the useful integration with Potree, a web viewer further discussed in the next sections. LAS files can be easily published in a web portal with the laspublish module, which “creates a LiDAR portal for 3D visualization (and optionally also for downloading) of LAS and LAZ files in any modern Web browser using the WebGL Potree from [65]”.
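The scripting approach mentioned above can be sketched as a chain of module invocations; the sketch below only assembles the commands (flags simplified, file names hypothetical) and would be executed with subprocess calls where LAStools is installed:

```python
# Hedged sketch of a scripted LAStools workflow: build a spatial index,
# classify ground points, then rasterize a DTM from the ground class.
tile = "strip_042.laz"  # hypothetical input tile
workflow = [
    ["lasindex", "-i", tile],                       # build .lax spatial index
    ["lasground", "-i", tile, "-o", "ground.laz"],  # classify ground points
    ["las2dem", "-i", "ground.laz", "-keep_class", "2",
     "-o", "dtm.tif"],                              # grid ground into a DTM
]
for cmd in workflow:
    # replace print with subprocess.run(cmd) when LAStools is installed
    print(" ".join(cmd))
```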

GRASS GIS

GRASS GIS has long been a popular OS tool among researchers using geospatial data [66]. Importing lidar data in LAS and text format is possible in GRASS using the following importers: v.in.ascii and v.in.lidar for vector formats, and r.in.xyz and r.in.lidar for raster formats. GRASS provides three modules for filtering lidar data that are to be used in succession: (i) v.lidar.edgedetection – the first step, which detects object edges using a point cloud as input and assigns each point one of three classes: edge, terrain or unknown; (ii) v.lidar.growing – the second step, which identifies points inside edges and defines object and terrain with single and double returns; (iii) v.lidar.correction – this last step can be run several times; it uses a bilinear spline interpolation to create a surface, and then uses the residuals (distances of points from the surface) to decide whether points belong to the terrain or not, depending on a regularization parameter set by the user. These steps allow point classification for separating buildings, vegetation and the ground plane [67, 68]. Bare earth filtering from laser scanning point clouds is a main investigation topic due to the importance of DTMs for further analyses; it is available in GRASS [67], and a comparison of methods is presented in [69].

CloudCompare

CloudCompare [70] is a point cloud and mesh viewer with many editing and processing modules. It reads a vast number of formats and allows adding attributes (referred to as “scalars” in this software) created from processing outputs of the point cloud. CloudCompare was initially created to identify significant changes between a pair of point clouds, but has now been enriched with several other features. For example, CloudCompare provides robust co-registration, i.e. it aligns point clouds in a common coordinate reference frame, using the iterative closest point (ICP) algorithm on overlapping parts of a point cloud. The recent alpha version 2.11 (at the time of writing) provides means to compute geometric features (Fig. 6). Geometric features provide information related to a point and its neighbours, thus shape-related descriptors, allowing improved point classification through machine learning and deep learning [38, 39, 40, 71]. Points can also be meshed, meaning that surfaces can be created through Delaunay triangulation (minimum circumcircle criterion [72]). Distances between two point clouds, and from points to a meshed surface, can be calculated with CloudCompare; for example, the distance of points above ground from the meshed ground surface can provide information on the heights of buildings, trees, poles etc. All modules are accessed through a GUI that allows a high level of interaction between tools, point clouds and meshed surfaces.
Fig. 6

CloudCompare GUI (right) and panel for calculation of geometric features for each point, using a local neighbourhood (left)
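The cloud-to-cloud distance computation described above can be sketched as a nearest-neighbour search. This is a brute-force illustration of the idea only; CloudCompare accelerates the search with an octree and can also measure against a mesh rather than raw points.

```python
from math import dist  # Euclidean distance (Python >= 3.8)

def cloud_to_cloud(source, reference):
    """For each point in `source`, return the distance to its nearest
    neighbour in `reference` (brute force, O(n*m))."""
    return [min(dist(p, q) for q in reference) for p in source]

# toy example: distances of roof points from a ground cloud
# approximate the building height at those locations
ground = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
roof = [(0, 0, 3.2), (1, 0, 3.1)]
print(cloud_to_cloud(roof, ground))  # approximately [3.2, 3.1]
```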

Fusion

Fusion is a software package providing well-documented tools for processing lidar data, mainly for forestry applications. An extensive manual [73] and tutorials are available, documenting features and providing worked examples. Fusion provides a GUI with two options, for viewing data in 2D and 3D; processing is done through the command line. It implements the ability to filter ground points, extract metrics over a grid and limit calculations to field plots [73]. Grid metrics are statistics over point clusters determined by dividing the point cloud into a grid of user-defined resolution. For example, height and intensity distributions within each grid cell, or within the limits of a field plot, provide features for predicting forest attributes such as crown biomass, stem biomass and aboveground biomass [74].
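The idea behind grid metrics can be sketched in a few lines of Python. This is a toy version: Fusion's actual GridMetrics tool is run from the command line and computes dozens of height and intensity statistics per cell, while here only count, mean and maximum height are shown.

```python
from statistics import mean

def grid_metrics(points, cell=10.0):
    """Bucket (x, y, z) points into square cells of side `cell` and
    return simple height statistics per cell."""
    cells = {}
    for x, y, z in points:
        cells.setdefault((int(x // cell), int(y // cell)), []).append(z)
    return {k: {"n": len(v), "mean": mean(v), "max": max(v)}
            for k, v in cells.items()}

pts = [(1, 1, 2.0), (3, 2, 8.0), (12, 1, 15.0)]
print(grid_metrics(pts))
# cell (0, 0) holds two points, cell (1, 0) one point
```

The per-cell statistics become the predictor variables in regression models for forest attributes, as in [74].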

Others

Projects that start as theses or PhD investigations in academic environments sometimes step up and provide open source code that greatly benefits the community. 3DTK – The 3D Toolkit – is an example that provided releases until 2017, including processing routines and a user interface. It features 6D simultaneous localization and mapping (SLAM) and shape detection methods for extraction of geometries (e.g. planes) [75, 76]. 3DForest was started as a PhD topic in the Department of Geoinformation Technologies of Mendel University in Brno and is dedicated to the analysis of high-density lidar data – i.e. from TLS and drone lidar – for forest applications [77]. A complete list of OS projects is beyond the scope of this work. Projects have a short or long lifespan depending on the support and participation of their authors. New projects will start in the future, and every effort in this direction will bring new possibilities in the realm of lidar data processing and management.

Full-waveform processing tools

Full-waveform lidar data requires specific software for processing the waveform, i.e. the recorded energy intensity as a function of time. Few OS software packages exist for processing FW data, as it is not yet a commonly used format. It is worth mentioning PulseWaves/PulseTools [45] and FullAnalyze [78]: the former provides tools to process the PulseWaves format, the latter provides analysis tools to extract targets from FW data. SPD is a file format [46], but also an OS software package [47] for reading/writing and decomposing waveforms. Decomposing means extracting peaks from the waveform, which are caused by surfaces reflecting the laser beam. DASOS [79] is a software package developed for aligning FW data and hyperspectral imagery for classification purposes [80, 81]. It features three main processing steps: (i) creating 3D meshes, (ii) calculating 2D metrics from both lidar and hyperspectral imagery, (iii) creating feature vectors for further classification.
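Peak extraction from a waveform can be illustrated with a naive local-maximum search. This is a deliberately crude stand-in: production tools typically fit Gaussians to the waveform (Gaussian decomposition) rather than picking raw maxima, and the noise threshold here is an arbitrary illustrative value.

```python
def extract_peaks(waveform, noise=10):
    """Return (sample_index, amplitude) pairs for local maxima above a
    noise threshold - each peak corresponds to a surface that
    reflected part of the laser pulse."""
    peaks = []
    for i in range(1, len(waveform) - 1):
        a = waveform[i]
        if a > noise and waveform[i - 1] < a >= waveform[i + 1]:
            peaks.append((i, a))
    return peaks

# a pulse hitting two surfaces (e.g. canopy top, then ground)
wf = [2, 3, 9, 40, 22, 7, 4, 15, 60, 18, 3]
print(extract_peaks(wf))  # [(3, 40), (8, 60)]
```

Converting the sample indices to ranges (via the speed of light and the sampling interval) yields the discrete returns that conventional lidar formats store.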

Remote access of laser scanning data

Web viewers

Sharing lidar data via web portals is a logical necessity in an interconnected world. Internet speed will increase over time as technology advances; within a few years broadband speeds are expected to nearly double [82]. Lidar data visual analysis and limited editing can be done through the web, leveraging server-based services, many of which are distributed as OS. WebGL support in browsers allows viewing large amounts of point data, and many solutions take advantage of openly licensed WebGL frameworks to provide users with tools for viewing and processing large point clouds. The advantage is clear: data and tools can be shared with the public without requiring clients to install software or transfer huge amounts of data in advance.

The ability to stream large point clouds over the web is very important in view of the increasing availability of massive datasets. Potree, developed by [65], provides viewing capabilities for very large point clouds using JavaScript through Node.js. It requires converting point clouds into a custom file format that leverages octree indices for fast visualization. Structuring and indexing point data for visualization purposes is a necessary step, as common formats for laser scanning data are not optimized for this task. A notable feature of Potree is segmented progressive streaming, i.e. points are loaded over time depending on the level of detail and on the density of already loaded points. Plasio is an OS project providing a WebGL-based viewer of LAS files, loading them through drag-and-drop or streaming them from a server (Greyhound – see next section) [83]. Users uploading their own data are of course limited by the upload speed of their network connection, a bottleneck that should ease as bandwidth increases. NASA World Wind is an open source virtual 3D globe [84] that supports viewing, filtering and basic editing of lidar data thanks to an ad hoc extension that loads LAS files [85].
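The octree indexing that makes such viewers fast can be illustrated by computing the node key for a point. This is a generic sketch of spatial subdivision, not Potree's actual on-disk format: at each level the bounding cube is split into eight octants and one digit (0–7) is appended to the key.

```python
def octree_key(point, bbox_min, size, depth):
    """Octree node key for a point inside a cubic bounding box of side
    `size` anchored at `bbox_min`: one octant digit per level, where
    bit 0 = upper x half, bit 1 = upper y half, bit 2 = upper z half."""
    x, y, z = (p - m for p, m in zip(point, bbox_min))
    key = ""
    for _ in range(depth):
        size /= 2.0
        digit = 0
        if x >= size: digit |= 1; x -= size
        if y >= size: digit |= 2; y -= size
        if z >= size: digit |= 4; z -= size
        key += str(digit)
    return key

print(octree_key((75.0, 10.0, 40.0), (0, 0, 0), 100.0, 3))  # "154"
```

A viewer requests only the shallow nodes first (coarse level of detail) and descends deeper exclusively inside the current field of view, which is what makes progressive streaming of billions of points feasible.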

Web services

Many end-users of geospatial data are accustomed to accessing raster and vector data through web services, i.e. the Web Map Service (WMS), Web Coverage Service (WCS) and Web Feature Service (WFS), the first two for raster and the last for vector data. These standards have been pioneered by the Open Geospatial Consortium (OGC) and simplify data management down to attaching a URL to access data. As of now, the only OS framework for serving point clouds is Greyhound, a dynamic point cloud server architecture that performs progressive level-of-detail streaming of indexed resources on demand. It is integrated with the Entwine OS library, which organizes point cloud data and prepares it for streaming. Data available as a service can then be fed to the web viewers mentioned in the previous section.
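To show how far "attaching a URL" goes for the established raster standards, the sketch below builds a WMS 1.3.0 GetMap request with Python's standard library. The server address and layer name are hypothetical, for illustration only; no comparable OGC standard yet covers point clouds.

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width, height):
    """Compose an OGC WMS 1.3.0 GetMap request URL for a raster layer."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # min/max coordinates
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# hypothetical endpoint serving a lidar-derived hillshade as raster
url = wms_getmap_url("https://example.org/wms", "dtm_hillshade",
                     (45.0, 11.0, 46.0, 12.0), 512, 512)
print(url)
```

Fetching that URL returns a ready-made PNG; Greyhound plays an analogous role for point clouds, but as a project-specific protocol rather than an adopted standard.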

Open data portals

The amount of lidar data has grown exponentially as the number of sensors and their usage have increased. Open software, data and standards is a paradigm that, among other things, strives to share data between users. Earth observation satellite imagery, cartographic data and a plethora of other spatial data are now easily found in open data portals. Sharing laser scanning data through open portals is more challenging than sharing typical vector or raster spatial formats for the following reasons: (i) data volume is high, as a survey typically comprises millions to billions of points; (ii) data is unstructured and a dataset often includes more than one data file; (iii) remote visualization requires indexing, and network speed allows streaming only the part of the data in the field of view; (iv) point cloud data formats have been tested less than raster or vector formats. Web viewers and web services (see previous sections) support open data portals in visualization and streaming of the data, applying different solutions. Sources of open lidar data on the web include the OpenTopography portal [86], the USGS Earth Explorer [87] and the open data library PANGAEA.

National datasets are being shared by some countries that decide to go open, and wall-to-wall lidar surveys are available in a few of them. Some examples providing web access under the open Creative Commons license CC BY 4.0, unless otherwise noted, are the Netherlands (CC0 1.0) [88], Finland [89], Latvia (CC0 1.0) [90], Spain [91], England (Open Government Licence) [92] and Sweden (CC0 1.0) [93]. Other countries have partly covered their territory with lidar surveys, depending on single regions or areas of interest. Italy, for example, has its entire coast and areas of high hydro-geological risk covered with lidar data under a CC BY 3.0 license [94], but the data cannot be directly downloaded: a request with tile identifiers must be sent via email and a small processing fee applies (see [95] for details). Mixed cases with different open/closed licenses and hard/easy access to data are present in other countries.

Libraries and APIs

Libraries play a key role in supporting developers in implementing new methods for processing lidar data, without rewriting drivers for reading and writing data or re-implementing algorithms that have already been shared under open licenses.

Reading and writing existing lidar data formats is the first essential step for further development. The Point Data Abstraction Library (PDAL) [48] mirrors the popular GDAL/OGR library for point data, providing read/write access to point cloud formats. It supersedes the LibLAS library [96], which was oriented solely to reading and writing the different LAS versions.
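To appreciate what such libraries abstract away, the sketch below decodes a few fields of a LAS public header using only the standard library. The offsets follow the ASPRS LAS specification; a real reader such as PDAL additionally handles all format versions, variable length records, coordinate scaling and the point records themselves.

```python
import struct

def las_header_info(buf):
    """Decode a small subset of the LAS public header block."""
    if buf[0:4] != b"LASF":  # file signature
        raise ValueError("not a LAS file")
    ver_major, ver_minor = buf[24], buf[25]
    point_format = buf[104]              # point data record format
    record_len, = struct.unpack_from("<H", buf, 105)
    n_points, = struct.unpack_from("<I", buf, 107)  # legacy point count
    return {"version": f"{ver_major}.{ver_minor}",
            "point_format": point_format,
            "record_length": record_len,
            "points": n_points}

# minimal synthetic header, for demonstration only
hdr = bytearray(227)
hdr[0:4] = b"LASF"
hdr[24], hdr[25], hdr[104] = 1, 2, 1        # LAS 1.2, point format 1
struct.pack_into("<H", hdr, 105, 28)        # 28-byte point records
struct.pack_into("<I", hdr, 107, 1_000_000) # one million points
print(las_header_info(hdr))
```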

The Point Cloud Library (PCL) is a large-scale open project providing an API with many algorithms [97, 98]. It provides functions for numerous basic tasks such as reading/writing, tree indexing, filtering outliers and calculating normals. Advanced functions are provided for recognition, registration and segmentation, and for extracting different types of point descriptors that can be used for classification (e.g. Rotational Projection Statistics – RoPS, Fast Point Feature Histograms – FPFH, Globally Aligned Spatial Distribution – GASD and many others).
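As an illustration of one such basic task, the sketch below mimics the logic of statistical outlier filtering in pure Python: points whose mean distance to their k nearest neighbours is anomalously large are dropped. It is brute force and unoptimized, unlike PCL's k-d tree based C++ implementation, and `k` and `mult` are illustrative values.

```python
from math import dist
from statistics import mean, stdev

def remove_outliers(points, k=3, mult=1.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds (global mean + mult * standard deviation)."""
    mean_d = []
    for p in points:
        dists = sorted(dist(p, q) for q in points if q is not p)
        mean_d.append(mean(dists[:k]))
    thresh = mean(mean_d) + mult * stdev(mean_d)
    return [p for p, d in zip(points, mean_d) if d <= thresh]

# four clustered points plus one isolated noise point
cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (50, 50, 50)]
print(remove_outliers(cloud))  # the isolated point is removed
```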

Conclusions

The previous sections show that the effort of the OS community in providing solutions in terms of standards, software and libraries in the realm of lidar technology has been, and still is, significant. Worth mentioning also are the advantages that open tools provide to education: free and accessible means for visualization and processing, easy access to point cloud data and, most importantly, a base for further development by other researchers who want to build functionalities on top of existing solutions [32, 33]. Lidar technology is undergoing active development in terms of new sensors and new carriers with increased capabilities [99]. Breakthrough single-photon counting technology provides sensors capable of several orders of magnitude higher pulse densities. A recent photon-counting sensor that will provide precious open data to the scientific community is the ATLAS lidar on the ICESat-2 mission deployed by NASA in September 2018. Multi-wavelength lidar sensors provide multi-spectral information for each point. Remotely piloted systems, e.g. drones and rovers, will increasingly be equipped with laser sensors. Simultaneous localization and mapping (SLAM) algorithms provide automatic registration of clouds from a moving sensor without requiring position information from GNSS. Considerable research also targets solid-state lidar, which can potentially lower costs and increase the diffusion of lidar-equipped carriers. With this amount of new developments in the lidar sensor realm, open science can provide important contributions in the data fusion of lidar and optical information [18, 19, 100]. Another main challenge for the future will be to provide optimized tools for processing and mining information from very large volumes of point data. Extracting information from lidar is key, and it requires tools to convert a point cloud into GIS-enabled features for further processing in decision support across a vast range of applications [101].
Data mining for lidar datasets is improving thanks to new catalogues [102] and collaborative strategies for sharing open data through portals and services [103]. Fostering collaboration among scientists and developers who provide and/or use open products in their work will add value to lidar science.

Notes

Acknowledgements

Not applicable.

Author’s contributions

The main author is the only contributor. The author read and approved the final manuscript.

Funding

No funding provided for this article.

Competing interests

The author declares that he has no competing interests.

References

  1. 1.
    Granshaw SI. Photogrammetric terminology: third edition. Photogramm Rec. 2016;31:210–52.CrossRefGoogle Scholar
  2. 2.
    Vosselman G, Maas H-G. Airborne and terrestrial laser scanning. United Kingdom: CRC Press; 2010.Google Scholar
  3. 3.
    Skaloud J, Schaer P, Stebler Y, Tomé P. Real-time registration of airborne laser data with sub-decimeter accuracy. ISPRS J Photogramm Remote Sens. 2010;65:208–17 Available from: http://linkinghub.elsevier.com/retrieve/pii/S0924271609001580. [cited 2014 Jun 16].CrossRefGoogle Scholar
  4. 4.
    Petrie G, Toth C. Introduction to Laser Ranging, Profiling, and Scanning. Topogr Laser Ranging Scanning. CRC Press. 2008:1–28.  https://doi.org/10.1201/9781420051438.ch1.Google Scholar
  5. 5.
    May NC, Toth CK. Point positioning accuracy of airborne LIDAR systems: a rigorous Analysis. Photogramm image anal 2007 (PIA07). Int Arch Photogramm Remote Sens Spat Inf Sci. 2007;XXXVI:107–211.Google Scholar
  6. 6.
    Shuman CA, Zwally HJ, Schutz BE, Brenner AC, DiMarzio JP, Suchdeo VP, et al. ICESat Antarctic elevation data: preliminary precision and accuracy assessment. Geophys Res Lett. 2006;33:L07501.Google Scholar
  7. 7.
    Boehler W, Vicent MB, Marbs A. Investigating Laser Scanner Accuracy. Int Arch Photogramm Remote Sens Spat Inf Sci. 2003. p. 1–10.Google Scholar
  8. 8.
    Habib A, Bang KI, Kersting AP, Lee D-C. Error budget of Lidar systems and quality control of the derived data. Photogramm Eng Remote Sens. 2009;75:1093–108.CrossRefGoogle Scholar
  9. 9.
    Mallet C, Bretar F. Full-waveform topographic lidar: State-of-the-art. ISPRS J Photogramm Remote Sens. 2009;64:1–16 Available from: http://www.sciencedirect.com/science/article/pii/S0924271608000993. [cited 2014 Mar 21].CrossRefGoogle Scholar
  10. 10.
    Bin X, Fangfei L, Keshu Z, Zongjian L. Laser footprint size and pointing precision analysis for LiDAR systems. Int Arch Photogramm Remote Sens Spat Inf Sci. 2008;XXXVII:331–336.Google Scholar
  11. 11.
    Wehr A, Lohr U. Airborne laser scanning - an introduction and overview. ISPRS J Photogramm Remote Sens. 1999;54:68–82.CrossRefGoogle Scholar
  12. 12.
    Pfennigbauer M, Ullrich A. Multi-Wavelength Airborne Laser Scanning. Int LiDAR Mapp Forum. 2011. p. 1–10.Google Scholar
  13. 13.
    Baldridge AM, Hook SJ, Grove CI, Rivera G. The ASTER spectral library version 2.0. Remote Sens Environ. 2009;113:711–5 Available from: https://linkinghub.elsevier.com/retrieve/pii/S0034425708003441.CrossRefGoogle Scholar
  14. 14.
    Prahl J. Optical absorption of water compendium. Oregon Med Laser Cent. 1998; Available from: http://omlc.ogi.edu/spectra/water/abs/index.html. [cited 2019 Sep 15].
  15. 15.
    Wang C, Li Q, Liu Y, Wu G, Liu P, Ding X. A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry. ISPRS J Photogramm Remote Sens. 2015;101:22–35.CrossRefGoogle Scholar
  16. 16.
    Teo T-A, Wu H-M. Analysis of Land Cover Classification Using Multi-Wavelength LiDAR System. Appl Sci. 2017;7:663 Available from: http://www.mdpi.com/2076-3417/7/7/663.CrossRefGoogle Scholar
  17. 17.
    Lee C-C, Wang C-K. Effect of flying altitude and pulse repetition frequency on laser scanner penetration rate for digital elevation model generation in a tropical forest. GIScience Remote Sens. 2018;55:817–38 Available from: https://www.tandfonline.com/doi/full/10.1080/15481603.2018.1457131. [cited 2018 Dec 8].CrossRefGoogle Scholar
  18. 18.
    Martinez-Rubi O, Nex F, Pierrot-Deseilligny M, Rupnik E. Improving FOSS photogrammetric workflows for processing large image datasets. Open Geospatial Data, Softw Stand. 2017;2:12 Available from: http://opengeospatialdata.springeropen.com/articles/10.1186/s40965-017-0024-5.CrossRefGoogle Scholar
  19. 19.
    Mandlburger G, Wenzel K, Spitzer A, Haala N, Glira P, Pfeifer N. Improved topographic models via concurrent airborne Lidar and dense image matching. ISPRS Ann Photogramm Remote Sens Spat Inf Sci. 2017;4:259–66.CrossRefGoogle Scholar
  20. 20.
    Thiel K, Wehr A. Performance Capabilities of Laser-Scanners-An Overview and Measurement Principle Analysis. ISPRS - Int Arch Photogramm Remote Sens Spat Inf Sci. 2004;36:14–8 Available from: http://www.nav.uni-stuttgart.de/navigation/publikationen/fachartikel/2004/Performance_capabilities.pdf.Google Scholar
  21. 21.
    Oude Elberink S, Velizhev A, Lindenbergh L, Kaasalainen S, FP. Preface. Int Arch Photogramm Remote Sens Spat Inf Sci. 2015;XL-3/W3:1 Available from: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-3-W3/1/2015/.CrossRefGoogle Scholar
  22. 22.
    Petrasova A, Mitasova H, Petras V, Jeziorska J. Fusion of high-resolution DEMs for water flow modeling. Open Geospatial Data, Softw Stand. 2017;2:6 Available from: http://opengeospatialdata.springeropen.com/articles/10.1186/s40965-017-0019-2.CrossRefGoogle Scholar
  23. 23.
    Petras V, Newcomb DJ, Mitasova H. Generalized 3D fragmentation index derived from lidar point clouds. Open Geospatial Data Softw Stand. 2017;2:9 Available from: http://opengeospatialdata.springeropen.com/articles/10.1186/s40965-017-0021-8.CrossRefGoogle Scholar
  24. 24.
    Chauve A, Mallet C, Bretar F, Durrieu S, Deseilligny MP, Puech W. Processing full-waveform Lidar data: Modelling raw signals. Int Arch Photogramm Remote Sens Spat Inf Sci. 2007;XXXVI:102–107.Google Scholar
  25. 25.
    Guo L, Chehata N, Mallet C, Boukir S. Relevance of airborne lidar and multispectral image data for urban scene classification using random forests. ISPRS J Photogramm Remote Sens. 2011;66:56–66 Available from: https://linkinghub.elsevier.com/retrieve/pii/S0924271610000705.CrossRefGoogle Scholar
  26. 26.
    Fieber KD, Davenport IJ, Ferryman JM, Gurney RJ, Walker JP, Hacker JM. Analysis of full-waveform LiDAR data for classification of an orange orchard scene. ISPRS J Photogramm Remote Sens. 2013;82:63–82 Available from: https://linkinghub.elsevier.com/retrieve/pii/S0924271613001238. [cited 2014 Mar 29].CrossRefGoogle Scholar
  27. 27.
    Pirotti F, Guarnieri A, Vettore A. Analysis of correlation between full-waveform metrics, scan geometry and land-cover: an application over forests. ISPRS Ann Photogramm Remote Sens Spat Inf Sci M. Scaioni, R. C. Lindenbergh, S. Oude Elberink, D. Schneider, F. Pirotti; 2013;II-5/W2:235–40. Available from: http://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-5-W2/235/2013/isprsannals-II-5-W2-235-2013.pdf. [cited 2014 Mar 31]
  28. 28.
    Wagner W. Radiometric calibration of small-footprint full-waveform airborne laser scanner measurements: Basic physical concepts. ISPRS J Photogramm Remote Sens. 2010;65:505–13 Available from: http://www.sciencedirect.com/science/article/pii/S0924271610000626. [cited 2014 Mar 26].CrossRefGoogle Scholar
  29. 29.
    Wagner W, Ullrich A, Ducic V, Melzer T, Studnicka N. Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner. ISPRS J Photogramm Remote Sens. 2006;60:100–12 Available from: http://linkinghub.elsevier.com/retrieve/pii/S0924271605001024. [cited 2014 Mar 25].CrossRefGoogle Scholar
  30. 30.
    Hetherington D. Topographic Laser Ranging and Scanning: Principles and Processing , edited by J. Shan and C. K. Toth, Taylor and Francis, Boca Raton, USA, 2009, xi + 574 pp., £82, hardback (ISBN 13: 978–1–4200-5142-1). Int J Remote Sens. 2010;31:3333–4 Available from: https://www.tandfonline.com/doi/full/10.1080/01431160903112612.CrossRefGoogle Scholar
  31. 31.
    Stilla U, Jutzi B. Waveform Analysis for small-footprint pulsed laser systems. In: Shan J, Toth CK, editors. Topogr laser ranging scanning Princ process. Boca Raton: CRC Press; 2009. p. 215–34.Google Scholar
  32. 32.
    Rutzinger M, Höfle B, Lindenbergh R, Oude Elberink S, Pirotti F, Sailer R, et al. Close-Range Sensing Techniques in Alpine Terrain. ISPRS Ann Photogramm Remote Sens Spat Inf Sci. 2016;III–6:15–22 Available from: http://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/III-6/15/2016/.CrossRefGoogle Scholar
  33. 33.
    Rutzinger M, Bremer M, Höfle B, Hämmerle M, Lindenbergh R, Oude Elberink S, et al. Training in Innovative Technologies for Close-Range Sensing in Alpine Terrain. ISPRS Ann Photogramm Remote Sens Spat Inf Sci. 2018;IV(2):239–46 Available from: https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/IV-2/239/2018/.CrossRefGoogle Scholar
  34. 34.
    Pirotti F, Guarnieri A, Vettore A. An open source application for interactive exploration of cultural heritage 3D models on the web. ISPRS - Int Arch Photogramm Remote Sens Spat Inf Sci. 2009; Available from: https://www.isprs.org/proceedings/XXXVIII/5-W1/pdf/guarnieri_etal.pdf. Accessed 02 Dec 2018.
  35. 35.
    Barazzetti L, Banfi F, Brumana R, Gusmeroli G, Previtali M, Schiantarelli G. Cloud-to-BIM-to-FEM: structural simulation with accurate historic BIM from laser scans. Simul Model Pract Theory. 2015;57:71–87.CrossRefGoogle Scholar
  36. 36.
    Pirotti F, Zanchetta C, Previtali M, Della Torre S. Detection of building roofs and facades from aerial laser scanning data using deep learning. ISPRS - Int Arch Photogramm Remote Sens Spat Inf Sci. 2019;XLII-2/W11:975–80.CrossRefGoogle Scholar
  37. 37.
    Previtali M, Barazzetti L, Brumana R, Cuca B, Oreni D, Roncoroni F, et al. Automatic façade modelling using point cloud data for energy-efficient retrofitting. Appl Geomatics. 2014;6:95–113.CrossRefGoogle Scholar
  38. 38.
    Weinmann M, Jutzi B, Hinz S, Mallet C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J Photogramm Remote Sens. 2015;105:286–304 Available from: https://linkinghub.elsevier.com/retrieve/pii/S0924271615000349.CrossRefGoogle Scholar
  39. 39.
    Pirotti F, Tonion F. Classification of Aerial Laser Scanning Point Clouds Using Machine Learning: A Comparison Between Random Forest and Tensorflow. ISPRS - Int Arch Photogramm Remote Sens Spat Inf Sci. 2019;XLII-2/W13:1105–11 Available from: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2-W13/1105/2019/.CrossRefGoogle Scholar
  40. 40.
    Weinmann M, Weinmann M, Mallet C, Brédif M. A Classification-Segmentation Framework for the Detection of Individual Trees in Dense MMS Point Cloud Data Acquired in Urban Areas. Remote Sens. 2017;9:277 Available from: http://www.mdpi.com/2072-4292/9/3/277.CrossRefGoogle Scholar
  41. 41.
    Vosselman G. Point cloud segmentation for urban scene classification. ISPRS - Int Arch Photogramm Remote Sens Spat Inf Sci. 2013;XL-7/W2:257–262.CrossRefGoogle Scholar
  42. 42.
    Oude Elberink S, Vosselman G. Quality analysis on 3D building models reconstructed from airborne laser scanning data. ISPRS J Photogramm Remote Sens. 2011;66:157–165.CrossRefGoogle Scholar
  43. 43.
    ASPRS. LASer (LAS) File Format Exchange Activities – ASPRS. Available from: https://www.asprs.org/divisions-committees/lidar-division/laser-las-file-format-exchange-activities. [cited 2019 Feb 13]
  44. 44.
    The Imaging and Geospatial Information Society, ASPRS The American Society for Photogrammetry & Remote Sensing. LAS Specification Version 1.4-R13. Bethesda; 2013.Google Scholar
  45. 45.
    Isenburg M. Recent advances in full waveform LiDAR aquisition and processing via the pulsewaves format and API. ACRS 2015 - 36th Asian Conf Remote Sens Foster Resilient Growth Asia, Proc. 2015.Google Scholar
  46. 46.
    Bunting P, Armston J, Lucas RM, Clewley D. Sorted pulse data (SPD) library. Part I: A generic file format for LiDAR data from pulsed laser systems in terrestrial environments. Comput Geosci. 2013;56:197–206 Available from: https://www.sciencedirect.com/science/article/pii/S0098300413000332?via%3Dihub. [cited 2019 Feb 10].CrossRefGoogle Scholar
  47. 47.
    Bunting P, Armston J, Clewley D, Lucas RM. Sorted pulse data (SPD) library—Part II: A processing framework for LiDAR data from pulsed laser systems in terrestrial environments. Comput Geosci. 2013;56:207–15 Available from: https://www.sciencedirect.com/science/article/pii/S0098300413000241. [cited 2019 Feb 10].CrossRefGoogle Scholar
  48. 48.
    PDAL - Point Data Abstraction Library — pdal.io. Available from: https://pdal.io/index.html. [cited 2019 Feb 11]
  49. 49.
    Manning C. Entwine — entwine.io. Available from: https://entwine.io/. [cited 2019 Feb 12]
  50. 50.
    E57.04 3D Imaging System File Format Committee. libE57: Software Tools for Managing E57 files (ASTM E2807 standard). Available from: http://www.libe57.org/. [cited 2019 Feb 25]
  51. 51.
    Huber D. The ASTM E57 file format for 3D imaging data exchange. In: Beraldin JA, Cheok GS, MB MC, Neuschaefer-Rube U, Baskurt AM, IE MD, et al., editors. Three-Dimensional Imaging, Interact Meas; 2011. p. 78640A. Available from: http://proceedings.spiedigitallibrary.org/proceeding.aspx?doi=10.1117/12.876555.CrossRefGoogle Scholar
  52. 52.
    van Oosterom P, Martinez-Rubi O, Ivanova M, Horhammer M, Geringer D, Ravada S, et al. Massive point cloud data management: design, implementation and execution of a point cloud benchmark. Comput Graph. 2015;49:92–125.CrossRefGoogle Scholar
  53. 53.
    Isenburg M. LASzip: lossless compression of LiDAR data. Photogramm Eng Remote Sensing Am Soc Photogrammetry. 2013;79:209–17.CrossRefGoogle Scholar
  54. 54.
    McKenna J, Smith M, Čepický J, Cannata M, Craciunescu V, van den Eijnden B, et al. LIDAR Format Letter - OSGeo. 2015. Available from: http://wiki.osgeo.org/wiki/LIDAR_Format_Letter. [cited 2018 Dec 4]
  55. 55.
    Samet H. Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann; 2006. Available from: http://www.ncbi.nlm.nih.gov/pubmed/4870550.Google Scholar
  56. 56.
    Piragnolo M, Masiero A, Pirotti F. Open source R for applying machine learning to RPAS remote sensing images. Open Geospatial Data, Softw Stand. 2017;2:16 Available from: http://opengeospatialdata.springeropen.com/articles/10.1186/s40965-017-0033-4.CrossRefGoogle Scholar
  57. 57.
    Zhang W, Qi J, Wan P, Wang H, Xie D, Wang X, et al. An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sens. 2016;8:501.CrossRefGoogle Scholar
  58. 58.
    Silva CA, Crookston NL, Hudak AT, Vierling LA. rLiDAR: An R package for reading, processing and visualizing lidar (Light Detection and Ranging) data. Available from: http://cran.r-roject.org/web/packages/rLiDAR/index.html. Accessed 02 Dec 2018.
  59. 59.
    Silva CA, Hudak AT, Vierling LA, Loudermilk EL, O’Brien JJ, Hiers JK, et al. Imputation of Individual Longleaf Pine (Pinus palustris Mill.) Tree Attributes from Field and LiDAR Data. Can J Remote Sens. 2016;42:554–73.CrossRefGoogle Scholar
  60. 60.
    Silva CA, Klauberg C, Mohan MM, Bright BC. LiDAR Analysis in R and rLiDAR for forestry applications LiDAR remote sensing of Forest resources view project. 2018. Available from: https://www.researchgate.net/publication/324437694 Google Scholar
  61. 61.
    Pirotti F, Kobal M, Roussel JR. A Comparison of Tree Segmentation Methods Using Very High Density Airborne Laser Scanner Data. Int Arch Photogramm Remote Sens Spat Inf Sci. 2017;XLII-2/W7:285–90 Available from: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2-W7/285/2017/.CrossRefGoogle Scholar
  62. 62.
    Marchi N, Pirotti F, Lingua E. Airborne and Terrestrial Laser Scanning Data for the Assessment of Standing and Lying Deadwood: Current Situation and New Perspectives. Remote Sens. Multidisciplinary Digital Publishing Institute; 2018;10:1356. Available from: http://www.mdpi.com/2072-4292/10/9/1356. [cited 2018 Aug 31]
  63. 63.
    Isenburg M. rapidlasso GmbH | fast tools to catch reality. Available from: https://rapidlasso.com/. [cited 2019 Feb 14]
  64. 64.
    Hug C, Krzystek P, Fuchs W. Advanced Lidar data processing with Lastools. ISPRS - Int Arch Photogramm Remote Sens Spat Inf Sci. 2004;XXXV:1–6.Google Scholar
  65. 65.
    Schuetz M. Potree: rendering large point clouds in web browsers: Vienna University of Technology; 2016. Available from: https://publik.tuwien.ac.at/files/publik_252607.pdf. Accessed 02 Dec 2018.
  66. 66.
    Neteler M, Bowman MH, Landa M, Metz M. GRASS GIS: A multi-purpose open source GIS. Environ Model Softw. 2012;31:124–30 Available from: http://linkinghub.elsevier.com/retrieve/pii/S1364815211002775. [cited 2014 May 23].CrossRefGoogle Scholar
  67. 67.
    Brovelli MA, Longoni UM, Cannata M. LIDAR data filtering and DTM interpolation within GRASS. Trans GIS. 2004;8:155–74.CrossRefGoogle Scholar
  68. 68.
    Brovelli MA, Cannata M, Longoni UM. DTM LIDAR in area urbana. Boll della Soc Ital di Topogr e Fotogramm. 2002;2:7–26.Google Scholar
  69. 69.
    Sithole G, Vosselman G. Experimental comparison of filter algorithms for bare-earth extraction from airborne laser scanning point clouds. ISPRS J Photogramm Remote Sens. 2004;59:85–101.CrossRefGoogle Scholar
  70. 70.
    Girardeau-Montaut D. CloudCompare (version 2.9) [GPL software]. 2017. Available from: http://www.cloudcompare.org/. [cited 2017 Jan 1]Google Scholar
  71. 71.
    Qi CR, Su H, Kaichun M, Guibas LJ. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. 2017 IEEE Conf Comput Vis pattern Recognit. IEEE. 2017:77–85 Available from: http://ieeexplore.ieee.org/document/8099499/. Accessed 02 Dec 2018.
  72. 72.
    Guibas L, Stolfi J. Primitives for the manipulation of general subdivisions and the computation of Voronoi. ACM Trans Graph. 1985;4:74–123 Available from: http://portal.acm.org/citation.cfm?doid=282918.282923.CrossRefGoogle Scholar
  73. 73.
    Mcgaughey RJM. FUSION/LDV: Software for LIDAR Data Analysis and Visualization. United States Dep Agric For Serv Pacific Northwest Res Stn. 2018; Available from: http://forsys.cfr.washington.edu/Software/FUSION/FUSION_manual.pdf. [cited 2018 Dec 9]
  74. 74.
    García-Gutiérrez J, Martínez-Álvarez F, Troncoso A, Riquelme JC. A comparison of machine learning regression techniques for LiDAR-derived estimation of forest variables. Neurocomputing. 2015;167:24–31 Available from: https://www.sciencedirect.com/science/article/pii/S0925231215005524#bib27. [cited 2019 Sep 16].CrossRefGoogle Scholar
  75. 75.
    Nuechter A. 3DTK - The 3D Toolkit. 2019. Available from: http://slam6d.sourceforge.net/index.html. [cited 2019 Sep 20]Google Scholar
  76. 76.
    Elseberg J, Borrmann D, Nuchter A. Efficient processing of large 3D point clouds. 2011 XXIII Int Symp Information, Commun Autom Technol. IEEE; 2011. p. 1–7. Available from: https://ieeexplore.ieee.org/document/6102102/. Accessed 02 Dec 2018.
  77. 77.
    Trochta J, Krůček M, Vrška T. Král K. 3D Forest: An application for descriptions of three-dimensional forest structures using terrestrial LiDAR. Yang J, editor. PLoS One. 2017;12:e0176871 Available from: https://dx.plos.org/10.1371/journal.pone.0176871. [cited 2019 Sep 20].CrossRefGoogle Scholar
  78.
    Chauve A, Bretar F, Durrieu S, Pierrot-Deseilligny M, Puech W. FullAnalyze: a research tool for handling, processing and analyzing full-waveform lidar data. 2009 IEEE Int Geosci Remote Sens Symp. IEEE; 2009. p. IV-841–IV-844.
  79.
    Miltiadou M. Art-n-MathS/DASOS. 2019. Available from: https://github.com/Art-n-MathS/DASOS/. [cited 2019 Sep 16]
  80.
    Miltiadou M, Grant M, Campbell ND, Warren M, Clewley D, Hadjimitsis D. Open source software DASOS: efficient accumulation, analysis, and visualisation of full-waveform lidar. In: Papadavid G, Themistocleous K, Michaelides S, Ambrosia V, Hadjimitsis DG, editors. Seventh Int Conf Remote Sens Geoinf Environ. SPIE; 2019. p. 68. Available from: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11174/2537915/Open-source-software-DASOS%2D%2Defficient-accumulation-analysis-and-visualisation/10.1117/12.2537915.full. [cited 2019 Sep 16]
  81.
    Miltiadou M, Warren MA, Grant M, Brown M. Alignment of hyperspectral imagery and full-waveform lidar data for visualisation and classification purposes. ISPRS - Int Arch Photogramm Remote Sens Spat Inf Sci. 2015;XL-7/W3:1257–64. Available from: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-7-W3/1257/2015/.
  82.
    Cisco Visual Networking Index: Forecast and Trends, 2017–2022 - Cisco. Available from: https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html. [cited 2019 Feb 12]
  83.
    Verma U, Butler H. PLASIO. 2019. Available from: https://github.com/verma/plasio. [cited 2019 Jan 1]
  84.
    Pirotti F, Brovelli MA, Prestifilippo G, Zamboni G, Kilsedar CE, Piragnolo M, et al. An open source virtual globe rendering engine for 3D applications: NASA World Wind. Open Geospatial Data, Softw Stand. 2017;2:4. Available from: http://opengeospatialdata.springeropen.com/articles/10.1186/s40965-017-0016-5.
  85.
    Miller JM. Lidar Visual Analysis and Editing. Available from: https://people.eecs.ku.edu/~jrmiller/WorldWindProjects/lidar/index.html. [cited 2019 Feb 11]
  86.
    Nandigam V, Crosby C, Arrowsmith R, Phan M, Lin K, Gross B. Advancing analysis of high resolution topography using distributed HPC resources in OpenTopography. Proc Pract Exp Adv Res Comput 2017 Sustain Success Impact - PEARC17. New York: ACM Press; 2017. p. 1–3.
  87.
    EarthExplorer - Home. Available from: https://earthexplorer.usgs.gov/. [cited 2019 Mar 2]
  88.
    The Netherlands. PDOK - AHN3 downloads. 2019. Available from: https://downloads.pdok.nl/ahn3-downloadpage/. [cited 2019 Sep 19]
  89.
    NLS National Land Survey of Finland. File service of open data. 2019. Available from: https://tiedostopalvelu.maanmittauslaitos.fi/tp/kartta?lang=en. [cited 2019 Sep 19]
  90.
    Latvia. Digitālā augstuma modeļa pamatdati [Basic data of the digital elevation model] | Latvijas Ģeotelpiskās informācijas aģentūra [Latvian Geospatial Information Agency]. 2019. Available from: https://www.lgia.gov.lv/lv/Digitālais virsmas modelis. [cited 2019 Sep 19]
  91.
    Centro Nacional de Informacion Geografica, Instituto Geografico Nacional G de E. Centro de Descargas del CNIG (IGN) [CNIG (IGN) Download Centre]. 2019. Available from: http://centrodedescargas.cnig.es/CentroDescargas/buscadorCatalogo.do?codFamilia=LIDAR#. [cited 2019 Sep 19]
  92.
    UK Environment Agency. LIDAR Point Cloud - Datasets - data.gov.uk. 2019. Available from: https://ckan.publishing.service.gov.uk/dataset/lidar-point-cloud. [cited 2019 Sep 19]
  93.
  94.
    Portale Cartografico Nazionale. FAQ - Geoportale Nazionale. 2019. Available from: http://www.pcn.minambiente.it/mattm/faq/. [cited 2019 Sep 20]
  95.
    Portale Cartografico Nazionale. Online the New Procedure for the request of Lidar Data and / or interferometric PS - Geoportale Nazionale. 2019. Available from: http://www.pcn.minambiente.it/mattm/en/online-the-new-procedure-for-the-request-of-lidar-data-and_or-interferometric-ps/. [cited 2019 Sep 20]
  96.
    Butler H, Loskot M. libLAS - LAS 1.0/1.1/1.2 ASPRS LiDAR data translation toolset — liblas.org. Available from: https://liblas.org/. [cited 2019 Feb 12]
  97.
    Pirotti F, Ravanelli R, Fissore F, Masiero A. Implementation and assessment of two density-based outlier detection methods over large spatial point clouds. Open Geospatial Data, Softw Stand. 2018;3:14. Available from: https://opengeospatialdata.springeropen.com/articles/10.1186/s40965-018-0056-5. [cited 2018 Sep 10]
  98.
    Rusu RB, Cousins S. 3D is here: point cloud library (PCL). Shanghai: IEEE Int Conf Robot Autom; 2011.
  99.
    Kukko A, Kaartinen H, Hyyppä J. Technologies for the future: a lidar overview. GIM Int. 2019. Available from: https://www.gim-international.com/content/article/technologies-for-the-future-a-lidar-overview-2. [cited 2019 Mar 3]
  100.
    Pirotti F, Neteler M, Rocchini D. Preface to the special issue "Open Science for earth remote sensing: latest developments in software and data". Open Geospatial Data, Softw Stand. 2017;2:26. Available from: http://opengeospatialdata.springeropen.com/articles/10.1186/s40965-017-0039-y.
  101.
    Rivera JY. Tools to operate and manage early warning systems for natural hazards monitoring in El Salvador. Open Geospatial Data, Softw Stand; 2016.
  102.
    Corti P, Lewis B, Kralidis AT. Hypermap registry: an open source, standards-based geospatial registry and search platform. Open Geospatial Data, Softw Stand. 2018;3:8. Available from: https://opengeospatialdata.springeropen.com/articles/10.1186/s40965-018-0051-x. [cited 2018 Dec 8]
  103.
    Slawecki T, Young D, Dean B, Bergenroth B, Sparks K. Pilot implementation of the US EPA interoperable watershed network. Open Geospatial Data, Softw Stand. 2017;2:13. Available from: http://opengeospatialdata.springeropen.com/articles/10.1186/s40965-017-0025-4.

Copyright information

© The Author(s). 2019

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. CIRGEO, Interdepartmental Research Center of Geomatics, University of Padua, Legnaro, Italy
  2. TESAF Department, University of Padua, Legnaro, Italy
