Friday, November 28, 2014

Change Detection Tutorials

I have two new videos out demonstrating differing approaches to change detection using the 2013 Yosemite Rim Fire as the example.  Please note that both videos illustrate a highly simplified workflow using Landsat data.  Change detection is exceedingly complex and I did not go into any detail on topics such as radiometric correction.

The first video shows a pixel-based image differencing approach followed by an unsupervised classification within ArcGIS.
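If you want to experiment with the same idea outside ArcGIS, here is a minimal Python sketch of pixel-based differencing followed by a simple unsupervised (k-means) classification. The filenames are hypothetical, the rasters are assumed to be co-registered single bands, and (as noted above) radiometric correction is skipped.

```python
# Minimal sketch: pixel-based image differencing followed by an unsupervised
# (k-means) classification of the difference image. Filenames are hypothetical
# and pre-processing (radiometric correction, cloud masking) is skipped.
import numpy as np
import rasterio
from sklearn.cluster import KMeans

with rasterio.open("landsat_pre_fire_band.tif") as pre_src, \
     rasterio.open("landsat_post_fire_band.tif") as post_src:
    pre = pre_src.read(1).astype("float32")
    post = post_src.read(1).astype("float32")
    profile = pre_src.profile

diff = post - pre  # simple band differencing

# Cluster the per-pixel differences into a handful of change/no-change classes.
labels = KMeans(n_clusters=4, n_init=10, random_state=0) \
    .fit_predict(diff.reshape(-1, 1)) \
    .reshape(diff.shape)

profile.update(dtype="int32", count=1, nodata=None)
with rasterio.open("change_classes.tif", "w", **profile) as dst:
    dst.write(labels.astype("int32"), 1)
```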


In the second video we move over to eCognition and take an object-based approach to change detection.

Monday, November 3, 2014

Open Street Map Mapathon, Ebola Outbreak

On Sunday, October 19th, the Spatial Analysis Lab hosted an Open Street Map (OSM) Mapathon to support the efforts of Doctors Without Borders, WHO and other aid organizations in their fight to contain the spread of the Ebola virus. This effort was coordinated by the SAL's own Noah Ahles. The event brought together UVM staff and students along with community volunteers. We digitized geographic features in Sierra Leone (roads, schools, residential areas, potential helicopter landing sites, etc.) creating a digital dataset that can be downloaded and accessed by aid workers on the ground minutes after the edits are completed.

Digital data is essential to understanding the situation in West Africa and to planning an effective response. Most of the time, areas in need of humanitarian aid do not have the geospatial data required to facilitate a proper aid response. OSM substantially increases the efficiency of humanitarian aid by providing up-to-date geographic and anthropogenic digital data within an area of interest. Furthermore, OSM data are being used to calculate statistics (e.g. population estimates) and to derive contextual information (e.g. potential routes of spread).

One of the most powerful aspects of OSM is that no experience is required in GIS or mapping; anyone can join in and contribute. Moreover, no software is required; all of the editing is done on a web browser and can be done with a laptop, mouse and internet connection. Fourteen people attended our mapathon, many of whom had never used any mapping programs before, and together we digitized over 22,000 features.
Students and community members working with Open Street Map in the Aiken Center, UVM
To put our efforts into perspective, volunteers from all over the world have come together to map Liberia, Guinea and Sierra Leone. Since March, more than 9 million objects have been edited by 1,700 volunteers. This is a continuing effort and every edit goes a long way. The following images are areas that have been mapped recently through OSM.


Port Loko, Sierra Leone before and after OSM’s recent activation


Freetown, Sierra Leone, before and after OSM’s recent updates


Here are some great resources/tutorials if you would like to join the effort:
If you have questions or would like to join future OSM mapathons please contact Noah at noah.ahles@gmail.com.

Thank you Alysia from Trader Joe’s, Leonardo’s Pizza, and Jim from Brennan’s for supporting this event. Also, thank you Bill Morris for all his help.

Tuesday, October 28, 2014

Canopy Height Models - An Object-Based Approach

Canopy Height Models (CHM) derived from airborne LiDAR are nearly as old as LiDAR itself.  CHMs are typically raster representations of the tree canopy, but in some cases people have used the term to describe models that represent all features above ground, whether or not those features are actually tree canopy.  A true CHM is one in which other above-ground features such as buildings and utility lines are removed.

Even if a CHM is accurate in the sense that it only represents tree canopy LiDAR returns, there are two primary limitations with most CHMs.  The first is that the CHM is stored in raster format.  Raster cells don't represent actual features, and thus the data are less accessible to decision makers who may have questions such as "Where are the tallest trees in our community located?" and "How many trees over 80 feet do we have in our parks?"  The second limitation stems from the fact that LiDAR are often acquired leaf-off, and thus a CHM derived from leaf-off LiDAR does not represent the canopy, but rather the occasional branch and stem that generated a return from the LiDAR signal.

As part of our tree canopy assessment for Boone, Campbell, and Kenton Counties in northern Kentucky, carried out in collaboration with Mike Galvin (SavATree) for the Northern Kentucky Urban and Community Forestry Council, we developed an object-based approach to canopy height mapping that overcomes the limitations of traditional CHMs.  Our object-based approach to land cover mapping integrates leaf-on imagery (A) and leaf-off LiDAR (B) to map tree canopy (C).  This process overcomes the limitations inherent in the imagery (no clear spectral signature for trees) and the LiDAR (leaf-off returns resulting in tree canopy gaps) to create a highly accurate tree canopy map.  In this project the accuracy of the tree canopy class was 99%.  We then feed the LiDAR (B) and the tree canopy (C) into a second object-based system that creates polygons approximating tree crowns and returns the max (D) and average (E) canopy height using only those LiDAR returns that are actually trees.  The result is a vector polygon database that can be easily queried and merged with other vector datasets for subsequent analysis.
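For readers who want to reproduce the final attach-heights-to-crowns step outside eCognition, a rough sketch using geopandas and rasterstats is below. The filenames and field names are hypothetical, and this is not the eCognition rule set we actually used.

```python
# Rough sketch (not the eCognition rule set used in the project): attach max and
# mean canopy height to crown-approximating polygons, then query the result.
# Filenames and field names are hypothetical.
import geopandas as gpd
from rasterstats import zonal_stats

crowns = gpd.read_file("tree_crowns.shp")            # crown-approximating polygons
stats = zonal_stats(crowns, "canopy_height.tif",     # canopy-only height raster
                    stats=["max", "mean"], nodata=-9999)

crowns["max_ht"] = [s["max"] for s in stats]
crowns["mean_ht"] = [s["mean"] for s in stats]

# Example query: "How many trees over 80 feet do we have?"
print((crowns["max_ht"] > 80).sum(), "crown polygons exceed 80 ft")
```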

This project would not have been possible without the LiDAR data distributed through the Kentucky from Above program.  If you would like to reference the graphic, we have posted it to FigShare.


Tuesday, October 7, 2014

The Role of LiDAR Attributes in Feature Extraction

Over the past few weeks I have noticed a number of questions in online discussion forums around the topic of how LiDAR point cloud attributes, such as classification and return number, can be used to help identify or automatically extract features.  We have numerous other posts detailing our automated feature extraction workflow, specifically how we use an object-based approach to extract information from LiDAR, imagery, and other data sources.  In this post I would like to turn the focus to LiDAR, specifically how the point cloud attributes can be used to highlight above-ground features such as buildings and tree canopy.

Most of the LiDAR data that we work with are acquired using the USGS specification, with an average of 1-4 points per square meter.  As LiDAR datasets are typically acquired to support topographic mapping of the earth's surface, they are flown during leaf-off conditions.  Because a LiDAR signal will be reflected by leaves, collecting the data when the leaves are off increases the chance that the laser signal will reach the ground.

As long as your LiDAR data are in LAS format, each point contains a wealth of information beyond the elevation.  The LiDAR point attributes we will be most concerned with in this post are the class and the number of returns.  You can find out more about both of these attributes by reading up on the ASPRS LAS specification.  The class is assigned to each point, typically by the contractor who processed the data, using a semi-automated approach.  The most basic LAS classification splits the points into ground points (class 2) and unclassified points (everything else, class 0 or 1).  The graphics below show an example of LiDAR data in LAS format first symbolized by elevation and then symbolized by classification.
LiDAR point cloud.  Each point is colored by its absolute elevation.  Blue represents the low elevations and red the highest elevations.
LiDAR point cloud symbolized by class.  Green is ground, magenta is overlap, cyan is water, and red is unclassified.  Black areas are water that contain no LiDAR points as water absorbs the LiDAR signal.
The return information comes from the LiDAR sensor.  Discrete return LiDAR data typically have up to four returns.  The graphic below shows the same point cloud in which the points have been symbolized based on the number of returns.  Dense surfaces, such as buildings and ground, have a single return (red), but trees generally produce multiple returns (green, cyan, and blue).  The less dense structure of trees (particularly deciduous trees that lack leaves) creates a return at the top of the tree, then additional returns off subsequent branches, and finally a return from the ground.

LiDAR point cloud symbolized by return number.  Red indicates a single return, green - two returns, cyan - three returns, and blue - four returns.
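Both attributes are easy to inspect programmatically; the snippet below is a small sketch using the open-source laspy library (the tile name is hypothetical).

```python
# Sketch: inspect the classification and return attributes of a LAS tile with laspy.
import numpy as np
import laspy

las = laspy.read("lidar_tile.las")                    # hypothetical LAS tile

cls = np.asarray(las.classification)                  # ASPRS class codes (2 = ground)
num_returns = np.asarray(las.number_of_returns)       # total returns for each pulse

print("points per class:", dict(zip(*np.unique(cls, return_counts=True))))
print("single-return points:", int((num_returns == 1).sum()))
print("multi-return points (likely vegetation):", int((num_returns > 1).sum()))
```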
The point cloud representations above illustrate how point cloud information can provide insight into the type of feature.  For example, trees and buildings are both tall and are assigned to a class other than ground or water.  When it comes to the number of returns, we see that buildings have a single return whereas trees typically have more than one return.  The process of using a combination of class and return number to differentiate between trees and buildings becomes clearer when we generate raster surface models from the LiDAR point cloud.  A Normalized Digital Surface Model (nDSM) is a gridded dataset in which each pixel represents the height of features relative to the ground.  It is created by using the ground points (LAS class 2) to create a raster Digital Elevation Model (DEM), using the first returns to create a raster Digital Surface Model (DSM), and then subtracting the DEM from the DSM.  The example below shows the nDSM for the same area as the point cloud examples above.  Buildings and trees show up as tall (red and yellow), whereas non-tall features on the landscape such as roads and grass show up as short (blue).

Normalized Digital Surface Model (nDSM).
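As a rough illustration of that calculation, the sketch below subtracts a ground DEM from a first-return DSM, assuming both have already been gridded to the same extent and cell size (the filenames are hypothetical).

```python
# Sketch: nDSM = DSM (first returns) - DEM (ground points).
# Assumes both rasters share the same grid; filenames are hypothetical.
import rasterio

with rasterio.open("dsm_first_returns.tif") as dsm_src, \
     rasterio.open("dem_ground.tif") as dem_src:
    dsm = dsm_src.read(1, masked=True)
    dem = dem_src.read(1, masked=True)
    profile = dsm_src.profile

ndsm = (dsm - dem).filled(-9999)                      # feature height above ground

profile.update(dtype="float32", nodata=-9999)
with rasterio.open("ndsm.tif", "w", **profile) as dst:
    dst.write(ndsm.astype("float32"), 1)
```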
A similar approach is used to create a Normalized Digital Terrain Model (nDTM).  A DTM is generated from the last returns; the DEM is then subtracted from the DTM to create the nDTM.  The nDTM is very effective at highlighting buildings and suppressing trees.  This is because the last returns for buildings (dense surfaces) sit well above the ground, as the LiDAR signal does not penetrate buildings, whereas the LiDAR signal penetrates tree canopy in most cases, so the height difference between the DTM and the DEM is often low for trees.

Normalized Digital Terrain Model (nDTM).
Subtracting the nDTM from the nDSM highlights trees.  This is because the first and last returns for buildings are often nearly identical in height, so the difference is close to zero, whereas for trees the difference is typically much greater.
nDTM subtracted from the nDSM.
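Following the same pattern, here is a sketch of the nDTM and of the nDSM minus nDTM difference (again with hypothetical filenames, and assuming a last-return DTM has already been gridded).

```python
# Sketch: nDTM = DTM (last returns) - DEM (ground), then trees ~ nDSM - nDTM.
# Filenames are hypothetical; all rasters are assumed to share the same grid.
import rasterio

def read_band(path):
    """Return band 1 as a masked array plus the raster profile."""
    with rasterio.open(path) as src:
        return src.read(1, masked=True), src.profile

dtm, profile = read_band("dtm_last_returns.tif")
dem, _ = read_band("dem_ground.tif")
ndsm, _ = read_band("ndsm.tif")                       # from the previous sketch

ndtm = dtm - dem              # tall over buildings, low over penetrated tree canopy
trees = ndsm - ndtm           # large where first and last return heights differ

profile.update(dtype="float32", nodata=-9999)
with rasterio.open("ndsm_minus_ndtm.tif", "w", **profile) as dst:
    dst.write(trees.filled(-9999).astype("float32"), 1)
```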
Although these LiDAR-derived surface models are excellent sources for mapping features on their own, they are imperfect for tree canopy extraction due to their leaf-off nature.  To overcome this limitation we take an object-based approach in which we integrate the spectral information in imagery and use iterative expert systems that take context into account to reconstruct the tree canopy, filling in the gaps in the leaf-off LiDAR.  The result is a highly accurate and realistic representation of tree canopy.  In general the LiDAR gets us 80%-90% of the way there, and the imagery takes us the rest of the way.
Tree canopy extracted using an object-based approach overlaid on a hillshade layer derived from the nDSM.
Leaf-on imagery.
For more information on how to create the surface models mentioned in this post check out the Quick Terrain Modeler video tutorials.  If you want to generate raster surface models in ArcGIS this video will show you how.

Saturday, August 23, 2014

New Urban Tree Canopy (UTC) Assessment Project Map Portal

We have a new Urban Tree Canopy (UTC) Assessment Projects web mapping portal up.  The web site lists all the UTC projects completed by the USDA Forest Service's UTC assessment team (hopefully down the road we can add others) and provides key information about each project, along with the ability to download the project report and the high-resolution land cover data.  Credit for the web map goes to the brilliant Matt Bansak, with database support from the SAL's Tayler Engel.

Monday, August 18, 2014

Generating road polygons from breaklines and centerlines

A number of years ago LiDAR was acquired for the entire state of Pennsylvania through the PAMAP program.  The LiDAR data are currently available from PASDA and are a great resource.  In addition to point cloud and raster surface models, the deliverables also included breaklines.  Breaklines are great, but they are just lines representing the edges of the roads.  What if you want to calculate the actual road surface?  Using the road breaklines in combination with existing county road centerline data, we developed an automated routine within eCognition to turn the breakline and centerline data into road polygons so that the actual paved road area can be computed.  This is another example of how the term "Object-Based Image Analysis" or "OBIA" no longer fits the type of work that we are doing with eCognition.

Here is how we went about it (a rough open-source sketch of the core idea follows the steps).
1) Turn the breaklines and centerlines into image objects.

2) Compute the Euclidean distance to the road centerlines.

3) Classify objects based on their relative border to the centerlines and breaklines, and the distance to centerlines.

4) Clean up the classification based on the spatial arrangement of the road polygons.

5) Vectorize the road objects and simplify the borders (yellow lines are the vector polygon edges, pink polygons are the original image objects).
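The rule set itself lives in eCognition, but for readers who want to experiment with the core idea in open-source tools, the snippet below polygonizes the breaklines and keeps the faces that contain a road centerline. It is a simplification of the workflow above, and the filenames are hypothetical.

```python
# Rough open-source analog of the core idea (not the eCognition rule set):
# polygonize the road breaklines, keep the faces that intersect a centerline,
# and report the paved area. Filenames are hypothetical.
import geopandas as gpd
from shapely.ops import polygonize

breaklines = gpd.read_file("road_breaklines.shp")
centerlines = gpd.read_file("road_centerlines.shp")

# Node the breaklines and turn the enclosed faces into candidate polygons.
faces = gpd.GeoDataFrame(
    geometry=list(polygonize(breaklines.geometry.unary_union)),
    crs=breaklines.crs,
)

# Keep only the faces that actually contain a road centerline.
roads = faces[faces.intersects(centerlines.geometry.unary_union)].copy()

roads["area"] = roads.geometry.area                   # paved area in map units
roads.to_file("road_polygons.shp")
print("total paved area:", roads["area"].sum())
```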


Ecology of Prestige, exploring the evidence in NYC

We have a new paper in Environmental Management that uses the SAL’s signature high-resolution, 7-class, LiDAR-derived land cover data (3ft version available here for free). The current study replicates and extends a previous paper that uses the SAL’s land cover data in Baltimore. Between our work on mapping, assessing, estimating the carbon abatement potential, and the effects of tree canopy on asthma and air quality, this dataset is getting quite a bit of use. We hope that because the data are freely available, others will continue to use them.

Using some fancy spatial statistics, we are able to show that even after controlling for population density (available space for trees) and socioeconomic status (available resources for trees), there is still quite a bit of variation – much of which is explained by lifestyle characteristics.

We conclude: “To conserve and enhance tree canopy cover on private residential lands, municipal agencies, non-profit organizations, and private businesses may need to craft different approaches to residents in different market segments instead of a “one-size-fits-all” approach.  Different urban forestry practices may be more appealing to members of different market segments, and policy makers can use that knowledge to their advantage.  In this case, advocates may consider policies and plans that address differences among residential markets and their motivations, preferences, and capacities to conserve existing trees and/or plant new trees.  Targeting a more locally appealing message about the values of trees with a more appropriate messenger tailored to different lifestyle segments may improve program effectiveness for tree giveaways.  Ultimately, this coupling of theory and action may be essential to providing a critical basis for achieving urban sustainability associated with land management.”


The paper was co-written with Northern Research Station scientist J. Morgan Grove and Clark University doctoral student Dexter Locke.