Multiple Data Modalities: Fusion not Confusion
Anonym
While I am truly a great advocate of the phrase, “two heads are better than one,” and might even extend the idea to “three are better than two,” when it comes to geospatial data I recognize that creating products is a great challenge given the enormous amount of information and data available today.
An obvious approach to making the most accurate decisions is to consider all the information we have available; but when it comes to big data and multiple modalities, how is that achieved? One answer you may have considered is “Data Fusion.” While the term can mean many things to many people, I generalize it to mean the ability to make more informed decisions by viewing a single problem from many different points of view.
A recent application highlighting this point was a study that combined LiDAR data with Multi-Spectral Imagery (MSI) to evaluate the proximity of houses to various tree types in the Boulder, CO foothills. The right tools for this job required the capability to process and analyze the spectral information from the MSI data, the height information from the LiDAR point cloud, and the size, texture, height, and shape of objects within the data.
The process behind this approach was to:
1) Generate a digital elevation model (DEM), a digital surface model (DSM), and building footprint vector information from the LiDAR point cloud.
2) Take advantage of the spectral information of the MSI data for tree species differentiation.
3) Build a layer stack of this information and enter object space, where attributes such as size, texture, height, and shape have meaning.
4) Process the data sets in context and build a layer composite representing all of the information in a final comprehensive product.
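The core of steps 1–4 can be sketched in a few lines of numpy. This is a minimal illustration, not the study's actual processing chain: the tiny arrays, the band choices (red and near-infrared for an NDVI vegetation index), and the height/NDVI thresholds are all assumptions for demonstration. In practice the DEM, DSM, and MSI bands would be full rasters resampled to a common grid.

```python
import numpy as np

# Hypothetical 2x2 rasters on a common grid (values are illustrative).
dem = np.array([[100.0, 100.5], [101.0, 101.5]])  # bare-earth elevation from LiDAR (m)
dsm = np.array([[112.0, 100.6], [109.0, 101.6]])  # surface elevation from LiDAR (m)
red = np.array([[0.08, 0.20], [0.06, 0.22]])      # MSI red-band reflectance
nir = np.array([[0.45, 0.25], [0.50, 0.24]])      # MSI near-infrared reflectance

# Step 1: normalized surface model = object height above ground.
ndsm = dsm - dem

# Step 2: NDVI from the MSI bands to separate vegetation from
# impervious surfaces (spectral information).
ndvi = (nir - red) / (nir + red)

# Step 3: layer stack -- each pixel now carries height and spectral
# attributes together, so rules can reason about both at once.
stack = np.dstack([ndsm, ndvi])

# Step 4: a simple fused rule (thresholds are assumptions): pixels that
# are both tall (> 2 m) and vegetated (NDVI > 0.4) are tree candidates,
# which excludes shrubs, undergrowth, and impervious surfaces.
trees = (stack[..., 0] > 2.0) & (stack[..., 1] > 0.4)
print(trees)  # -> [[ True False]
              #     [ True False]]
```

A real workflow would add object-based segmentation on top of this stack so that size, texture, and shape attributes of whole objects, not just per-pixel values, drive the classification.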

Imagery data courtesy of DigitalGlobe (WorldView-2). LiDAR data courtesy of Boulder Creek CZO and National Center for Airborne Laser Mapping (NCALM).
As you can see, the results clearly differentiate coniferous and deciduous trees while excluding shrubs, impervious surfaces, and undergrowth. The building footprint vector layer is easily overlaid for a comprehensive look at the area of interest.
Do you have a data fusion success story? How has data fusion positively impacted your geospatial analysis?