Georeference, Image Registration, & Orthorectification
In a perfect world when using imagery to help solve a problem – such as delineating an oil spill or evaluating crop yields in a precision agriculture project – we would always work with imagery where the information stored within each pixel represents a true location in XYZ, 3-dimensional space. Unfortunately, that is not always the case, and many times sensors do not deliver products with the positional precision needed. Other sensors may generate geolocation information that is not embedded or otherwise associated with the image. In these cases and others, what are some of the options to improve the correlation between the information within a pixel and the location it represents in XYZ space?
In the event that an image is delivered without any geolocation information, it is often necessary to utilize an outside source like a DOQQ (digital orthophoto quarter quadrangle) or even an online map such as Google Maps covering a corresponding area. In a “worst case” scenario it might be necessary to find features in the reference source that are also present in your image. Intersections or other unchanging man-made objects make good features to reference. Find the geolocation of each feature from the reference map and attribute the corresponding image pixel locations with that information. With several such well-distributed control points, your image is now tied to a location on Earth.
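As a rough illustration of how those control points turn into a georeferenced image, the sketch below fits a 2-D affine pixel-to-map transform to a handful of hypothetical GCP pairs by least squares. The coordinate values, and the choice of a plain affine model rather than a higher-order polynomial, are illustrative assumptions, not any particular package's internal method:

```python
import numpy as np

# Hypothetical ground control points gathered from a reference map:
# (pixel_col, pixel_row) in the image  ->  (easting, northing) on the map.
pixel_pts = np.array([[10.0, 20.0], [500.0, 40.0],
                      [30.0, 480.0], [510.0, 470.0]])
map_pts = np.array([[300010.0, 4099980.0], [300500.0, 4099960.0],
                    [300030.0, 4099520.0], [300510.0, 4099530.0]])

def fit_affine(src, dst):
    """Least-squares fit of a 2-D affine transform mapping src -> dst."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3x2 coefficient matrix
    return coeffs

def apply_affine(coeffs, pts):
    """Apply the fitted transform to an array of (x, y) points."""
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([pts, np.ones((len(pts), 1))])
    return A @ coeffs

T = fit_affine(pixel_pts, map_pts)
print(apply_affine(T, [[10.0, 20.0]]))  # close to (300010, 4099980)
```

With more than three point pairs the fit is overdetermined, so the residuals at each GCP give a quick sanity check on how well the simple model explains the geometry.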
Many projects call for data coverage over a very large area. Sometimes image collections not only cover that area but also have a high degree of overlap. One of the most common needs with such a collection of imagery is to stitch the images together to form a mosaic. In order to create a good mosaic, the images should first be co-registered to account for any temporal anomalies.
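To make the stitching step concrete, here is a minimal NumPy sketch of mosaicking: each tile carries a (row, column) offset into a shared pixel grid (i.e., co-registration has already been done), and overlapping pixels are blended by simple averaging. The `mosaic` helper and its inputs are hypothetical, and a production mosaic would also handle feathering, color balancing, and seamline placement:

```python
import numpy as np

def mosaic(tiles):
    """Average-blend co-registered tiles into a single array.

    `tiles` is a list of (array, (row_off, col_off)) pairs whose offsets
    place each tile on a shared pixel grid. Pixels covered by no tile
    are filled with 0.
    """
    rows = max(off[0] + t.shape[0] for t, off in tiles)
    cols = max(off[1] + t.shape[1] for t, off in tiles)
    total = np.zeros((rows, cols))
    count = np.zeros((rows, cols))
    for t, (r, c) in tiles:
        total[r:r + t.shape[0], c:c + t.shape[1]] += t
        count[r:r + t.shape[0], c:c + t.shape[1]] += 1
    # Average where tiles overlap; leave uncovered pixels at 0.
    return np.where(count > 0, total / np.maximum(count, 1), 0.0)

tiles = [(np.ones((2, 2)), (0, 0)), (3 * np.ones((2, 2)), (0, 1))]
print(mosaic(tiles))  # overlap column averages to 2
```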
These anomalies are sometimes introduced by inherent IMU (inertial measurement unit) error between image captures. In the emerging world of unmanned aircraft collects, it is also common to see images that are roughly geolocated but rotated or otherwise shifted between collections. In these cases especially, good image co-registration is essential before building a mosaic.
There are several ways to perform image registration. One image can be registered to another, where the “base” image represents the more accurate of the two and the image being registered is warped to that reference. One can also perform image-to-map registration, where the process is just like it sounds: the image is warped to the basemap. Advanced algorithms such as HYPARE, NMI, and others are also available in modern software packages to perform highly accurate image registration.
Images can be orthorectified – which is the process of truly tying a pixel to a real location in 3-dimensional XYZ space – using a mathematical model with rational polynomial coefficients (RPCs) or using a geometric model which considers an internal sensor model. These methods are known respectively as RPC Orthorectification and Rigorous Orthorectification.
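To show what an RPC model actually computes, the sketch below evaluates the rational-polynomial mapping from ground coordinates to image line/sample: each image coordinate is the ratio of two 20-term cubic polynomials in normalized longitude, latitude, and height. The term ordering follows the common RPC00B convention; the dictionary layout of `rpc` is my own illustrative structure, and real coefficient values come from the data provider:

```python
import numpy as np

def rpc_terms(L, P, H):
    """The 20 cubic polynomial terms in normalized lon (L), lat (P),
    height (H), in RPC00B order."""
    return np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                     P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H,
                     L*L*H, P*P*H, H**3])

def rpc_project(lon, lat, h, rpc):
    """Map a ground point to (line, sample) using an RPC model.

    `rpc` holds the offset/scale normalization values plus four 20-term
    coefficient arrays (numerator and denominator for line and sample).
    """
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    H = (h - rpc["h_off"]) / rpc["h_scale"]
    t = rpc_terms(L, P, H)
    line_n = (rpc["line_num"] @ t) / (rpc["line_den"] @ t)
    samp_n = (rpc["samp_num"] @ t) / (rpc["samp_den"] @ t)
    return (line_n * rpc["line_scale"] + rpc["line_off"],
            samp_n * rpc["samp_scale"] + rpc["samp_off"])
```

Orthorectification runs this mapping for every output pixel: intersect the pixel's ground location with a DEM to get the height, project through the RPCs to find the source image coordinate, and resample.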
Many modern sensors include RPCs with image delivery for just this purpose. For a collection of images that do not have associated RPCs, if you know some key properties about the internal camera orientation and external environment it is often possible to build RPCs. Automated tie point generation, a DEM, and a couple of ground control points can make the process of RPC orthorectification very accurate.
Perhaps one of the best ways to truly remove all geometric and sensor distortions and get a true representation of the location of each pixel in XYZ space is to use a rigorous orthorectification approach. For additional information and demonstration of these capabilities of using ENVI, join me for an ENVI Rapid Learning Series webinar on November 20! Register today and I’ll see you there!