Extracting impervious surfaces with multi-source remote sensing data

My placement started on June 3rd, and it has been two weeks now. During this time, I have completed several tasks and made good progress.

Before my placement started, I met with Annemarie, my academic supervisor. She gave me feedback and raised some questions about my proposal, which gave me a good starting point for my work. At the kick-off meeting with my host supervisor, we discussed some of those questions and decided to use orthoimagery from Ayres as the data source. The imagery contains red, green, blue, and near-infrared (NIR) bands and has a resolution of 0.5 feet. The study area is Middleton, and impervious surface will be used to identify urban areas.

At the beginning, we didn't have data for Middleton, so I used a NAIP image of the Middleton area to run a supervised classification and check whether this method works for high-spatial-resolution images. The model training and classification were done with Orfeo Toolbox, an open-source remote sensing project that is accessible from Python. The overall accuracy of the Random Forest classifier was 99%, and the classification map looks good.
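For readers unfamiliar with the accuracy metric: overall accuracy is just the number of correctly classified test pixels divided by the total, i.e. the trace of the confusion matrix over its sum. The small sketch below shows the computation; the matrix values are made up for illustration, not my actual NAIP results.

```python
# Illustrative only: how overall accuracy (OA) is computed from a
# confusion matrix. The counts below are hypothetical, not the real
# Middleton/NAIP test results.

def overall_accuracy(confusion):
    """Trace of the confusion matrix divided by the total sample count."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Rows = reference classes, columns = predicted classes
# (urban, vegetation, water); counts are invented.
cm = [
    [480,   5,   1],
    [  6, 392,   0],
    [  0,   1, 115],
]

print(f"OA = {overall_accuracy(cm):.1%}")  # prints "OA = 98.7%"
```

Per-class metrics (producer's/user's accuracy) come from the same matrix, which is why keeping separate training and testing sites, as in the figure below, matters.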

NAIP image (left) with training sites (red) and testing sites (yellow), and classification map (right), with urban in grey, vegetation in green, and water in blue

At the end of the spring semester, Mutlu taught us how to use Orfeo Toolbox, which gave me some basic knowledge of it. The tool is convenient and lets me process images with less effort. However, I am still learning how to access it from Python and how to use Python for batch processing.

Then we switched the data to orthoimagery. We used images of Madison, but the results were always poor, no matter whether I used supervised classification or object-based segmentation. I later discovered that the Madison imagery only has fill data in the NIR band, with the DN values of all pixels set to 255.
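In hindsight, a quick sanity check would have caught this before any classification run: a band that is pure fill data has a single DN value everywhere. A minimal sketch, with band arrays shown as plain nested lists (real imagery would come from a reader such as GDAL or rasterio):

```python
# Sanity check for a fill-data band: every pixel equals one constant
# DN value (255 in the Madison NIR band). Pixel values are toy data.

def is_fill_band(band, fill_value=255):
    """True if every pixel in the band equals the fill value."""
    return all(px == fill_value for row in band for px in row)

nir = [[255, 255], [255, 255]]   # what the Madison NIR band looked like
red = [[87, 102], [95, 110]]     # a normal band with real variation

print(is_fill_band(nir))  # True  -> the band carries no information
print(is_fill_band(red))  # False
```

A constant band contributes nothing to the classifier, which explains why both supervised and object-based approaches struggled on the Madison images.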

Orthoimagery of the Madison area (top-left), supervised classification result (top-right), and object-based classification (bottom-left)

The solution was to switch to a different county in Wisconsin where the NIR band is valid and the resolution is similar to that of the Madison images. The results look much better on this image, which indicates that supervised classification can work well on high-spatial-resolution images.

From left to right: true color image, false color image, Random Forest classification map, and SVM classification map

This Tuesday, we received the data for the Middleton area from Ayres, which includes orthoimagery from 2017 and 2014 and LiDAR data from 2017 and 2014. Thanks to ENVIRST 556 and Konrad, I have learned how to derive a DEM and a DSM from LiDAR data. The difference between the DSM and the DEM gives the heights of ground objects.
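The height model is just a cell-by-cell difference, nDSM = DSM − DEM: ground cells come out near zero, while buildings and trees stand out. A minimal sketch with toy elevation grids (values invented, in feet, not real LiDAR output):

```python
# Height above ground as a cell-by-cell raster difference:
# nDSM = DSM - DEM. The 2x3 grids below are toy elevations in feet.

def height_grid(dsm, dem):
    """Subtract bare-earth elevation (DEM) from surface elevation (DSM)."""
    return [[s - g for s, g in zip(srow, grow)]
            for srow, grow in zip(dsm, dem)]

dsm = [[912.0, 913.5, 940.5],
       [911.0, 935.0, 941.0]]
dem = [[912.0, 912.5, 910.5],
       [911.0, 910.0, 911.0]]

heights = height_grid(dsm, dem)
print(heights)  # [[0.0, 1.0, 30.0], [0.0, 25.0, 30.0]]
```

The 0.0 and 1.0 cells would be ground or low vegetation; the 25–30 ft cells would be buildings or trees.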

Orthoimagery provides spectral information, but in shadowed areas that information is not very useful. Besides, some fields or bare soil may have spectral signatures similar to roofs in urban areas, making them hard to distinguish. LiDAR data, however, can separate roads, grassland, roofs, and trees, and helps identify the land cover type under shadows. There are obvious height differences between urban features, such as buildings and roads, and vegetation, such as trees and grassland, so LiDAR supplies height information that traditional remote sensing classification lacks. Consequently, the height information derived from the LiDAR data will be added to the orthoimagery as a new band, and the new image will be used for supervised classification.
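Conceptually, the stacking step turns each pixel's feature vector from (R, G, B, NIR) into (R, G, B, NIR, height). A tiny sketch of that idea, with invented pixel values (in practice the stacked raster would be written out with a tool like GDAL and fed to the classifier):

```python
# Sketch of the band-stacking step: each pixel's feature vector grows
# from (R, G, B, NIR) to (R, G, B, NIR, height). Values are invented.

def stack_height_band(spectral, height):
    """Append the LiDAR-derived height to each pixel's spectral bands."""
    return [[list(px) + [h] for px, h in zip(srow, hrow)]
            for srow, hrow in zip(spectral, height)]

spectral = [[(120, 98, 84, 60), (34, 80, 30, 140)]]  # (R, G, B, NIR)
height   = [[9.5, 0.2]]                              # feet above ground

stacked = stack_height_band(spectral, height)
print(stacked[0][0])  # [120, 98, 84, 60, 9.5] -> bright and tall: roof?
print(stacked[0][1])  # [34, 80, 30, 140, 0.2] -> high NIR, low: grass?
```

The two toy pixels show why the extra band helps: a roof and a field with similar brightness are still separated by height.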

2D view (left) and 3D view (right) of LiDAR data

These two weeks have been a good start to my placement, and I have put a lot of what I've learned into practice: supervised classification, object-based analysis, deriving LiDAR products, Orfeo Toolbox, and Python coding. In the coming weeks, I am going to learn and practice more skills and techniques, and I hope things will go as smoothly as I expect.
