In hyperspectral (HS) imaging, a spectrum of wavelengths is captured for every pixel. These spectra represent material properties, i.e., spectral signatures, so classification of HS imagery is based on material properties. This thesis describes a framework for pixelwise classification of HS images of a fixed scene subject to varying ambient conditions. TNO has recorded HS images over the course of one year, every hour between sunrise and sunset. The data set is therefore subject to a range of lighting, weather and seasonal conditions, which degrade the recorded data. Traditionally, atmospheric models are used to correct for these effects and recover the spectral information. In this work a Fully Convolutional Neural Network (FCN) is trained to perform the segmentation task while also learning to correct for ambient conditions, eliminating the need to implement an atmospheric model or apply image normalization.
A single FCN (U-Net) is implemented to solve a five-class segmentation problem, distinguishing broad-leaf trees, grass, sand, asphalt and artificial grass. Training a neural network requires a ground truth corresponding to the training data. A sparsely annotated mask is designed that fits images covering the entire recording period. In single-scene segmentation, annotating with a sparse mask is a quick method that tolerates moving object borders and avoids the inclusion of mixed pixels. To reduce computational load, the training data set is formed from many small patches taken from the original HS images, so the network is trained on local spectral-spatial information. Training the standard U-Net proves effective only for training data sets recorded under relatively constant ambient conditions. To further enhance generalization over seasons, the network weights are rearranged such that a similar total number of weights is maintained. This customized network is essential for training on a complex data set, as it is able to extract more informative features.
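The patch-based training setup described above can be sketched as follows. This is an illustrative Python/NumPy sketch, not the thesis implementation: the patch size, the label encoding and the sentinel value for unlabeled pixels in the sparse mask are assumptions.

```python
import numpy as np

def extract_patches(cube, mask, patch_size=9, ignore_label=-1):
    """Collect small spectral-spatial patches centered on annotated pixels.

    cube : (H, W, B) hyperspectral image with B spectral bands
    mask : (H, W) sparse ground-truth mask; unlabeled pixels hold
           `ignore_label` (sentinel value is an assumption)
    Returns patches of shape (N, patch_size, patch_size, B) and labels (N,).
    """
    r = patch_size // 2
    ys, xs = np.where(mask != ignore_label)  # only annotated pixels
    patches, labels = [], []
    for y, x in zip(ys, xs):
        # skip centers whose patch would fall outside the image
        if r <= y < cube.shape[0] - r and r <= x < cube.shape[1] - r:
            patches.append(cube[y - r:y + r + 1, x - r:x + r + 1])
            labels.append(mask[y, x])
    return np.stack(patches), np.array(labels)

# Toy example: a tiny cube with two sparsely annotated pixels
# (classes 0..4 would correspond to the five scene classes).
cube = np.random.rand(32, 32, 30).astype(np.float32)
mask = np.full((32, 32), -1)
mask[10, 10], mask[20, 15] = 0, 3
X, y = extract_patches(cube, mask)
```

Because only annotated pixel positions generate patches, the sparse mask keeps mixed border pixels out of the training set while still supplying local spectral-spatial context around each labeled pixel.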
The standard U-Net trained on a simple training data set (i.e., relatively constant ambient conditions) achieves an accuracy A > 94% for both sunny and rainy test images, irrespective of the time of day. However, this model is valid for a limited period of time. The customized U-Net trained on a complex training data set (i.e., highly varying ambient conditions) yields segmentations with an accuracy between 86% and 93%. This model is valid for a longer period of time, covering multiple seasons. The experiments thus show a trade-off between segmentation accuracy and duration of model validity, which is controlled by the arrangement of the network weights.