What is Image Classification in Remote Sensing?
Digital image classification techniques group pixels to represent land cover features such as forest, urban, and agricultural areas. There are three main image classification techniques.
Image Classification Techniques in Remote Sensing:
- Unsupervised image classification
- Supervised image classification
- Object-based image analysis
Pixels are the smallest unit represented in an image. Image classification uses the reflectance statistics of individual pixels. Unsupervised and supervised image classification are the two most common approaches, but object-based classification has been gaining ground in recent years.
What is the difference between supervised and unsupervised classification?
In unsupervised classification, pixels are grouped based on their reflectance properties. These groupings are called “clusters”. The user specifies the number of clusters to generate and which bands to use, and the image classification software generates the clusters. Common image clustering algorithms include K-means and ISODATA.
The user then manually assigns a land cover class to each cluster. Multiple clusters often represent a single land cover class, in which case the user merges them into one. Unsupervised classification is commonly used when no sample sites exist.
Unsupervised Classification Steps:
- Generate clusters
- Assign classes
In supervised classification, the user selects representative samples for each land cover class in the digital image. These samples are called “training sites”. The image classification software uses the training sites to identify the land cover classes in the entire image.
The classification of land cover is based on the spectral signatures defined in the training set. The software assigns each pixel to the class it most closely resembles in the training set. Common supervised classification algorithms include maximum likelihood and minimum-distance classification.
Supervised Classification Steps:
- Select training areas
- Generate signature file
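A minimum-distance classifier, one of the algorithms named above, can be sketched as follows. This is a simplified illustration under stated assumptions: the function name and the toy training sites are hypothetical, and the class means stand in for the “signature file”.

```python
import numpy as np

def minimum_distance_classify(pixels, training):
    """Minimum-distance supervised classification (a simplified sketch).

    pixels: (n, bands) reflectance values to classify.
    training: dict mapping class name -> (m, bands) array of training pixels.
    Each pixel is assigned the class whose mean signature is nearest.
    """
    names = list(training)
    # The "signature file": one mean reflectance vector per land cover class
    means = np.array([training[n].mean(axis=0) for n in names])
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return [names[i] for i in dists.argmin(axis=1)]

# Hypothetical training sites in two bands (red, near-infrared)
training = {
    "water":  np.array([[0.05, 0.02], [0.06, 0.03]]),
    "forest": np.array([[0.04, 0.50], [0.05, 0.55]]),
}
print(minimum_distance_classify(np.array([[0.05, 0.04], [0.05, 0.52]]), training))
# → ['water', 'forest']
```

Maximum likelihood works similarly but also models the covariance of each class, so it can outperform minimum distance when classes overlap spectrally.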
Object-Based (or Object-Oriented) Image Analysis Classification
Traditional pixel-based processing classifies individual square pixels. Object-based image classification instead generates objects of different shapes and scales through a process called multiresolution segmentation.
Multiresolution segmentation produces homogeneous image objects by grouping pixels. Objects at different scales are generated in an image simultaneously. These objects are more meaningful than individual pixels because they can be classified based on texture, context and geometry.
Object-based image analysis supports the use of multiple bands for multiresolution segmentation and classification. For example, infrared, elevation or existing shapefiles can be used simultaneously to classify image objects. Multiple layers can have context with each other in the form of neighborhood relationships, proximity and distance between layers.
Nearest neighbor (NN) classification is similar to supervised classification. After multiresolution segmentation, the user identifies sample sites for each land cover class and defines the statistics used to classify image objects. The object-based image analysis software then classifies objects based on their resemblance to the training sites and the defined statistics.
Object-Based Nearest Neighbor Classification Steps:
- Perform multiresolution segmentation
- Select training areas
- Define statistics
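The steps above can be sketched as a nearest-neighbor match between segmented objects and user-selected samples. This is an illustration, not eCognition's actual algorithm: the function name and the example feature vectors (mean NIR, texture variance, compactness) are assumptions standing in for the statistics the user defines.

```python
import numpy as np

def classify_objects(objects, samples, sample_labels):
    """Nearest-neighbor classification of image objects (illustrative sketch).

    objects: (n, f) feature vectors per segmented object, e.g. mean band
             values, texture, and shape metrics from the "define
             statistics" step.
    samples: (m, f) feature vectors of user-selected sample objects.
    sample_labels: list of m land cover labels for the samples.
    Each object gets the label of its nearest sample in feature space.
    """
    dists = np.linalg.norm(objects[:, None, :] - samples[None, :, :], axis=2)
    return [sample_labels[i] for i in dists.argmin(axis=1)]

# Hypothetical object features: [mean NIR, texture variance, compactness]
samples = np.array([[0.1, 0.01, 0.9],   # water: dark, smooth, compact
                    [0.5, 0.20, 0.4]])  # forest: bright NIR, textured
sample_labels = ["water", "forest"]
objects = np.array([[0.12, 0.02, 0.85], [0.48, 0.18, 0.45]])
print(classify_objects(objects, samples, sample_labels))
# → ['water', 'forest']
```

Because the feature vectors can mix spectral, textural and geometric statistics, the same mechanism captures the context that pixel-based methods ignore.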
Remote Sensing Data Trends
In 1972, the first Landsat mission was launched. Landsat measured Earth's reflectance, and satellite image classification was done using the reflectance statistics of individual pixels. Unsupervised and supervised classification were the two image classification techniques available in the 1970s.
Object-based image analysis is a growing method for image classification in digital image processing.
Over the years, there has been a growing demand for remotely-sensed data. Specific objects of interest are being monitored with earth observation data for applications such as food security, environmental concerns and public safety. For the most part, the data acquisition is from satellites and aerial photography. The emerging satellite imagery market is aiming for higher spatial resolution across a wider range of frequencies.
Remote Sensing Data Trends:
- More ubiquitous
- Higher spatial resolution
- Wider range of frequencies
This improvement in the quality of remotely-sensed data does not guarantee more accurate feature extraction. The image classification technique used is also a very important factor in accuracy.
Selection of Image Classification Techniques
Let’s say water is the feature we want to extract from a high spatial resolution digital image.
A user may decide to select all blue pixels in the image, but other pixels may be mistakenly classified as water. This is why unsupervised and supervised image classifications often have a salt-and-pepper look.
Humans naturally aggregate spatial information into groups. Multiresolution segmentation does the same by grouping homogeneous pixels into objects. Water features are easily recognizable after multiresolution segmentation, which mirrors how humans visualize spatial features.
- When should pixel-based (unsupervised and supervised classification) be used?
- When should object-based classification be used?
The spatial resolution of the imagery is an important factor when selecting image classification techniques (Blaschke, 2010).
Low to medium spatial resolution: Pixels and objects are similar in scale. Traditional pixel-based and object-based image classification techniques both perform well.
High spatial resolution: Objects are made up of many pixels. In this case, object-based image analysis is superior to traditional pixel-based classification.
Image Classification Techniques Selection
- High resolution = Object-Based
- Medium/low resolution = Object-based/pixel-based
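The selection rule above can be expressed as a small helper. The 10-pixels-per-feature threshold is an assumption for illustration; the source only distinguishes "high" from "medium/low" resolution qualitatively.

```python
def recommend_technique(pixel_size_m, feature_size_m):
    """Rule-of-thumb technique selection (a sketch of the guidance above).

    pixel_size_m: ground sample distance of the imagery, in meters.
    feature_size_m: typical size of the features to extract, in meters.
    The 10x ratio separating "high" resolution is an assumed threshold.
    """
    # High resolution: features span many pixels, so image objects are meaningful
    if feature_size_m / pixel_size_m >= 10:
        return "object-based"
    # Low/medium resolution: pixels and features are similar in scale
    return "object-based or pixel-based"

print(recommend_technique(0.5, 30))  # sub-meter aerial imagery → 'object-based'
print(recommend_technique(30, 60))   # Landsat-scale imagery → 'object-based or pixel-based'
```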
Unsupervised vs Supervised vs Object-Based Classification
A case study from the University of Arkansas compared object-based vs pixel-based classification. Color infrared high spatial resolution aerial imagery and medium spatial resolution satellite imagery were used.
Overall, object-based classification outperformed both unsupervised and supervised pixel-based classification methods in this study. The higher accuracy was attributed to the fact that object-based image classification took advantage of both spectral and contextual information in the remotely sensed imagery. This study is a good example of some of the limitations of pixel-based image classification techniques.
Growth of Object-Based Classification
There have been major advancements in technology and growing availability of high spatial resolution imagery. But the choice of image classification technique should be taken into consideration as well. The spotlight is shining on object-based image analysis to deliver quality products.
According to Google Scholar’s search results, all image classification techniques have shown steady growth in the number of publications. Recently, object-based classification has shown the most growth.
The graph displays Google Scholar’s yearly search results using the “allintitle:” search operator.
If you enjoyed this guide to image classification techniques, I recommend that you download the remote sensing image classification infographic.
1. Blaschke, T., 2010. Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing 65 (2010), 2–16.
2. Weih, R.C., Jr. and Riggan, N.D., Jr. Object-Based Classification vs. Pixel-Based Classification: Comparative Importance of Multi-Resolution Imagery.
3. Baatz, M. and Schäpe, A. Multiresolution Segmentation: an optimization approach for high quality multi-scale image segmentation.
4. Trimble eCognition Developer: http://www.ecognition.com