Think objects, not pixels
How amazing would it be if you could digitize every feature in an image with just a click of a button?
And on top of that, classify each feature with another click?
It sounds like magic. But these two processes, segmentation and classification, are exactly what Object-Based Image Analysis (OBIA) performs.
Let’s examine what OBIA is and how you can use it to get your work done more efficiently and accurately.
Segmentation is key to classification
Human visual perception almost always outperforms computer vision algorithms.
For example, your eyes know a river when they see one. But a computer can’t tell a river from a lake.
…Or can it?
Traditional pixel-based image classification assigns a land cover class to each pixel. All pixels are the same size and shape, and they have no concept of their neighbors.
OBIA, on the other hand, segments an image by grouping pixels together into vector objects. Instead of working on a per-pixel basis, segmentation automatically digitizes the image for you.
In effect, segmentation replicates what your eyes are doing.
You then use each object’s spectral, geometric and spatial properties to classify it into a land cover class.
By contrast, traditional pixel-based classification techniques often give the result a salt-and-pepper look.
To recap, the two basic principles of OBIA are:
SEGMENTATION: Break the image up into objects representing land-based features.
CLASSIFICATION: Classify those objects using their shape, size, spatial and spectral properties.
Let’s delve a bit deeper into these two concepts.
Generate meaningful objects with segmentation
When you segment an image, the process groups pixels to form objects. Suddenly, land cover features start popping out, similar to how your eyes process your surroundings.
For this 50 cm resolution image, the multiresolution segmentation algorithm in Trimble eCognition (formerly Definiens Developer) breaks the image up into objects based on your compactness and shape settings. This segmentation is the preliminary step in OBIA.
How big do you want the objects to be? There’s a scale parameter that you can estimate to generate more meaningful objects.
Also, you can configure weights for each layer you want to segment. This means you don’t have to segment by red, green and blue alone; you can also segment a DEM, DSM, NIR band or even LiDAR intensity.
Similarly, the Segment Mean Shift tool in ArcGIS is an alternative way to segment an image for object-based image analysis. However, it doesn’t give you as many options as Trimble eCognition.
For example, you can’t set weights for multiple layers when you run the tool. What you can do is set the spectral detail, spatial detail and minimum segment size in pixels. With a bit of trial and error, we used the Raster Calculator to build a custom weighted input from an nDSM and the red band.
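That Raster Calculator workaround can be sketched in NumPy: normalize each layer, then blend them into one weighted input band before segmentation. The 0.7/0.3 weights are arbitrary illustrative values, not the settings used in the article:

```python
# Blend an nDSM and a red band into one weighted layer, mimicking
# the Raster Calculator trick described above. Weights are arbitrary.
import numpy as np

red = np.random.default_rng(1).random((50, 50))
ndsm = np.random.default_rng(2).random((50, 50)) * 30.0  # heights in metres

def normalize(a):
    # Rescale each layer to 0-1 so neither dominates purely by unit scale.
    return (a - a.min()) / (a.max() - a.min())

blended = 0.7 * normalize(ndsm) + 0.3 * normalize(red)
print(blended.shape, blended.min(), blended.max())
```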
Classify land cover features
After you segment the image, it’s time to classify each object. You can now classify because each object has statistics associated with it. For example, you can classify objects based on geometry, area, color, shape, texture, adjacency and more.
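For instance, scikit-image’s `regionprops` computes exactly these kinds of per-object statistics. The two-object label array below is a toy example, not a real segmentation:

```python
# Compute per-object statistics from a label image, the kind of
# properties OBIA classification rules are built from.
import numpy as np
from skimage.measure import regionprops

labels = np.zeros((20, 20), dtype=int)
labels[2:8, 2:8] = 1      # a compact square object
labels[12:14, 2:18] = 2   # a long thin object

for region in regionprops(labels):
    # area, eccentricity and extent are examples of the geometric
    # statistics you can classify on
    print(region.label, region.area, round(float(region.eccentricity), 2))
```

The compact square scores a low eccentricity, the thin strip a high one, which is how a rule could tell them apart.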
While the options in ArcGIS are limited, this is where the true power of Trimble eCognition lies. In this example, there are seemingly endless statistics available for classifying buildings. But which statistic is the right one to use?
Admittedly, there is no best way to classify land cover features using OBIA. However, analysts frequently use these statistics to classify land cover using OBIA:
WATER is flat (low nDSM), accumulates in depressions (high TWI or low TPI), has a low temperature (thermal infrared – TIRS) and has high near-infrared absorption (negative NDVI).
TREES have varying heights (high nDSM standard deviation) and high near-infrared reflectance (high NDVI).
BUILDINGS are often rectangular (high rectangular fit), tall (high nDSM) and have steep edges (high slope).
GRASS is short (low nDSM), flat (low nDSM standard deviation) and has moderate near-infrared reflectance (moderate NDVI).
ROADS reflect a lot of light (high RGB brightness), are flat (low nDSM) and have a low or negative NDVI.
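The rules above can be sketched as a simple decision function. The thresholds are illustrative assumptions you would tune per scene, not values from the article:

```python
# A toy rule-based classifier over per-object statistics.
# All thresholds are made-up illustrative values.
def classify_object(ndsm_mean, ndsm_std, ndvi, rect_fit, brightness):
    if ndsm_mean < 0.5 and ndvi < 0:
        # both are flat with low/negative NDVI; brightness separates them
        return "water" if brightness < 0.3 else "road"
    if ndsm_mean < 0.5 and ndvi >= 0.2:
        return "grass"
    if ndsm_std > 2.0 and ndvi >= 0.4:
        return "trees"
    if ndsm_mean > 2.0 and rect_fit > 0.8:
        return "building"
    return "unclassified"

print(classify_object(0.1, 0.05, -0.2, 0.2, 0.1))  # water
print(classify_object(8.0, 0.5, 0.05, 0.9, 0.4))   # building
```

A real ruleset would chain many more conditions, but the shape is the same: each rule tests a handful of object statistics against thresholds.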
You can set up rulesets, which are pre-defined steps for segmenting and classifying objects. Similar to ModelBuilder in ArcGIS, a ruleset steps through each process until it finishes.
Alternatively, Trimble eCognition offers nearest neighbor classification, where you classify objects based on samples you define.
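A nearest neighbor classifier of this kind can be sketched with scikit-learn; the (mean NDVI, mean nDSM) sample values below are made up for illustration:

```python
# Nearest-neighbor classification of objects from labeled samples,
# sketched with scikit-learn. Feature values are illustrative.
from sklearn.neighbors import KNeighborsClassifier

# One (mean NDVI, mean nDSM) feature pair per training sample object.
samples = [[0.6, 0.2],   # grass: green and low
           [0.7, 9.0],   # trees: green and tall
           [-0.1, 6.0],  # building: not green, tall
           [-0.3, 0.1]]  # water: strong NIR absorption, flat
labels = ["grass", "trees", "building", "water"]

knn = KNeighborsClassifier(n_neighbors=1).fit(samples, labels)

# A green, tall unknown object lands closest to the tree sample.
print(knn.predict([[0.65, 8.5]])[0])  # trees
```

In practice you would scale the features first, since nDSM heights in metres dominate unit-less NDVI values in a raw Euclidean distance.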
Sharper images = More advanced image classification
In 1972, Landsat-1 sparked a revolution in how we monitor our Earth. With the US government relaxing regulations on high-resolution satellite data, the uptrend in sharper imagery is simply remarkable.
It’s not only satellites like WorldView or Planet Labs; the use of LiDAR and drones is also seeing a healthy uptick. And the way we classify images has progressed from unsupervised classification to more sophisticated object-based image classification.
When a single pixel in a Landsat-1 scene contained several buildings, there was little point in object-based image analysis. A Landsat-1 scene couldn’t distinguish buildings from parks, so unsupervised and supervised classification were enough. But the new breed of high-resolution data calls for OBIA: you segment and classify it into more meaningful land cover. This is the trend in the remote sensing community.
Otherwise, traditional image classification techniques give an unwanted salt-and-pepper classification.
OBIA – Object-based Image Analysis
OBIA has its roots in cellular biology, where it was first used to dissect image scans. The term GEOBIA (Geographic Object-Based Image Analysis) distinguishes the geographic application from those medical origins.
Crisper images, more spectral bands, and an explosion of data acquisitions can help solve today’s problems.
To make sense of all this information, we need OBIA or object-based image analysis to automate some of the work for us.
As each day passes by, satellites collect ridiculous volumes of data silently in orbit… But what good is satellite data if you don’t know how to use it?
OBIA is about mass production. You create a ruleset, run it and edit your classification as necessary.