It has been said before, and it is certainly worth repeating, that object-based image analysis (OBIA) represents a paradigm shift in the analysis of remotely sensed data. As the name implies, OBIA centers on the application of segmentation algorithms to group pixels into objects. Objects, particularly in high-resolution imagery, are far more meaningful than pixels because they carry spectral, spatial, and topological information.
Digging through the peer-reviewed literature from the past several years, there seems to be very little written about ensuring that the objects created by segmentation algorithms actually represent the features of interest. Unfortunately, most of the literature has focused on comparisons of image segmentation algorithms and discussions of “optimal” segmentation parameters. While intellectually stimulating, these writings ignore the fundamental issue in land cover mapping applications of OBIA: image objects need to approximate the polygons a human would digitize. I would argue that “optimal” has nothing to do with the particular algorithm or the parameters used, and everything to do with the quality of the output. It is only when objects represent features of interest that we can begin to apply some of the key elements of image interpretation, such as size and shape, to assign those objects to meaningful classes.
So what is the best way to create meaningful objects? We need to recognize that any segmentation algorithm is just that: an algorithm. It has no way of knowing which objects you are interested in, nor can we expect it to replicate human cognition in a single run. Over the past few months the SAL has put a lot of thought into this process of creating meaningful objects. We’ve come to the conclusion that the biggest problem with the vast majority of papers in the peer-reviewed literature is that they follow a simple linear process: image objects are created with a segmentation algorithm and then immediately assigned to a land cover class.
We believe that the best way to create meaningful objects, and in turn accurate land cover maps, is to take an iterative approach. There is still an initial segmentation, but once we have image objects they are subjected to additional operations beyond classification, such as morphology, fusion (with other image objects), and re-segmentation. In many cases we find it takes a combination of these processes, sometimes applied in loops, to create meaningful objects.
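To make the iterative idea concrete, here is a minimal toy sketch in plain Python: an initial over-segmentation via 4-connected region growing, followed by a loop that fuses undersized objects into their most spectrally similar neighbor until every object passes a size test. All function names, parameters, and the single-band grid are illustrative assumptions for this sketch; this is not the Definiens/eCognition rule-set API, and real workflows would use proper segmentation and morphology operators.

```python
def segment(image, tol):
    """Initial segmentation: 4-connected region growing that groups
    adjacent pixels whose values differ by at most `tol`."""
    rows, cols = len(image), len(image[0])
    labels = [[-1] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] != -1:
                continue
            stack = [(r, c)]
            labels[r][c] = next_label
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] == -1
                            and abs(image[ny][nx] - image[y][x]) <= tol):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels

def fuse_pass(image, labels, min_size):
    """One fusion pass: merge the smallest undersized object into its
    most spectrally similar neighbor. Returns (labels, merged_flag)."""
    rows, cols = len(image), len(image[0])
    size, total, neighbors = {}, {}, {}
    for r in range(rows):
        for c in range(cols):
            lab = labels[r][c]
            size[lab] = size.get(lab, 0) + 1
            total[lab] = total.get(lab, 0) + image[r][c]
            for dy, dx in ((1, 0), (0, 1)):  # adjacency from right/down pairs
                ny, nx = r + dy, c + dx
                if 0 <= ny < rows and 0 <= nx < cols and labels[ny][nx] != lab:
                    neighbors.setdefault(lab, set()).add(labels[ny][nx])
                    neighbors.setdefault(labels[ny][nx], set()).add(lab)
    small = [lab for lab in size if size[lab] < min_size and neighbors.get(lab)]
    if not small:
        return labels, False
    lab = min(small, key=lambda l: size[l])
    mean = lambda l: total[l] / size[l]
    target = min(neighbors[lab], key=lambda n: abs(mean(n) - mean(lab)))
    labels = [[target if l == lab else l for l in row] for row in labels]
    return labels, True

def build_objects(image, tol=10, min_size=3, max_passes=100):
    """Iterative workflow: segment once, then loop fusion passes
    until every object is at least `min_size` pixels."""
    labels = segment(image, tol)
    for _ in range(max_passes):
        labels, changed = fuse_pass(image, labels, min_size)
        if not changed:
            break
    return labels

# Toy single-band "image": a dark field, a bright field, and one
# isolated mixed pixel that no single segmentation pass handles well.
image = [
    [10, 10, 10, 80],
    [10, 10, 10, 80],
    [10, 10, 50, 80],
    [10, 10, 80, 80],
]
labels = build_objects(image, tol=10, min_size=3)
# The isolated 50-valued pixel is fused into the brighter 80 region,
# leaving two meaningful objects instead of three.
```

The point of the sketch is the loop in `build_objects`: the segmentation output is treated as a starting point, and objects are repeatedly refined until they satisfy a criterion of "meaningfulness" (here, a crude size threshold standing in for richer shape and context rules).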
Below is a short video recap of the presentation I gave on this topic at the AmericaView Fall Technical Meeting held at the USGS EROS Data Center. If you are a Definiens user and would like to see an example rule set and project that employs this methodology, you can download the one I demoed at the meeting from the eCognition Community.