As a result, aerial photographs are a source of valuable historical information on vegetation cover and condition (Cohen et al.). Aerial photographs can reduce the costs involved in mapping, inventorying, and planning (Paine and Kiser) and, as such, are used for applications ranging from forest inventories and disturbance mapping to productivity estimates and wildlife management (Avery and Berlin). Thus, many important management decisions are routinely made on the basis of maps derived from aerial photographs (Cohen et al.).
Proliferation of satellite imagery over the past few decades has influenced the use and perceived utility of aerial photography in several contrasting ways (table 1). Satellite imagery, with its broad spatial coverage and regular revisitation frequency, has provided researchers and managers with a cost-effective alternative to aerial photography. This alternative has contributed to a shift in emphasis of university curricula, and of the training of spatial analysts, away from aerial photographs (Sader and Vermillion) and toward digital platforms.
However, a lack of long-term satellite imagery prior to the s limits the use of satellite data in change-detection analyses to the past three decades, underscoring the value of longer-term aerial photographs. In addition, the spatial resolution of the most widely available and free satellite imagery is generally coarser than that of aerial photographs (Tuominen and Pekkarinen). One important development associated with the recent emphasis on satellite imagery, however, has been the advent of a wide range of digital image analysis techniques.
While many of these techniques were originally developed for satellite imagery, they have also expanded the range of analysis techniques now available for aerial photographs. Comparative advantages and disadvantages associated with traditional film-based aerial photography, digital aerial photographs, and satellite imagery. Despite the many advantages of aerial photographs, there are specific challenges to using them, especially with respect to manual aerial photograph interpretation.
Although manual interpretation by highly trained individuals remains one of the most effective and commonly used approaches for classification of aerial photographs (Wulder), this technique relies greatly on the personal experience, knowledge, and expectations of the interpreter for a given location.
Thus, human interpretations are subjective, and are vulnerable to inconsistency and error. In addition, resource management agencies are beginning to face a shortage of well-trained interpreters, especially those whose skills have ideally been combined with years spent in the field. As a result, there is a need for new approaches to reduce or eliminate these difficulties associated with traditional aerial photograph analysis to help foster its continued and evolving use. Motivated by the unique information available from aerial photographs, by recent developments in digital analysis techniques, and by what we believe is a need to reinvigorate training and research in ecological management using aerial photography, we review and develop several important themes.
First, we provide an overview of aerial photographs, along with a generalized discussion of the challenges inherent in their use, and highlight the ecological importance of the eight essential characteristics used in traditional, manual interpretation. Second, we examine how digitized aerial photographs may be analyzed using alternative analysis techniques to provide more consistent information using more efficient means.
We end with several examples of emerging ecological management questions that may be best addressed through the use of aerial photographs.
Our overall aim is to highlight the unique value that aerial photographs hold for ecosystem management, and to explore possible synergies between new technologies and traditional approaches for using aerial photographs. Aerial photography is the collection of photographs using an airborne camera. Photographs are essentially a representation of the reflectance characteristics (relative brightness) of features recorded onto photographic film.
More specifically, reflectance is recorded by the film's emulsion, which is a layer of light-sensitive silver halide crystals on backing material (for black-and-white photographs), or a series of emulsions (for color photographs; Wolf and Dewitt, Lillesand et al.). Filters also play an important role in determining the type of information recorded by the camera, and consist of a layer of dyes that selectively absorb and transmit target wavelengths.
As in any camera, the film is protected until briefly exposed to light through a lens and filter, during which the silver halide crystals and dyes react based on the degree of reflectance from features on the ground that fall within the camera's frame, or field of view (Lillesand et al.). Aerial photographs are captured most commonly as panchromatic (black and white), color, or false-color infrared; however, various other types of electromagnetic radiation can also be recorded onto photographic film with the use of different emulsions and filters (Cohen et al.).
Obtaining a photograph with an appropriate amount of contrast, or tonal variation, is paramount for accurate analysis or interpretation. Photographic contrast, or the range of values within the photograph, is a product of the film's emulsion type, the degree of exposure to light, and the film development conditions (Wolf and Dewitt). Contrast is also directly related to radiometric resolution, which is defined as the smallest detectable difference in exposure, or measurable difference in reflectance levels (Lillesand et al.).
Generally, when exposure and development conditions are ideal, any decreases in radiometric resolution (smaller detectable differences in tone) will result in greater contrast within the photograph.
Of fundamental importance to the quality of aerial photographs is the camera used to obtain the images. Two broad types of airborne cameras are used: film-based and digital cameras (table 1). The most common type of camera used in aerial photography is the film-based, single-lens frame camera, with lenses of high geometric quality to minimize distortion (Wolf and Dewitt). Aerial cameras must take photographs of features from great distances; therefore, the focal length of the lens (the distance from the lens to the film) is fixed to focus reflectance from effectively infinite distances away (Wolf and Dewitt, Lillesand et al.).
The most common focal length for aerial cameras is millimeters, but longer focal lengths may be used to capture imagery from higher altitudes, primarily for aerial mosaics. Aerial digital cameras are quite similar in structure; however, reflectance is recorded with electronic sensors and stored digitally instead of on film. Although images captured by airborne digital cameras are not technically photographs, such imagery will be referred to as digital photography in this article.
The scale of an aerial photograph is a function of camera focal length and the flying height of the aircraft (Cohen et al.). Scale can also refer to the finest or highest spatial unit of resolution (grain), as well as to the size of the entire scene (extent).
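The focal-length/flying-height relationship can be sketched numerically. The following is a minimal illustration under assumed values (a 150 mm lens and made-up flight and terrain heights, none of which come from the text):

```python
# Minimal sketch: photo scale from camera focal length and flying height.
# All numeric values are illustrative assumptions, not from the text.

def photo_scale(focal_length_m, flying_height_m, terrain_elev_m=0.0):
    """Return the representative-fraction scale (as 1/denominator)."""
    height_above_ground = flying_height_m - terrain_elev_m
    return focal_length_m / height_above_ground

def ground_distance(photo_distance_m, scale):
    """Convert a distance measured on the photo to a ground distance."""
    return photo_distance_m / scale

# A hypothetical 150 mm lens flown 3000 m above the terrain gives 1:20,000.
scale = photo_scale(focal_length_m=0.150, flying_height_m=3150.0, terrain_elev_m=150.0)
print(f"scale = 1:{1 / scale:,.0f}")

# A 5 mm feature on a 1:20,000 photo corresponds to 100 m on the ground.
print(f"ground distance = {ground_distance(0.005, scale):.0f} m")
```

Note that scale varies across a single frame wherever terrain elevation varies, which is one reason orthorectification (discussed later) is needed for measurement.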
The finest unit of resolution on a film-based photograph is not represented by uniform pixels (the smallest spatial unit of resolution within an image), as is the case with airborne digital imagery or satellite imagery, but instead depends on the clusters of silver halide grains within the emulsion, which tend to be irregularly sized and unevenly distributed (Lillesand et al.).
Because silver halide grains are smaller than most digital detectors, film-camera resolution is often finer than digital-camera resolution (Paine and Kiser). However, the resolution of some current digital cameras can be comparable to film resolutions for systems with similar formatting and scale (Lillesand et al.).
Also of relevance to scale is the minimum mapping unit (MMU), which represents the size of the smallest entity to be mapped. The MMU is often established as part of the classification system; both the scale of the photographs and the grain will influence its definition (table 2).
Common minimum mapping units and the general uses of photographs taken at different scales (Lillesand et al.). Photographs can be grouped according to their geometry as either vertical or oblique. Vertical photographs are taken parallel to the ground, with the optical axis of the camera pointed directly downward. Because of the variable conditions during photograph collection (wind, turbulence, etc.), however, most photographs are at least slightly tilted.
Tilted images are obtained at an angle, meaning that the optical axis of the camera diverges more than 3 degrees from the vertical (Jensen), shifting the normally central focus of a photograph to another location and thereby shifting the positions of certain features (Avery and Berlin). In contrast, oblique photographs are acquired with a deliberate deviation from a vertical orientation.
Although oblique landscape photographs, acquired from high points on the landscape (such as during land surveys) or from airborne platforms, can predate aerial photographs by decades and often provide rare historical information, they are more challenging to analyze systematically.
Therefore, our discussion is limited to the use of vertical aerial photographs. Two closely related disciplines with distinct end goals are involved in aerial photography: photogrammetry and aerial photograph interpretation.
Photogrammetry (also called metric photogrammetry) is concerned with obtaining exceptionally precise quantitative measurements from aerial photographs, whereas photographic interpretation (or interpretive photogrammetry) focuses more on the recognition, identification, and significance of features on photographs (Wolf and Dewitt, Paine and Kiser). Photogrammetric methods are highly precise, and much of this discipline revolves around techniques to address and correct photographic errors.
Interpretation methods have also been extensively developed and are relevant for understanding the types of ecological information that can be derived from aerial photographs. Principles of both disciplines are addressed here, but we focus primarily on photograph interpretation and classification. Film-based photographs may be converted into digital format through scanning (Wolf and Dewitt). Photogrammetric scanners convert analog images (continuous-tone photographs) into digital files represented as pixels (Wolf and Dewitt). An inherent drawback of scanning photographs is a potential loss of radiometric (tonal) variation and spatial resolution from the photograph (Warner et al.).
Thus, it is crucial that the scanning resolution (both spatial and radiometric) is sufficient to create a geometrically and visually accurate representation of the original aerial photograph. It is primarily the physical characteristics of the film and the scale of the aerial photograph that limit the resolvable scanning resolution (dots per inch; table 3; Jensen); however, other factors, such as atmospheric clarity and scene contrast, can also affect the resolution of photographs.
Scanning at too low a resolution will result in loss of information, whereas a needlessly high scanning resolution will lead to digital files of enormous size and storage requirements. Scanning has the advantage that any subsequent interpretation can be assisted by software capable of providing systematic analyses (Fensham and Fairfax). Relationship between scanner resolution and ground resolution for multiple scales of aerial photography.
Adapted from Jensen. Despite the great utility of aerial photographs, it is important to note that errors often occur during the collection and digitization of photographs, and these can limit their use (Cohen et al.).
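The scanner-resolution/ground-resolution relationship tabulated above follows directly from the photo scale: the ground size of one scanned dot is the dot pitch on the film multiplied by the scale denominator. A small sketch, using a hypothetical 1:20,000 photograph (the scale and dpi values are illustrative, not from the table):

```python
# Sketch of the scanner-resolution / ground-resolution relationship.
# Values are illustrative assumptions.

INCH_M = 0.0254  # metres per inch

def ground_resolution_m(scale_denominator, dpi):
    """Ground size (m) of one scanned dot for a photo at 1:scale_denominator."""
    dot_pitch_m = INCH_M / dpi          # physical size of one dot on the film
    return dot_pitch_m * scale_denominator

for dpi in (600, 1200, 2400):
    print(f"1:20,000 photo at {dpi} dpi -> "
          f"{ground_resolution_m(20000, dpi):.2f} m per dot")
```

Doubling the scanning dpi halves the ground dot size but quadruples the file size, which is the trade-off the text describes.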
While these inaccuracies rarely render aerial photographs useless (provided appropriate precautions were taken during photographic acquisition, storage, and digitization), an understanding of the major sources of error is crucial for accurate analysis.
Typically, for many ecological management purposes, geometric errors and radiometric errors are most relevant, as they may inaccurately represent photographic features. Therefore, we examine the major sources and types of photographic errors in four main categories: errors can be classified as either geometric or radiometric in origin, and as either systematic or random in form (table 4).
Common errors associated with the use of aerial photographs. Source: Adapted from Paine and Kiser. Geometric (or positional) errors alter the perceived location and size of features on a photograph. Geometric errors can occur because of problems with the equipment used to capture the photographs (Wolf and Dewitt), the stability of the airborne platform, flying and shutter speeds (Paine and Kiser), and the location being photographed.
Relief displacement, in particular, results in features at higher elevations appearing larger than similarly sized features located at lower elevations (Aronoff). However, relief displacement is also what enables three-dimensional viewing of overlapping stereo pairs (called parallax), which aids manual photograph interpretation by allowing visualization of topographic relief (Jensen, Paine and Kiser, Aronoff). While geometric distortion is utilized in the manual interpretation process, geometric errors are often problematic for digital analyses.
Radiometric errors (errors in tone or color) can be caused by the vantage point, condition, and calibration of the camera, as well as by the types of filter and film emulsion (Jensen). Environmental sources of radiometric variability include the hour and season of image capture (which affect the angle of the sun), which can cause shadow or glare. Atmospheric interference from clouds and haze can also cause radiometric errors (Cohen et al.). In addition, the geometry of the airborne platform and camera can cause variability in brightness values, which can be further confounded by sun angle, platform position, and topographic variation (Cohen et al.).
Next, we present some basic methods for addressing the most relevant geometric and radiometric errors, recognizing that additional errors can also affect aerial photographs (table 4). For most digital classification and mapping purposes, it is necessary to use orthorectification procedures to correct geometric displacement errors and provide spatial reference.
Orthorectification involves the spatial manipulation of a digitized or digital photograph into an orthophoto, using horizontal (x, y) and vertical (z) map coordinates to accurately represent distances, angles, and areas (Lillesand et al.).
This process differs from georeferencing, which assigns only horizontal map (x, y) coordinates to an image. The most basic need for correcting these geometric errors is a reference data set, or a set of reference coordinates, commonly derived from existing topographic maps, GIS (geographic information system) data sets, satellite imagery, orthophotos, or orthophoto mosaics.
Highly accurate reference and control data are critical because the spatial accuracy of the corrected product depends on the geometric quality of the reference layer. Reference data are used to orient the photograph to its true position through the selection of ground control points (GCPs): locations or features easily identifiable on both the reference data and the uncorrected photograph, ideally distributed evenly throughout the entire scene.
The target aerial photograph lacking spatial reference is shifted or warped to its true spatial position by resampling the data using the GCPs as a guide. Various resampling algorithms exist; the most common are nearest neighbor (simplest and fastest), bilinear interpolation, and cubic convolution (which yields the smoothest image, yet is computationally intensive), and these are available within most standard orthorectification software. It is important to note that orthorectification and georeferencing are time consuming, particularly for large sets of aerial photographs.
Geometric correction of historic photographs can be particularly challenging, because changes in land cover and feature position over time can make GCP identification difficult.
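The GCP-fitting step at the heart of georeferencing can be sketched with a simple six-parameter affine model estimated by least squares. This is only an illustration (full orthorectification also corrects relief displacement using a terrain model), and all coordinates below are invented:

```python
import numpy as np

# Sketch: estimate a 6-parameter affine transform (pixel -> map coordinates)
# from ground control points (GCPs) by least squares. Real orthorectification
# additionally corrects relief displacement with an elevation model; this
# only illustrates the GCP-fitting step. All coordinates are made up.

def fit_affine(pixel_xy, map_xy):
    """Solve map_x = a*px + b*py + c and map_y = d*px + e*py + f."""
    px = np.asarray(pixel_xy, float)
    A = np.column_stack([px, np.ones(len(px))])          # columns: px, py, 1
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(map_xy, float), rcond=None)
    return coeffs                                         # shape (3, 2)

def apply_affine(coeffs, pixel_xy):
    px = np.asarray(pixel_xy, float)
    return np.column_stack([px, np.ones(len(px))]) @ coeffs

# Four hypothetical GCPs: pixel coordinates and their known map coordinates.
gcp_pixels = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
gcp_map    = [(500000, 4200000), (500500, 4200000),
              (500000, 4199500), (500500, 4199500)]

coeffs = fit_affine(gcp_pixels, gcp_map)
print(apply_affine(coeffs, [(500, 500)]))   # centre pixel -> map position
```

With more GCPs than parameters, the least-squares residuals give a useful check on GCP quality, which is why evenly distributed, well-identified control points matter.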
In addition, orthorectification procedures can distort spectral data; therefore, spatial referencing is commonly applied postclassification. The most common radiometric procedures applied to digitized aerial photographs involve manipulation of the image histogram (the distribution of tonal and radiometric values for the entire photograph or image). Contrast (or histogram) stretching is often used to improve the visual appearance of aerial photographs, and alters the frequency distribution of the original pixel values to allow better differentiation among unclear or hazy regions.
Contrast enhancement includes procedures such as image dodging (which equalizes dark and light areas across an image for a monochromatically balanced product), saturation, and sharpening. Photographs acquired at various times of day are particularly problematic, as radiometric response is highly dependent upon sun angle and atmospheric conditions.
Normalization techniques are available that identify similar land-cover types across photographs taken under various conditions and resample problematic photographs based on the tonal distributions of photographs with more ideal contrast.
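A basic form of the histogram stretching described above is a percentile-based linear stretch, sketched below on a synthetic low-contrast image (the percentile cutoffs and image values are arbitrary choices for illustration):

```python
import numpy as np

# Minimal sketch of a linear contrast (histogram) stretch for a scanned
# grayscale photograph: values between chosen low/high percentiles are
# remapped to the full 0-255 range, clipping the tails.

def linear_stretch(img, low_pct=2, high_pct=98):
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(float) - lo) / (hi - lo)   # 0..1 between cutoffs
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)

rng = np.random.default_rng(0)
hazy = rng.normal(120, 10, (100, 100))   # synthetic low-contrast "hazy" image
out = linear_stretch(hazy)
print(out.min(), out.max())              # tails now reach 0 and 255
```

Histogram matching, used for normalizing across photographs, extends the same idea by remapping one image's cumulative histogram onto a reference image's.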
Histogram manipulation can be achieved using a variety of software, such as Adobe Photoshop and most standard image-processing and analysis programs. Traditionally, information has been obtained from aerial photographs through manual interpretation (table 5).
Over the years, manual interpretation has evolved from plastic overlays on hard-copy images to soft-copy systems and digitized photographs (Avery and Berlin, Wolf and Dewitt). Regardless of the approach, manual interpretation typically involves delineation of polygon boundaries (areas with similar properties) on a stereo pair and the subsequent classification of those polygons by a trained specialist.
A variety of key characteristics are used to delineate and classify polygons, including tone or color, shape, size, pattern, texture, shadows, site, and context (figure 1; Avery and Berlin, Lillesand et al.). Interestingly, these characteristics can help identify important ecological features and can also be linked to various concepts in ecology (table 6). Although we discuss these eight characteristics separately, manual interpretation often requires some combination of them for feature identification.
Comparative advantages and disadvantages of manual aerial photograph interpretation, conventional pixel-based analysis, and object-based classification techniques. The eight primary aerial photograph characteristics used in manual interpretation, related ecological features, and examples of corresponding digital methods that may also be useful for analysis of these attributes.
Tone or color. Variation and relative differences in tone or color on photographs (radiometric properties) are the primary characteristics enabling feature identification. For example, foliage of deciduous tree species often reflects more light and appears brighter than that of coniferous species, which are darker because they reflect less light (figure 1). Tone or color can also be used to make inferences about the state or condition of certain features. Surficial deposits with dark tones may suggest poor drainage (water absorbs and transmits energy) and high organic-matter content, in comparison with lightly colored deposits that are reflective and usually indicate well-drained materials such as sand or gravel (Keser). Technically, both tone and color relate to the intensity of light reflected by an object or feature, with tone used to describe grayscale variation on black-and-white (panchromatic) photographs and color referring to the hue characteristics of color photographs (Avery and Berlin). A result of complex interactions between the sun's radiation and the Earth's surface, tone and color are greatly influenced by conditions during photograph acquisition and digitization (Avery and Berlin). Therefore, it is important to compare photographic tone and color and land-cover relationships between adjacent photographs or among all photographs used in a project.
The relative and absolute size of objects or features is important not only for identifying both cultural and natural features but also for making ecological inferences about the features being identified. Size is particularly significant because of its direct connection to spatial scale, a fundamental component of understanding ecological patterns and processes.
In ecological applications, scale is often used to describe the size, or spatial unit, of a focal entity or phenomenon. Analyses can focus on multiple spatial scales, such as at the scale of individual tree crowns, where the sizes of individual trees differ according to age class (figure 1), or at broader scales, where the relative sizes of habitat patches can provide indicators of suitability for different species (Turner et al.).
The absolute size of various features may also have important ecological implications. Riparian vegetation width is an illustration of this point, because riparian size is important for quantifying local protection afforded to a stream, such as in highly modified watersheds Roth et al.
Furthermore, spatial characteristics such as the distribution of canopy gaps and other forest structural properties, which are important parameters for many wildlife species, can be identified from high-spatial-resolution aerial photographs (Fox et al.). Shape is particularly useful for identifying cultural features, which usually have a specific geometry and obvious edges, as well as many other natural features with distinctive forms (Avery and Berlin). In particular, shape can be used to identify various geomorphic features such as fluvial landforms.
A relevant characteristic over a wide range of spatial scales, shape results from the contrast between the border of a specific feature or patch and the surrounding environment. At fine scales, aerial photograph interpreters look for recognizable shapes to classify features, such as crown shape to identify tree species, or geometry to identify anthropogenic features.
At broader scales, patch shape can be used to distinguish between anthropogenic land use (logged stands or agriculture) and natural disturbances (fire or insect damage), and can provide indicators of landscape complexity. Interestingly, patch edges influence many important ecosystem and landscape processes, such as habitat quality (Ries et al.). Image texture is particularly useful for landform and land-cover classification, and is related to variation in biophysical parameters, landscape heterogeneity, and forest structural characteristics (Wulder). It can also be helpful for predicting species distribution and biodiversity patterns (St-Louis et al.).
Aerial photograph interpreters describe texture in terms of smoothness and roughness, and relative variation in this attribute can be used to distinguish between features (Avery and Berlin). For example, the textures of different forested stands provide visual indicators of stand complexity, age, and crown closure (figure 1).
Texture is commonly used to help differentiate between tree species (Trichon) and other features that may otherwise have similar reflectance and dimensional characteristics (Avery and Berlin, Lillesand et al.).
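One simple digital analogue of interpreted texture is the standard deviation of tones within a small moving window: smooth canopies yield low values, rough ones high. A plain-numpy sketch (production code would use an optimized filter, and the test images here are synthetic):

```python
import numpy as np

# Sketch: texture as the local standard deviation of tones in a moving
# window. Smooth surfaces give low values; rough surfaces give high values.

def local_std(img, win=3):
    img = np.asarray(img, float)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].std()
    return out

flat = np.full((10, 10), 100.0)                   # uniform, "smooth" texture
rough = np.indices((10, 10)).sum(0) % 2 * 50.0    # checkerboard, "rough"
print(local_std(flat).mean(), local_std(rough).mean())
```

Window size matters: it should roughly match the grain of the pattern being distinguished (e.g., individual crowns versus whole stands).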
Texture is also useful for identifying soil types, rangeland vegetation, various hydrologic characteristics, and agricultural crops (Lillesand et al., Berberoglu et al.). One disadvantage of the artificial neural network (ANN), however, is that it can be computationally demanding when large data sets are used to train the network, and sometimes no result is achieved at all, even after lengthy computation, because training becomes trapped in a local minimum.
A fuzzy classification approach is usually useful in mixed-class areas and has been investigated for the classification of suburban land cover from remote sensing imagery (Zhang and Foody), the study of medium- to long-term (10-50 years) vegetation changes (Okeke and Karnieli), and biotic-based grassland classification (Sha et al.). Fuzzy classification is a probability-based, rather than crisp, classification.
Rather than implementing a per-pixel classifier to produce a crisp or hard classification, Xu et al. adopted a soft classification approach. Theoretically, probability-based (soft) classification is more reasonable for composite units, since those units cannot simply be assigned to one type but rather to a probability for each type. While soft classification techniques are inherently appealing for mapping vegetation transition, there is an unresolved issue of how best to present the output.
Rather than imposing subjective boundaries on the end-member communities, transition zones of intermediate vegetation classes between the end-member communities have been adopted to better represent the softened classification result (Hill et al.).
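The soft/hard distinction can be made concrete with a toy membership model (not any specific published method): each pixel receives a membership grade per class, inversely related to its spectral distance from class-mean signatures, and the grades sum to 1. Hardening by argmax would discard exactly the gradient information that transition-zone mapping preserves. Class names and signatures below are invented:

```python
import numpy as np

# Toy soft classifier: membership grades from spectral distance to
# hypothetical class-mean signatures (two bands, two classes, all made up).

class_means = np.array([[0.10, 0.40],     # e.g. "wet meadow" (hypothetical)
                        [0.30, 0.20]])    # e.g. "dry grassland" (hypothetical)

def soft_classify(pixels, means, beta=20.0):
    """Return per-pixel class membership grades that sum to 1."""
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    w = np.exp(-beta * d)                 # closer class -> larger weight
    return w / w.sum(axis=1, keepdims=True)

pixels = np.array([[0.10, 0.40],          # pure class-0 pixel
                   [0.20, 0.30]])         # transitional pixel
probs = soft_classify(pixels, class_means)
print(probs.round(2))
```

The transitional pixel receives near-equal grades, which a transition-zone legend can represent directly instead of forcing a single label.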
The decision tree (DT) is another approach to vegetation classification, matching the spectral features or combinations of spectral features from images with those of possible end members of vegetation types (at the community or species level).
DT is computationally fast, makes no statistical assumptions, and can handle data represented on different measurement scales. Other studies have integrated soft classification with the DT approach (Xu et al.). Pal and Mather studied the utility of DT classifiers for land-cover classification using multispectral and hyperspectral data, and compared the performance of the DT classifier with that of ANN and maximum-likelihood (ML) classifiers under changes in training data size, choice of attribute selection measures, pruning methods, and boosting.
They found that the use of DT classifiers with high-dimensional hyperspectral data is limited, while good results were achieved with multispectral data. Under some circumstances, DT can be very useful when vegetation types are strictly associated with other natural conditions. For example, some vegetation species may grow only in areas above a certain elevation.
Such rules can be integrated within a DT to assist the classification process if the ancillary data are available.
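The elevation example can be written as a tiny hand-built decision tree mixing a spectral feature with an ancillary layer. All thresholds, class names, and samples below are invented for illustration; real trees are usually induced from training data rather than hand-coded:

```python
# Toy decision tree combining one spectral index (NDVI) with ancillary
# elevation data, echoing the rule "species X only grows above a certain
# elevation". Thresholds and class names are hypothetical.

def classify(ndvi, elevation_m):
    """Hand-built decision tree over one spectral and one ancillary variable."""
    if ndvi < 0.2:
        return "non-vegetated"
    if elevation_m > 1800:           # ancillary rule: subalpine species only
        return "subalpine shrub"     # occur above this (made-up) level
    return "lowland forest"

samples = [(0.1, 500), (0.6, 2100), (0.7, 900)]
print([classify(n, e) for n, e in samples])
# -> ['non-vegetated', 'subalpine shrub', 'lowland forest']
```

The structure also shows why DTs handle mixed measurement scales naturally: each split tests one variable in its own units.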
In a study of monitoring natural vegetation in the Mediterranean, Sluiter investigated a wide range of vegetation classification methods for remote sensing imagery.
First, random forests and support vector machines were explored; both performed better than traditional classification techniques. Second, rather than using only per-pixel spectral information to extract vegetation features, Sluiter exploited the spatial domain, namely contextual information.
It was found that when a contextual technique named SPARK (SPAtial Reclassification Kernel) was implemented, vegetation classes that were not distinguished at all by conventional per-pixel methods could be successfully detected.
A similar result was noted by Im and Jensen, who used a three-channel neighborhood correlation image model to detect vegetation changes through the relation between pixels and their contextual neighbors. Building on SPARK, Sluiter integrated spectral, ancillary, and contextual information into a spatiotemporal image classification model called the ancillary data classification model (ADCM). The ADCM method increased the overall accuracy, as well as individual class accuracies, in identifying heterogeneous vegetation classes.
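The simplest illustration of using a pixel's neighborhood (this is not SPARK or ADCM themselves, just the underlying idea) is a post-classification majority filter, which reconsiders each label against the majority in its 3x3 window and thereby removes the salt-and-pepper noise typical of per-pixel classifiers:

```python
import numpy as np

# Sketch of contextual post-classification: each pixel's label is replaced
# by the majority label in its 3x3 neighborhood. A deliberately simple
# stand-in for kernel-based contextual reclassification.

def majority_filter(labels):
    padded = np.pad(labels, 1, mode="edge")
    out = np.empty_like(labels)
    for i in range(labels.shape[0]):
        for j in range(labels.shape[1]):
            window = padded[i:i + 3, j:j + 3].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[i, j] = vals[counts.argmax()]
    return out

noisy = np.zeros((5, 5), dtype=int)
noisy[2, 2] = 1                       # isolated misclassified pixel
print(majority_filter(noisy))         # the lone "1" is removed
```

Kernel-based contextual methods go further by using the pattern of neighboring classes, not just their counts, to reassign labels.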
As stated above, many classification methods and algorithms have been developed for image classification under a broad range of specific applications. Sometimes the quality of classification results may increase when multiple methods or algorithms are employed jointly. However, caution should be exercised when applying improved classifiers, because these methods were often designed to address specific challenges and solve unique problems.
Moreover, discrimination of vegetation species from single imagery is only achievable where a combination of leaf chemistry, structure, and moisture content forms a unique spectral signature. Thus, imagery classification relies on the successful extraction of a pure spectral signature for each species, which is often dictated by the spatial resolution of the observing sensor and the timing of observation (Asner and Heidebrecht; Varshney and Arora). In short, the search for improved image classification algorithms remains an active field in remote sensing, because no classification method applies universally.
In recent years, more advanced methods reflecting the latest remote sensing techniques for vegetation mapping have appeared in the literature. Among them, applications of hyperspectral imagery and of multiple-image fusion to extract vegetation cover are developing rapidly and thus deserve special attention. Rather than relying on multispectral imagery, vegetation extraction from hyperspectral imagery has been increasingly studied.
Compared with multispectral imagery, which has only a dozen or so spectral bands, hyperspectral imagery includes hundreds of spectral bands. AVIRIS is a unique optical sensor that delivers calibrated images of the upwelling spectral radiance in contiguous spectral channels (bands) with wavelengths ranging from to nm.
The information within those bands can be utilized to identify, measure, and monitor constituents of the Earth's surface. In one marsh-mapping study, the results were satisfactory considering the success in classifying two main marsh vegetation species, Spartina and Salicornia. A similar work was conducted by Rosso et al. The results showed that five Brazilian sugarcane varieties were discriminated using EO-1 Hyperion data, implying that hyperspectral imagery is capable of separating plant species that may be very difficult to distinguish using multispectral images.
Although the general procedures (preprocessing and classification) for hyperspectral images are the same as those required for multispectral images, the processing of hyperspectral data remains a challenge.
Specialized, cost-effective, and computationally efficient procedures are required to process hundreds of bands (Varshney and Arora). To extract vegetation communities or species from hyperspectral imagery, a set of vegetation signature libraries is usually required (Xavier et al.).
For certain applications, vegetation libraries for particular communities or species might already be available. In most cases, however, the spectral signature library is established from ground-truth data collected alongside the hyperspectral data, or through spectrometers.
As such, vegetation mapping using hyperspectral imagery must be well designed, so that synchronous field data are collected for creating imagery signatures. The information provided by each individual sensor may be incomplete, inconsistent, and imprecise for a given application. Image fusion of remotely sensed data with multiple spatial resolutions is an effective technique with good potential for improving vegetation classification.
It is important for accurate vegetation mapping to efficiently integrate remote sensing information of different temporal, spectral, and spatial resolutions through image fusion. Many studies have focused on the development of new fusion algorithms (Amarsaikhan and Douglas; Zhang; Zhu and Tateishi). For example, Li et al. studied the fusion of high-resolution panchromatic and low-resolution multispectral remote sensing images. Based on the statistical fusion of multitemporal satellite images, Zhu and Tateishi developed a new temporal fusion classification model for land-cover classification and verified its improved performance over conventional methods.
Behnia compared four frequently adopted image fusion algorithms, namely the principal component transform, the Brovey transform, smoothing filter-based intensity modulation, and HSI, and concluded that each of them improves spatial resolution effectively but distorts the original spectral signatures to some degree. To solve the color distortion associated with some existing techniques, Wu et al. proposed a modified algorithm; rather than designing new fusion algorithms, Colditz et al. evaluated existing ones. In brief, image fusion opens a new way to extract high-accuracy vegetation cover by integrating remote sensing images from different sensors.
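The Brovey transform named above is simple enough to sketch: each (upsampled) multispectral band is scaled by the ratio of the panchromatic band to the sum of the multispectral bands, injecting spatial detail at the cost of some spectral distortion. The arrays below are small synthetic examples:

```python
import numpy as np

# Sketch of the Brovey transform for pansharpening. ms holds multispectral
# bands already resampled to the panchromatic grid; pan is the panchromatic
# band. Data are synthetic, for illustration only.

def brovey(ms, pan, eps=1e-9):
    """ms: (bands, H, W); pan: (H, W). Returns fused (bands, H, W)."""
    total = ms.sum(axis=0) + eps          # eps guards against divide-by-zero
    return ms * (pan / total)[None, :, :]

ms = np.random.default_rng(1).uniform(0.1, 0.5, (3, 4, 4))
pan = ms.sum(axis=0) * 1.2                # synthetic pan, correlated with MS
fused = brovey(ms, pan)
print(np.allclose(fused.sum(axis=0), pan))
```

By construction the fused bands sum to the panchromatic band, so relative band ratios (hence hue) are preserved while overall intensity follows the high-resolution pan data; the spectral distortion Behnia notes arises because absolute radiometry is rescaled.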
However, challenges in fusion strategy, including the development of new fusion algorithms, still require further study. The products of vegetation mapping derived from remotely sensed images should be objectively verified and communicated to users so that they can make informed decisions on whether and how the products can be used. A vegetation map derived from image classification is considered accurate if it provides a true representation of the region it portrays (Foody; Weber). Four significant stages can be identified in the evolution of accuracy assessment methods (Congalton). In the first stage, accuracy assessment was done by visual inspection of derived maps.
This method is highly subjective and often inaccurate. The second stage used a more objective, non-site-specific method in which the areal extent of each class in the derived thematic map (e.g., the percentage of the area mapped as each vegetation group) was compared with its extent in the reference data. However, there is a major problem with this non-site-specific approach: correct overall proportions of vegetation groups do not imply that those groups are mapped in the correct locations.
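The weakness of the non-site-specific approach can be shown with a toy example: two maps with identical class proportions but zero locational agreement. The arrays below are invented purely for illustration.

```python
import numpy as np

def area_proportions(class_map, classes):
    """Fraction of the map occupied by each class (a non-site-specific summary)."""
    total = class_map.size
    return {c: float((class_map == c).sum()) / total for c in classes}

# Two 4x4 maps with identical class proportions but classes in swapped locations
derived   = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]])
reference = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]])

print(area_proportions(derived, [0, 1]))    # {0: 0.5, 1: 0.5}
print(area_proportions(reference, [0, 1]))  # {0: 0.5, 1: 0.5}
# Proportions agree perfectly, yet every single pixel is mislabeled --
# exactly the flaw of the non-site-specific approach described above.
print((derived == reference).mean())        # 0.0
```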
In the third stage, the accuracy metrics were built on a comparison of the class labels in the thematic map with the ground data for the same locations.
Measures such as the percentages of cases correctly and incorrectly classified were used to evaluate classification accuracy. Accuracy assessment in the fourth stage refined the third-stage approach further.
The defining characteristic of this stage is the wide use of the confusion (or error) matrix, which describes the agreement between the derived classes and the reference data using measures such as overall accuracy and the kappa coefficient. Additionally, a variety of other measures are available or can be derived from the error matrix.
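A minimal sketch of the error matrix and its standard derived measures (overall accuracy, the kappa coefficient, and per-class producer's and user's accuracies), using made-up reference and mapped labels:

```python
import numpy as np

def confusion_matrix(ref, pred, n_classes):
    """Error matrix: rows index the reference classes, columns the mapped classes."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(ref, pred):
        m[r, p] += 1
    return m

def overall_accuracy(m):
    """Fraction of samples on the matrix diagonal (correctly classified)."""
    return m.trace() / m.sum()

def kappa(m):
    """Kappa coefficient: agreement beyond what chance alone would produce."""
    n = m.sum()
    po = m.trace() / n                                    # observed agreement
    pe = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

def producers_users(m):
    """Per-class producer's (omission) and user's (commission) accuracies."""
    producers = np.diag(m) / m.sum(axis=1)
    users = np.diag(m) / m.sum(axis=0)
    return producers, users

# Invented reference and mapped labels for ten samples and three classes
ref  = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
pred = np.array([0, 0, 1, 1, 1, 1, 2, 2, 2, 0])
m = confusion_matrix(ref, pred, 3)
print(overall_accuracy(m))   # 0.8
print(round(kappa(m), 3))
print(producers_users(m))
```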
For example, the accuracy of individual classes can be derived if the user is interested in specific vegetation groups. Although it is agreed that accuracy assessment is important for qualifying the results of image classification, it is probably impossible to specify a single, all-purpose measure of classification accuracy. The confusion matrix and its derived measures, for instance, may seem reasonable and feasible, yet they may not be applicable under some circumstances, especially in vegetation mapping at coarse scales (Cingolani et al.).
One problem with pixel-based confusion matrix evaluation is that a pixel at coarse resolution may contain several vegetation types.
As shown in the accompanying figure, the ellipse located at the center of the pixel may represent the sampled area (reference class A). Since it is impractical to sample an entire pixel in large-scale mapping, the pixel would most likely be labeled as class B in image classification, given the larger proportion of its area that class B occupies. The derived class B and the reference class A therefore do not match, and this mismatch introduces apparent classification error.
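One way such mixed pixels can be handled quantitatively is linear spectral unmixing, which estimates the fraction of each class within a pixel rather than forcing a single label. The endmember spectra below are hypothetical, and this is only a minimal sketch of the idea.

```python
import numpy as np

def unmix(pixel, endmembers):
    """
    Linear spectral unmixing: solve pixel ~= endmembers @ fractions by least
    squares, then clip and renormalize so fractions are non-negative and sum to 1.
    endmembers: (bands, n_endmembers) matrix of pure-class spectra
    """
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    fractions = np.clip(fractions, 0, None)
    return fractions / fractions.sum()

# Hypothetical pure spectra (columns) for classes A and B over four bands
E = np.array([
    [0.10, 0.40],
    [0.20, 0.30],
    [0.50, 0.10],
    [0.60, 0.20],
])
# A noiseless mixed pixel that is 30% class A and 70% class B
mixed = E @ np.array([0.3, 0.7])
print(unmix(mixed, E))   # approximately [0.3, 0.7]
```

In this framing, the coarse pixel from the figure would be reported as partly A and partly B, so a field sample of class A inside it no longer counts as an outright error.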
In this case, non-site-specific accuracy measures might seem more suitable, were it not for the limitation mentioned previously. Moreover, rather than using field samples to test classification accuracy, a widely accepted practice is to use finer-resolution satellite data to assess coarser-resolution products (Cihlar et al.).
How best to evaluate image classification results remains a hotly debated topic today (Foody). [Figure caption: the enveloping square represents a pixel in the imagery; the resulting mismatch between ground reference data and the classified result is typical of pixel-based accuracy assessment, especially in large-scale vegetation mapping.] This paper covered a wide array of topics in vegetation classification using remote sensing imagery.
First, a range of remote sensing sensors and their applications in vegetation mapping were introduced to facilitate the selection of the right remote sensing products for specific applications. Second, techniques of image preprocessing and various classification methods (traditional and improved) were discussed with respect to extracting vegetation features from remote sensing images.
In particular, the extraction of vegetation cover through the application of hyperspectral imagery and image fusion was discussed. Third, a section was dedicated to result evaluation (accuracy assessment) of image classification. Although the coverage of topics was not exhaustive, and not all possible problems were addressed, the basic steps, principles, techniques, and methods of mapping vegetation cover from remote sensing imagery were discussed and supporting references were provided.
In short, remote sensing images are key data sources for earth monitoring programs, considering the great advantages they offer (Nordberg and Evertson). For instance, it is much easier to produce and update vegetation inventories over large regions when aided by satellite imagery and appropriate image analysis.
A growing number of studies have examined a wide variety of vegetative phenomena, including mapping vegetation cover, using remotely sensed data (Duchemin et al.).
However, although remote sensing technology has tremendous advantages over traditional methods in vegetation mapping, we should have a clear understanding of its limitations. As Rapp et al. noted, a well-fitted vegetation classification system should be carefully designed according to the objectives of a study in order to better represent actual vegetation community composition. More specifically, the following points should be taken into consideration when selecting an appropriate vegetation classification system for better classification accuracy (Rapp et al.).
Furthermore, because of these limitations, the to-be-classified vegetation types, whether categorized by physiognomic classification systems (Dansereau) or floristic classification systems (Salovaara et al.), are implicitly assumed to be spectrally distinguishable in the imagery. However, this is not always true, especially when a study area is covered by vegetation of complex forms or at different growth stages, which results in similar spectral responses among different vegetation groups or in spectral variation within the same vegetation group (Sha et al.).
Mapping vegetation under such circumstances is often difficult. One solution is to adopt a more advanced image classification method such as sub-pixel analysis (Lee and Lathrop). Another is to choose higher-resolution imagery acquired by the right remote sensing sensors so as to improve class separability in image classification (Cingolani et al.).
Nevertheless, higher-resolution imagery will most likely increase the cost. Although there are standard methods for image preprocessing, there is no universal image classifier that is uniformly applicable to all applications.
Thus, it is a challenging task, as well as a hot research topic, to apply effective classifiers or to develop new powerful classifiers suitable for specific applications.
Moreover, ancillary data, including field samples, topographical features, environmental characteristics, and other digital geographic information system data layers, have proved very helpful in obtaining more satisfactory results and increasing classification accuracy.
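As a sketch of how an ancillary layer can help, the toy k-nearest-neighbour classifier below appends an elevation feature to two spectral bands. The feature values and classes are invented; the point is only that two spectrally similar classes become separable once the ancillary feature is added.

```python
import numpy as np

def knn_classify(sample, train_X, train_y, k=3):
    """Label a sample by majority vote among its k nearest training samples."""
    d = np.linalg.norm(train_X - sample, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Feature vectors: [red, NIR, elevation_km]. The elevation column is the
# ancillary layer; classes 0 and 1 have nearly identical spectra but occur
# at different elevations, so the extra feature separates them.
train_X = np.array([
    [0.10, 0.50, 0.20], [0.12, 0.48, 0.25], [0.11, 0.52, 0.18],   # class 0, lowland
    [0.10, 0.50, 1.20], [0.12, 0.49, 1.30], [0.11, 0.51, 1.25],   # class 1, upland
])
train_y = np.array([0, 0, 0, 1, 1, 1])

print(knn_classify(np.array([0.11, 0.50, 1.22]), train_X, train_y))   # 1
```

With only the two spectral bands, this sample would be essentially equidistant from both classes; the ancillary elevation feature resolves the ambiguity.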
It is advisable to keep in mind that technical improvements (designing more advanced classifiers, acquiring higher-resolution imagery, and so on) cannot by themselves guarantee accurate vegetation maps. It is especially difficult to map vegetation over large areas such as at continental or global scales. Commonly, vegetation cover maps at large scales are compositions of many maps from different sources produced over a long period. It is not surprising that the overall accuracy of such a product is unsatisfactory, as the constituent national or local maps are based on heterogeneous conceptions of vegetation classification and produced at different periods.
Therefore, it is preferable to conduct vegetation classification using data acquired from the same sources and at the same period, and to apply the same processing methods across the entire region. The lack of such consistent data (mainly remotely sensed data and reference data) for large regions often limits the production of high-quality vegetation maps.
Supplementary material is available at Journal of Plant Ecology online.
Remote sensing imagery in vegetation mapping: a review. Zongyao Sha. Mei Yu.

Table 1. Main features of image products from the different sensors (product/sensor, features, and vegetation mapping applications).

- Landsat TM: medium to coarse spatial resolution with multispectral data (30 m for the multispectral bands, coarser for the thermal infrared band), from Landsat 4 and 5 to the present; temporal resolution is 16 days. Applications: regional-scale mapping, usually capable of mapping vegetation at the community level.
- Regional-scale mapping, usually capable of mapping vegetation at the community level; some dominant species can possibly be discriminated.
- SPOT: a full range of medium spatial resolutions from 20 m down to 2.5 m; SPOT 1, 2, 3, 4, and 5 were launched successively.

But the image frequency has also begun to enable rapid detection of deforestation, illegal mining, and other changes in the landscape, as well as more efficient and accurate counting of wildlife populations. NASA is also part of this trend.
NASA plans to launch a mission called GEDI (the Global Ecosystem Dynamics Investigation) using lidar, a laser-based remote sensing technology already familiar to ecologists for mapping 3-D vegetation structure from airplanes.
This time, from the International Space Station, GEDI will enable scientists to determine the height and structure of the forest in any given location and precisely map aboveground biomass and carbon storage — all without applying for grants to hire an airplane or spending days flying transects.
In addition, it will improve weather and climate modeling and provide detailed measurement of temperate glaciers, lakes, and rivers for better management of water resources. It happened for him a few years ago while he was giving an undergraduate lecture about the movements of radio-tagged seals on Cape Cod. "I was loading data on Google Earth, and just zoomed right in to see where this seal turned up, and lo and behold, the image was good enough to count seals on the beach."
And data from their own long-term radio-tagging study, showing how much time seals typically spend at sea in a given day or season, allowed the researchers to develop an algorithm for calculating the total population, rather than just the part visible on the beach.
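The simplest form of such a haul-out correction divides the observed count by the average fraction of time tagged animals spend hauled out. This is only an illustrative simplification, not the researchers' actual algorithm, and the numbers are invented.

```python
def total_population(counted_on_beach, haulout_fraction):
    """
    Scale a beach count up to a total population estimate using the average
    fraction of time a tagged seal spends hauled out (visible on land).
    Illustrative only; the real estimator accounts for far more variables.
    """
    return counted_on_beach / haulout_fraction

# e.g. 12,000 seals counted in imagery; tags show seals haul out 40% of the time
print(total_population(12_000, 0.40))   # 30000.0
```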
For Cape Cod vacationers feeling that a seal haul-out has crowded them off a favorite beach, or for fishermen losing their catch to seals, news that there are now 50,000 gray seals on the Cape is likely to sound like an invasion. For conservationists, on the other hand, it may not even represent recovery to the original population level.
The long-running debate about the seals can become highly emotional. An accurate count is the essential starting point for deciding among such management options as keeping hands off, paying for a contraceptive darting program, authorizing nonlethal harassment, or even beginning to cull seals.