Remote sensing began with aerial photography in the nineteenth century. Satellite remote sensing has developed and dominated the field since the 1960s. In the last decade, airborne remote sensing, especially UAV-based remote sensing, has advanced significantly and been applied to monitoring natural resources and managing agricultural lands. With this development of remote sensing systems and methods, the remote sensing data product system can be formulated in more detail as a reference for the airborne remote sensing data processing flow.

Image fusion is often needed as well, to fuse images from multiple sources, multiple scales and multiple phases but with the same coverage, in order to enhance later image analysis. For example, fusing a high-resolution panchromatic image with a low-resolution multispectral image can produce a high-resolution multispectral image (see the pan-sharpening sketch at the end of this subsection). Data fusion operates at the data level, the feature level and the decision level (Hall and Llinas 2009). Remote sensing image data fusion is mostly at the pixel level. Feature-level and decision-level fusion can be used in image classification and model-based parameter inversion. In addition, image fusion can be used during image mosaicking to remove exposure differences between image pieces and mosaic artifacts.

UAV remote sensing systems are often operated at very low altitudes, especially in precision agriculture (Huang et al. 2013b), to acquire very high-resolution images (1 mm–5 cm/pixel). On one hand, the images from UAV systems can be less dependent on weather conditions, which may simplify or even eliminate atmospheric correction. On the other hand, each image covers much less area than images from satellites and manned aircraft. To cover a given area, multiple UAV remote sensing images must therefore be mosaicked during image processing (see the stitching sketch below). The coverage with multiple overlapping images, in turn, provides an opportunity to build three-dimensional models of the ground surface with structure from motion (SfM) point clouds through stereo vision (Rosnell and Honkavaara 2012; Turner et al. 2012; Mathews and Jensen 2013) for extraction of surface features such as plant height and biomass volume (Bendig et al. 2014; Huang et al. 2017).

3.2. Remote sensing data analysis

Remote sensing data can be analyzed qualitatively and quantitatively. Qualitative remote sensing data analysis is classification-based. Remote sensing classification is critical for converting remote sensing data into useful information for practical applications. Conventional remote sensing image classification is based on classifying image pixels in unsupervised and supervised modes. Commonly used methods include ISODATA, self-organizing maps, maximum likelihood, and machine learning algorithms such as artificial neural networks and support vector machines (Huang 2009; Huang et al. 2010a). With the development of remote sensing technology toward high-resolution data, conventional pixel-based classification methods cannot meet practical requirements because high-resolution images with more detail may be classified into unknown "blank" spots, which can negatively affect later analysis. Object-based remote sensing image classification (Walter 2004; Blaschke 2010) provides an innovative alternative: image segmentation merges neighboring pixels with similar spectral signatures into objects, which are then classified as "pixels" (see the segmentation sketch below).
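As a concrete illustration of pixel-level fusion, the following is a minimal sketch of Brovey-transform pan-sharpening. The random arrays are placeholders standing in for a real co-registered image pair, and the multispectral image is assumed to be already resampled to the panchromatic grid; a production workflow would operate on georeferenced rasters instead.

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-transform pan-sharpening: rescale each upsampled
    multispectral band by the ratio of the panchromatic band to the
    per-pixel mean of the multispectral bands.

    ms  : float array of shape (bands, H, W), multispectral image
          already resampled to the panchromatic grid
    pan : float array of shape (H, W), panchromatic image
    """
    intensity = ms.mean(axis=0)                 # per-pixel band mean
    ratio = pan / np.maximum(intensity, 1e-6)   # avoid division by zero
    return ms * ratio                           # shape (bands, H, W)

# Hypothetical example: a 4-band multispectral image upsampled to a
# 1024 x 1024 panchromatic grid.
ms = np.random.rand(4, 1024, 1024)
pan = np.random.rand(1024, 1024)
sharpened = brovey_pansharpen(ms, pan)
```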
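For mosaicking, a minimal sketch using OpenCV's high-level stitching API is shown below. The folder uav_frames/ is a hypothetical set of overlapping UAV frames; the stitcher detects features, estimates the inter-image transforms, and compensates exposure differences across seams, while a real pipeline would typically also georeference the result.

```python
import cv2
import glob

# Load a hypothetical folder of overlapping UAV frames.
images = [cv2.imread(p) for p in sorted(glob.glob("uav_frames/*.jpg"))]

# SCANS mode assumes a roughly planar scene, as in nadir UAV imagery.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.png", mosaic)
else:
    print(f"Stitching failed with status {status}")
```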
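The following is a minimal object-based classification sketch, assuming scikit-image and scikit-learn are available: SLIC segmentation merges spectrally similar neighboring pixels into objects, each object is summarized by its mean spectrum, and a support vector machine classifies the objects. The random image and label map are placeholders for real training data.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

# Hypothetical 4-band image (H, W, bands) and an illustrative
# per-pixel ground-truth map with three classes.
image = np.random.rand(512, 512, 4)
labels = np.random.randint(0, 3, (512, 512))

# Segment the image into spectrally homogeneous objects ("superpixels").
segments = slic(image, n_segments=2000, compactness=10.0,
                channel_axis=-1, start_label=0)
n_seg = segments.max() + 1

# Represent each object by its mean spectrum per band.
counts = np.bincount(segments.ravel(), minlength=n_seg)
feats = np.stack(
    [np.bincount(segments.ravel(), weights=image[..., b].ravel(),
                 minlength=n_seg) / counts
     for b in range(image.shape[-1])], axis=1)

# Label each object by the majority pixel label inside it, then train
# a support vector machine on the object-level features.
obj_labels = np.array([np.bincount(labels[segments == s]).argmax()
                       for s in range(n_seg)])
clf = SVC(kernel="rbf").fit(feats, obj_labels)
classified = clf.predict(feats)[segments]   # map back to a full image
```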
Quantitative remote sensing data analysis is model-based. Features, such as vegetation indices, extracted from remote sensing data have been modeled empirically against biophysical and biochemical measurements, such as plant height, shoot dry weight and chlorophyll content; with the calibrated models, the biophysical and biochemical parameters can be predicted to estimate biomass and crop yield (a minimal calibration sketch is given below). Radiative transfer is the fundamental theory underlying the development of remote sensing data analysis (Gong 2009). Correspondingly, physically-based model simulation and parameter inversion have been the research focus for understanding the mechanism of interaction between remote sensing signals and ground surface features (see the inversion sketch below). In the last few decades, the PROSPECT leaf optical properties model (Jacquemoud and Baret 1990) and the SAIL canopy bidirectional reflectance model (Verhoef 1984) have been representative in radiative transfer studies for remote sensing plant characterization. The two models have evolved, been expanded and even been integrated (Jacquemoud et al. 2006, 2009), laying the foundation for more advanced studies in this area (Zhao et al. 2010, 2014). Furthermore, remote sensing data assimilation with process-based models, such as crop growth models and soil water models, is an emerging technology for estimating agricultural parameters that are very difficult to invert from a model or remote sensing data alone (Huang et al. 2015a, b, c); an ensemble update sketch is given below.

In recent years, deep learning has developed out of machine learning for remote sensing image classification (Mohanty et al. 2016; Sladojevic et al. 2016). It is strongly believed that deep learning techniques are crucial in remote sensing data analysis, particularly in the age of remote sensing big data (Zhang et al. 2016). With low-level spectral and textural features at the bottom level, the deep feature representation at the top level of a deep artificial neural network can be fed directly into a subsequent classifier for pixel-based classification. This hierarchy handles deep feature extraction from remote sensing big data, which can be used in all parts of remote sensing data processing and analysis.
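A minimal sketch of the empirical modeling step follows, assuming illustrative (not measured) NDVI and shoot-dry-weight values: a linear model is calibrated on ground-truth plots and then used to predict biomass wherever only the remote sensing feature is available.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical field data: plot-level NDVI and matched shoot dry
# weight measurements (g m^-2); values are illustrative only.
ndvi = np.array([0.31, 0.42, 0.55, 0.61, 0.68, 0.74, 0.80]).reshape(-1, 1)
dry_weight = np.array([120., 180., 260., 310., 355., 410., 465.])

# Calibrate the empirical model on the ground-truth plots...
model = LinearRegression().fit(ndvi, dry_weight)
print(f"slope={model.coef_[0]:.1f}, intercept={model.intercept_:.1f}, "
      f"R^2={model.score(ndvi, dry_weight):.3f}")

# ...then predict biomass from the remote sensing feature alone.
predicted = model.predict(np.array([[0.50], [0.65]]))
```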
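Parameter inversion can be sketched as an iterative fit of a forward model to an observed spectrum. The forward_model below is a toy stand-in, not PROSPECT-SAIL; a real study would couple the actual radiative transfer code into the same least-squares loop.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, wavelengths):
    """Toy stand-in for a radiative transfer model: maps two
    biophysical parameters (chlorophyll content, leaf area index)
    to a simulated canopy reflectance spectrum."""
    chl, lai = params
    red_abs = np.exp(-chl / 50.0) * np.exp(-(wavelengths - 670) ** 2 / 2e3)
    nir_scatter = (1 - np.exp(-0.5 * lai)) * (wavelengths > 700)
    return 0.05 + 0.4 * nir_scatter - 0.04 * red_abs

wavelengths = np.arange(450, 951, 10.0)

# Simulate an "observed" spectrum from known parameters plus noise.
true_params = np.array([45.0, 3.2])
observed = (forward_model(true_params, wavelengths)
            + np.random.normal(0, 0.005, wavelengths.size))

# Invert: find the parameters whose simulated spectrum best matches
# the observation within physically plausible bounds.
result = least_squares(
    lambda p: forward_model(p, wavelengths) - observed,
    x0=[30.0, 1.0], bounds=([5.0, 0.1], [90.0, 8.0]))
print("retrieved parameters:", result.x)
```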
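Data assimilation is often implemented with an ensemble Kalman filter; the sketch below shows a single analysis step under strong simplifying assumptions (a scalar observation, a two-variable state, and a synthetic ensemble standing in for real crop-model runs).

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var):
    """One ensemble Kalman filter analysis step: nudge an ensemble of
    model states (e.g., crop-model LAI and biomass) toward a remote
    sensing observation. ensemble has shape (n_members, n_state)."""
    pred = obs_operator(ensemble)                     # predicted obs, (n,)
    x_anom = ensemble - ensemble.mean(0)
    y_anom = pred - pred.mean(0)
    cov_xy = x_anom.T @ y_anom / (len(ensemble) - 1)  # state-obs covariance
    var_yy = y_anom @ y_anom / (len(ensemble) - 1) + obs_var
    gain = cov_xy / var_yy                            # Kalman gain, (n_state,)
    # Perturbed-observation update for each ensemble member.
    perturbed = obs + np.random.normal(0, np.sqrt(obs_var), len(ensemble))
    return ensemble + np.outer(perturbed - pred, gain)

# Hypothetical: 50 members carrying [LAI, biomass] states, assimilating
# one remotely sensed LAI value of 2.8.
ens = np.random.normal([2.0, 300.0], [0.5, 60.0], size=(50, 2))
analysis = enkf_update(ens, obs=2.8, obs_operator=lambda e: e[:, 0],
                       obs_var=0.1 ** 2)
```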
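As a minimal deep learning sketch, assuming PyTorch, the small patch-based convolutional network below illustrates the hierarchy described above: convolutional layers extract features from raw multispectral patches, and the top-level deep feature is fed directly into a linear classifier.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Small CNN mapping multispectral image patches to class scores."""
    def __init__(self, in_bands=4, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))         # global deep feature
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)      # (batch, 32) deep features
        return self.classifier(f)            # (batch, n_classes) scores

# Hypothetical batch: eight 4-band 32x32 patches, each centered on a
# pixel to classify.
model = PatchCNN()
logits = model(torch.randn(8, 4, 32, 32))
```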