RESEARCH ARTICLE (Open Access)

Coupling of machine learning methods to improve estimation of ground coverage from unmanned aerial vehicle (UAV) imagery for high-throughput phenotyping of crops

Pengcheng Hu https://orcid.org/0000-0001-7958-1407 A , Scott C. Chapman A B and Bangyou Zheng https://orcid.org/0000-0003-1551-0970 A C

A CSIRO Agriculture and Food, Queensland Biosciences Precinct 306 Carmody Road, St Lucia 4067, Qld, Australia.

B School of Food and Agricultural Sciences, The University of Queensland, via Warrego Highway, Gatton 4343, Qld, Australia.

C Corresponding author. Email: bangyou.zheng@csiro.au

Functional Plant Biology 48(8) 766-779 https://doi.org/10.1071/FP20309
Submitted: 2 October 2020  Accepted: 14 February 2021   Published: 5 March 2021

Journal Compilation © CSIRO 2021 Open Access CC BY

Abstract

Ground coverage (GC) allows monitoring of crop growth and development and is normally estimated as the ratio of vegetation pixels to total pixels in nadir images captured by visible-spectrum (RGB) cameras. The accuracy of estimated GC can be significantly impacted by the effect of ‘mixed pixels’, which is related to the spatial resolution of the imagery as determined by flight altitude, camera resolution and crop characteristics (fine vs coarse textures). In this study, a two-step machine learning method was developed to improve the accuracy of GC of wheat (Triticum aestivum L.) estimated from coarse-resolution RGB images captured by an unmanned aerial vehicle (UAV) at higher altitudes. The classification tree-based per-pixel segmentation (PPS) method was first used to segment fine-resolution reference images into vegetation and background pixels. The reference images and their segmented counterparts were degraded to the target coarse spatial resolution. These degraded images were then used to generate a training dataset for a regression tree-based model to establish the sub-pixel classification (SPC) method. The newly proposed method (i.e. PPS-SPC) was evaluated with six synthetic and four real UAV image sets (SISs and RISs, respectively) with different spatial resolutions. Overall, the results demonstrated that the PPS-SPC method obtained higher accuracy of GC in both SISs and RISs compared with the PPS method, with root mean square errors (RMSE) of less than 6% and relative RMSE (RRMSE) of less than 11% for SISs, and RMSE of less than 5% and RRMSE of less than 35% for RISs. The proposed PPS-SPC method can potentially be applied in plant breeding and precision agriculture to balance accuracy requirements against UAV flight height given limited battery life and operation time.

Keywords: ground coverage, UAV, remote sensing, high-throughput phenotyping, mixed pixels, plant breeding.

Introduction

Ground coverage (GC) is a key physiological trait that correlates to the water and energy balance of the soil–plant–atmosphere continuum, such as canopy light interception (Purcell 2000; Campillo et al. 2008; Gonias et al. 2012), plant water use (Suzuki et al. 2013) and soil water evaporation (Mullan and Reynolds 2010). It has also served as a predictor of crop canopy traits, such as aboveground biomass, grain yield, leaf area index and crop nitrogen status (Pan et al. 2007; Lati et al. 2011; Nielsen et al. 2012; Lee and Lee 2013; Liebisch et al. 2015). Ground coverage has also been used as a cultivar selection criterion in crop breeding to characterise genotypic differences, as it is linked to plant vigour affecting light interception early in the season (i.e. early vigour) and leaf senescence affecting canopy photosynthesis late in the season (i.e. stay-green) (Mullan and Reynolds 2010; Kipp et al. 2014; Walter et al. 2015).

GC is defined as the proportion of the ground area covered by the green canopy (Adams and Arkin 1977). Conventional estimation of GC has typically been achieved through destructive sampling methods, which are time-consuming and restricted (i.e. small breeding plots may not allow multi-temporal destructive measurements). Conventional non-destructive methods are based on visual scoring, such as the paper drawings of sampling regions, which are subjective as they rely on the expertise of operators and may not have sufficient accuracy to distinguish genotypic differences (Campillo et al. 2008; Mullan and Reynolds 2010). Alternatively, a field survey with conventional hand-held digital cameras may offer good precision in GC analysis but is slow (Mullan and Reynolds 2010; Bojacá et al. 2011). These field survey methods are inefficient, laborious, biased and expensive, especially for their applications in phenotyping large-scale agronomy and breeding trials. It is thus necessary to develop high-throughput phenotyping techniques for non-invasive, non-destructive and timely characterisations of phenotypic traits (e.g. GC) for thousands of plots (Großkinsky et al. 2015; Pauli et al. 2016; Hu et al. 2018).

With recent advances in unmanned aerial vehicle (UAV) platforms and camera sensors, UAVs have been transformed into high-throughput phenotyping platforms that capture remote-sensed images with high spatial resolutions, which provides unique opportunities to estimate GC. Compared with satellite or aerial remote-sensing platforms, UAVs are cost effective and have greater flexibility in terms of the temporal and spatial resolution of data collection. Further, UAVs are less constrained by field conditions that may restrict the access and movement of operators or ground vehicle-based platforms (Chapman et al. 2014; Sankaran et al. 2015; Shi et al. 2016; Jay et al. 2019). UAVs can screen a field in a short timeframe via predesignated flight routes, speeds and altitudes and specified onboard sensors, varying with the objectives of the experiments (Chapman et al. 2014; Martínez et al. 2017). UAVs have therefore become an attractive platform for field-based high-throughput phenotyping (Sankaran et al. 2015; Yang et al. 2017) and aerial surveys in agronomy (Gago et al. 2015; Shi et al. 2016). Imagery captured by diverse onboard sensors, including visible, multispectral and thermal cameras, has been applied for estimation of diverse plant traits including GC (Torres-Sánchez et al. 2014; Duan et al. 2017; Ashapure et al. 2019; Zhang et al. 2019).

Image analysis techniques in remote sensing have been applied to analyse UAV imagery for GC estimation. The key step in the estimation of GC is classifying vegetation pixels from non-vegetation pixels using the principle that vegetation has different spectral signatures from non-vegetation features in the imagery (Myint et al. 2011). Visible and multispectral imaging techniques are widely used in GC estimation and vegetation mapping (Ashapure et al. 2019; Zhang et al. 2019; Bhatnagar et al. 2020a; Daryaei et al. 2020), as vegetation and non-vegetation have different spectral characteristics in the visible and near-infrared regions of the electromagnetic spectrum (Xie et al. 2008; Sankaran et al. 2015). Image classification can normally be implemented through per-pixel, sub-pixel and object-based approaches (Laliberte et al. 2007; Lu and Weng 2007; Myint et al. 2011; Torres-Sánchez et al. 2015; Tsutsumida et al. 2016), such as machine learning (ML; e.g. decision tree, support vector machine and random forest (Jay et al. 2019; Zhang et al. 2019; Ranđelović et al. 2020)) and deep learning (DL; e.g. convolutional neural network (Bhatnagar et al. 2020b; Yang et al. 2020; Su et al. 2021)) based classifiers. The spatial resolution (i.e. pixel size) of the imagery and the choice of classification approach are significant factors that influence the accuracy of image classification and the consequent GC estimation (Lu and Weng 2007; Waldner and Defourny 2017; Hu et al. 2019). Finer spatial resolution offers more detailed information about vegetation (e.g. spectral features and context) and greatly reduces the mixed pixel problem (Hsieh et al. 2001; Hengl 2006), especially in crops that have ‘fine’ profile textures, e.g. wheat (Triticum aestivum L.) compared with corn (Zea mays L.).

Flight height is a major determinant of spatial resolution, as the attached camera and its configurations (e.g. sensor resolution and focal length) are fixed. A coarser resolution associated with a higher flight height increases the mixed pixel problem (Jones and Sirault 2014; Waldner and Defourny 2017), particularly for narrow-leaved crops (e.g. wheat) and for early growth traits (e.g. early vigour), since smaller or narrow leaves may be only a few pixels wide or even undetectable in the image (Campilho et al. 2006; Myint et al. 2011; Prieto et al. 2016; Gu et al. 2017; Hu et al. 2019). Moreover, a finer resolution requires lower flight heights and/or high-resolution cameras with long focal lengths, which limits the geographical area covered per unit of UAV flight time, requires more flights to cover a large-scale field due to short battery life (i.e. ~15–30 min in general) and increases the cost of using the platform (Jin et al. 2017; Hu et al. 2019; Lu et al. 2019). As an alternative to finer spatial resolutions, appropriate classification approaches may improve accuracy. Sub-pixel classifiers have the potential to deal with the mixed pixel problem and achieve more accurate GC estimations by quantifying the percent distribution of land covers in coarse imagery. Object-based approaches have been shown to outperform per-pixel approaches by overcoming the high spectral variation within the same cover classes on fine-resolution images (Yu et al. 2006; Lu and Weng 2007; Blaschke 2010). Applying advanced classification approaches is also more practical than pursuing fine resolutions from low-height flights, which are often impractical for UAV surveys. Therefore, the development of more powerful image classification methods should improve the estimation accuracy of GC from UAV images.

The objectives of this study are: (1) to propose a new approach coupling classification and regression trees to estimate GC from UAV remote sensing imagery with coarse resolutions; and (2) to evaluate the performance of the new approach through comparison with a classification tree-based per-pixel segmentation method (Guo et al. 2013) on synthetic and real UAV image sets.


Materials and methods

The newly proposed approach to estimate GC is based on two-step machine learning: (1) a classification tree-based per-pixel segmentation (PPS) method (Guo et al. 2013) to segment fine-resolution reference images into binary reference images; and (2) a regression tree-based sub-pixel classification (SPC) method to establish the relationship between the degraded reference images and the degraded binary reference images (Fig. 1). The new approach is hereafter referred to as the PPS-SPC method. The performance of the PPS-SPC method was evaluated using two types of image sets; i.e. synthetic UAV image sets (SISs) with different spatial resolutions during the wheat growing season, and real UAV image sets (RISs) captured at different flight heights for wheat and weeds by an RGB camera attached to a UAV.


Fig. 1.  Flowchart illustrating the proposed method to estimate ground coverage from UAV imagery. The method is composed of two steps of machine learning, i.e. classification tree-based per-pixel segmentation (PPS) and regression tree-based sub-pixel classification (SPC) methods.

Experiment data

Synthetic image sets

Synthetic image sets (SISs) comprised the fine-resolution reference images and their corresponding degraded coarse-resolution images. Degraded images were used in this study to mimic the images taken by a UAV platform at different flight heights, avoiding the effects of environments and camera configurations on real image acquisition and quality, so that our study could focus on the evaluation of the proposed method.

Reference images were collected in a wheat field experiment. The wheat trial was carried out in 2016 at an experimental field in Gatton Campus, the University of Queensland, Australia (27.57°S, 152.33°E). The field was 161 m in length and 54 m in width. Contrasting canopy structures were established by two irrigation treatments (i.e. irrigation and rain-fed), two nitrogen treatments (i.e. high and low nitrogen) and seven cultivars (i.e. Gregory, Suntop, 7770, 7770tin, Spitfire, Hartog and Drysdale). The trial contained 28 treatments in total and each treatment had three replicated plots (i.e. 84 plots included in the trial). Each plot was 7 m long and 2 m wide and comprised seven rows. Wheat was sown on 21 May 2016 with a plant density of 150 plants m–2 and a row spacing of 22 cm.

Reference images were manually captured with a digital camera (Canon 550D, maximum resolution 5184 × 3456 pixels) at intervals of about one week before flowering (six sampling times in total) to cover the range of GC from ~0 to 100% (Table 1). The weather on image sampling days was cloudless and windless. At each sampling time, two images were captured at different representative regions in each plot. The camera was set to automatic shooting mode (i.e. camera configurations including aperture, ISO and shutter speed were set automatically) with a fixed focal length. The camera was held stationary at ~1.0 m above the canopies to shoot nadir (or near-nadir) and sharp images; at this height, it generally captured exactly three rows of wheat plants. After each shot, the image was carefully checked; any oblique image was discarded and immediately retaken. For each sampling time, images were saved with a resolution of 3456 × 2304 pixels or 5184 × 3456 pixels. A total of 1008 images (84 plots × 6 sampling times × 2 images per plot per sampling time) were collected over the six sampling times, of which five low-quality images were excluded; the remaining 1003 images were used for further analysis. The reference images had a spatial resolution of 0.03 cm or 0.02 cm (Table 1).


Table 1.  Summary of the acquisition of reference images in the wheat trial before flowering in 2016

Each reference image was degraded into a series of coarse-resolution images with several spatial resolutions (i.e. 0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5 and 4.0 cm). These spatial resolutions covered the normal range of flight heights of UAV surveys (i.e. ~4–200 m above ground level (AGL)) in the fields of high-throughput phenotyping and precision agriculture (e.g. Shi et al. 2016; Duan et al. 2017; Jin et al. 2017; Hu et al. 2018; Ashapure et al. 2019). The degraded (i.e. coarse) images were generated using the cubic interpolation algorithm implemented in the R package imager. Cubic interpolation is a widely used algorithm for image degradation, which fits cubic polynomials to the brightness values of the 16 nearest neighbouring pixels (4 × 4) of the calculated pixel. In total, the SISs comprised 10 030 images, including 1003 reference and 9027 corresponding coarse images (1003 reference images × 9 spatial resolutions per reference image) for the six sampling times.
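The degradation step can be sketched as follows. This is an illustrative Python sketch only (the study used cubic interpolation from the R package imager); `scipy.ndimage.zoom` with `order=3` (cubic spline) is assumed here as an approximate equivalent:

```python
import numpy as np
from scipy.ndimage import zoom

def degrade(image, src_res_cm, dst_res_cm):
    """Degrade a fine-resolution RGB image (H, W, 3) to a coarser
    spatial resolution using cubic interpolation, mimicking an image
    captured at a higher flight altitude."""
    factor = src_res_cm / dst_res_cm  # < 1 when degrading
    # Same zoom factor for rows and columns; channels left untouched.
    return zoom(image, (factor, factor, 1), order=3)

# Degrade a synthetic 0.03 cm/pixel image to 1.5 cm/pixel.
fine = np.random.rand(600, 900, 3)
coarse = degrade(fine, 0.03, 1.5)
print(coarse.shape)  # (12, 18, 3)
```

Degrading a 0.03 cm image to 1.5 cm shrinks each image dimension by a factor of 50, so the information loss of a higher flight is simulated rather than merely resampled.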

Real UAV image sets

The real UAV image sets (RISs) were captured by a UAV platform (Phantom 4 Pro, DJI, Shenzhen, China; focal length 8.8 mm, sensor size 13.2 mm × 8.8 mm and maximum image resolution 5472 × 3648 pixels) over a field with natural-grown weeds and three wheat trials at different flight heights (Table 2). The wheat trials were conducted in 2017 with different sowing dates at the Gatton Campus, the University of Queensland, Australia (27.57°S, 152.33°E). Flight campaigns were carried out during the early growth stages of wheat, as spatial resolution has a greater impact at lower GCs (Hu et al. 2019) (Table 2). For each flight, the first images of plots were captured by the onboard camera of the UAV platform at ~3 m AGL. For consistency, the first image of a plot, having the finest resolution, was considered the reference image of the plot. The UAV platform then climbed vertically to 100 m AGL at a constant speed of 1 m s–1, and images were captured at 1-s intervals with a fixed focal length and shutter speed <1/1200 s. Aperture mode was set to automatic and ISO was adjusted to 100 for the clear sky and mild wind conditions. The image resolution was 4864 × 3648 pixels for the weed field and 5472 × 3648 pixels for the wheat trials, and the spatial resolutions of the reference images were 0.09 cm and 0.08 cm, respectively. The spatial resolution was reduced to 3.08 cm for the weed field and 2.74 cm for the wheat trials when the UAV climbed to 100 m AGL (Table 2).


Table 2.  Summary of real UAV image sets captured over a field with natural-grown weeds and three wheat trials at different flight heights in 2017
n.a., not applicable

The RISs were processed for plot segmentation (Fig. 2) through a cloud-based platform (PhenoCopter) (Chapman et al. 2014; https://phenocopter.csiro.au) designed for UAV surveys in breeding and agricultural experiments. The RISs were processed in the Pix4DMapper software (Pix4D SA, Switzerland, ver. 4.3.4; https://pix4d.com) to generate undistorted images (i.e. images after geometric corrections) and ortho-mosaics. Undistorted images containing the same scene captured at specific heights (i.e. 5, 10, 20, 30, … , and 100 m AGL) were selected and used in further analysis. A workflow was applied to divide ortho-mosaics into individual virtual plots and then to extract the regions of plots from the undistorted images using the reverse calculation method (Duan et al. 2017). Results of the reverse calculation were carefully checked to make sure the same plots were extracted from different images (data not shown). A total of 146 plots were extracted for the four image sets (Table 2); plots not covered at all 11 flight heights were discarded. Consequently, the RISs comprised 1606 images in total (i.e. 146 plots × 11 heights; each plot region extracted from an undistorted image is referred to as an image in further analysis, for consistency with the SISs).


Fig. 2.  Example of the reverse calculation results of individual virtual plots (blue grids) on UAV images captured at (a) ~5 m, (b) 10 m, (c) 20 m and (d) 40 m above ground level. The same individual plot on different images is highlighted in red. Ground markers were placed in the field to facilitate image processing in Pix4DMapper and were not used as corners to define virtual plots.

Estimation of ground coverage using PPS-SPC methods

Segmentation of reference images

Reference images with fine resolutions were segmented by a decision tree-based PPS method (Guo et al. 2013). The method implements binary classification of image pixels to generate a binary image (i.e. ‘0’ for the non-vegetation class and ‘1’ for the vegetation class); consequently, the vegetation proportion in each pixel is either 0 or 100%. Here, we briefly describe the method; for more information, refer to Guo et al. (2013). A training dataset was first constructed for training a decision tree-based PPS model. Regions of interest (ROIs) for the vegetation and non-vegetation classes were manually selected from images. As the performance of a decision tree-based model relies on the training data, the selection of ROIs should cover representative scenes considering heterogeneous natural light conditions. Colour features of the pixels (i.e. a* of the CIEL*a*b* colour space, R of the RGB colour space, Cb and Cr of the YCbCr colour space, S of the HSV colour space, S of the HSI colour space, and u* and v* of the CIEL*u*v* colour space) were derived from their RGB bands. The training dataset comprised the corresponding colour features and class memberships (‘0’ for non-vegetation and ‘1’ for vegetation) of the pixels. A decision tree-based model was then trained with the colour features and class memberships of the training dataset. The trained model was used to segment images, predicting the class membership of each pixel and generating binary images (Fig. 3) that present the class memberships. Reference GC was finally computed as the ratio of the number of vegetation pixels to all pixels of the reference image. The method obtained high accuracy of vegetation segmentation and GC estimation of diverse crops from fine-resolution images under natural light conditions, e.g. rice (Oryza sativa L.), wheat, sorghum (Sorghum bicolor L.) and cotton (Gossypium hirsutum L.) (Guo et al. 2013, 2017; Duan et al. 2017; Hu et al. 2019). Due to the nature of binary segmentation and the effects of mixed pixels, the method could not deliver high accuracy for coarse-resolution images (Lu and Weng 2007; Hu et al. 2019).
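A minimal sketch of the PPS step, assuming scikit-learn's `DecisionTreeClassifier` as the decision tree and using only a subset of the colour features (R, Cb, Cr and HSV saturation) for brevity; the ROI pixels below are synthetic stand-ins, not the authors' training data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def colour_features(rgb):
    """Derive a subset of the PPS colour features (R, Cb, Cr and HSV
    saturation) from an (N, 3) array of 0-255 RGB pixels; the full
    method also uses CIEL*a*b*, CIEL*u*v* and HSI features."""
    r, g, b = (rgb[:, i].astype(float) for i in range(3))
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b   # BT.601 chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    mx, mn = rgb.max(axis=1).astype(float), rgb.min(axis=1).astype(float)
    s = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)  # HSV S
    return np.column_stack([r, cb, cr, s])

# Hypothetical ROI pixels: green-dominant vegetation vs brownish soil.
veg = np.random.randint(0, 80, (200, 3)); veg[:, 1] += 120
soil = np.random.randint(90, 180, (200, 3)); soil[:, 2] //= 2
X = colour_features(np.vstack([veg, soil]))
y = np.array([1] * 200 + [0] * 200)   # 1 = vegetation, 0 = background

pps = DecisionTreeClassifier(max_depth=5).fit(X, y)
print(pps.score(X, y))  # 1.0 on this synthetic data
```

Applied to every pixel of a fine-resolution image, the trained classifier yields the binary segmentation from which reference GC is counted.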


Fig. 3.  Examples of reference and segmented images used to generate training datasets for training the SPC models of SISs. ID_r and ID_s (ID = 1, 2, … , and 6) are reference and segmented images for the six SISs.

Classification of coarse images

The second step of the proposed PPS-SPC method was to construct a regression tree-based SPC model to describe the relationship between vegetation proportions and colour features of coarse pixels. Establishing the SPC model involved three steps: preparation of the training dataset, establishment of the regression tree-based model and calculation of GC.

The training dataset was independent of the one used in the PPS method and was generated from the colour features and corresponding vegetation proportions of pixels in coarse images. First, reference images (i.e. colour images) and their corresponding segmented images (i.e. binary images generated by the PPS method) were selected to generate training images. The selection of reference images should likewise cover representative scenes of the field under various light conditions, and their corresponding segmented images were visually checked to ensure accurate segmentation (Fig. 3). The selected images were degraded to the target spatial resolution (i.e. the resolution of coarse images captured from a higher flight) using the cubic interpolation method implemented in the imager package (Barthelme 2019) mentioned above (Fig. 4). These two kinds of degraded images then served as training images in the SPC model (Figs 1, 4). Through the degradation, the vegetation proportion (a continuous value from 0 to 100%) of each pixel in the degraded binary image was calculated as the percentage of vegetation pixels within the corresponding region of the fine-resolution segmented image. Colour information of these pixels was obtained from the corresponding degraded reference image. Using the RGB colour information of each pixel, the same key colour features (Guo et al. 2013) as in the PPS method were derived in different colour spaces (see Fig. S1). The training dataset was generated by concatenating the corresponding vegetation proportions and colour features of the pixels, as demonstrated in Fig. 4.
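The per-pixel vegetation proportions of the degraded binary image amount to block averaging of the fine-resolution segmentation; a minimal Python sketch, assuming an integer degradation factor k:

```python
import numpy as np

def vegetation_proportions(binary, k):
    """Degrade a binary segmentation (0 = background, 1 = vegetation)
    by a factor k: each coarse pixel's value is the fraction of
    vegetation pixels in the underlying k x k fine-resolution block."""
    h, w = binary.shape
    h, w = h - h % k, w - w % k                 # crop to a multiple of k
    blocks = binary[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.mean(axis=(1, 3))             # fraction per coarse pixel

seg = np.zeros((8, 8), int)
seg[:4, :4] = 1                                 # vegetation in top-left quadrant
props = vegetation_proportions(seg, 4)
print(props)   # [[1. 0.] [0. 0.]]
```

Pairing each coarse pixel's proportion with the colour features of the matching pixel in the degraded colour image yields one training row, as in Fig. 4.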


Fig. 4.  Example of training dataset acquisition for training the SPC method. Reference image (upper plot in a) was segmented into a binary image (lower plot in a) by the PPS method. Reference and the corresponding binary image were degraded to the target spatial resolution (b). Colour features (green columns in c) and corresponding vegetation proportions (the orange column in c) of pixels in the degraded images were combined to create a training dataset (data table in d). Note that the illustrated images in (a) were a sub-region of a reference image and its corresponding binary image, and the degraded images in (b) are enlarged for better visualisation.

The regression tree was generated using the colour features and corresponding vegetation proportions of the training dataset (see the workflow in Fig. 1). The basic theory behind the regression tree was presented in Breiman et al. (1984), and it has been widely used in remote sensing (Hansen et al. 2002; Xu et al. 2005; Baccini et al. 2008) to describe nonlinear relationships between features (e.g. colour channels) and target variables (e.g. GC). Regression tree models were trained on the training datasets with the CART algorithm (Breiman et al. 1984). To avoid overfitting, k-fold (k = 10 in this study) cross-validation was adopted during training (Zhang et al. 2019). The trained model (e.g. see Fig. S2) was then used to conduct sub-pixel classification on coarse images, which yields an estimate of the vegetation proportion of each pixel. The classification results were presented as grayscale images whose pixel values were vegetation proportions ranging from 0 to 100%. The GC of each coarse image was the average of the vegetation proportions of its pixels.
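A sketch of training the SPC regression tree with 10-fold cross-validation, assuming scikit-learn's CART implementation (`DecisionTreeRegressor`) and synthetic stand-in data rather than the study's training set:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical training data: colour features of coarse pixels (rows)
# against their vegetation proportions (0-1), as assembled above.
X = rng.uniform(0, 255, (500, 4))
# Assume the proportion rises with a "greenness" contrast between
# two of the features (purely illustrative).
y = np.clip((X[:, 1] - X[:, 0]) / 255 + 0.5, 0, 1)

spc = DecisionTreeRegressor(max_depth=6)
# 10-fold cross-validation guards against overfitting the tree.
scores = cross_val_score(spc, X, y, cv=10, scoring="r2")
spc.fit(X, y)

# GC of a coarse image = mean predicted vegetation proportion of its pixels.
gc = spc.predict(X).mean()
print(round(scores.mean(), 2), round(float(gc), 2))
```

In the study, tree depth and pruning would be governed by the cross-validation results rather than the fixed `max_depth` used here for brevity.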

Performance evaluation and statistical analysis

The proposed PPS-SPC method was evaluated on two types of UAV image sets; i.e. the SISs and RISs. The GCs of coarse images calculated by the PPS and PPS-SPC methods were compared with the corresponding reference values. Further, comparisons between the two methods were conducted to evaluate the performance of the PPS-SPC method. Several criteria, including the coefficient of determination (R2), root mean square error (RMSE, Eqn (1)) and relative root mean square error (RRMSE, Eqn (2)), were used to quantify the estimation accuracy of GC.

RMSE = √[(1/n) Σi=1..n (GCcrs,i − GCref,i)²]    (1)

RRMSE = (RMSE / GC̄ref) × 100%    (2)

where GCref and GCcrs are the GCs of the reference and corresponding coarse images, respectively; n is the number of reference GCs; and GC̄ref is the average of the reference GCs. Image analysis, GC estimation and statistical analysis were implemented using the R programming language (R Core Team 2019) with customised scripts.
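Eqns (1) and (2) translate directly into code; a minimal Python sketch (the study used R) with hypothetical GC values in percent:

```python
import math

def rmse(ref, est):
    """Root mean square error between reference and estimated GCs (Eqn 1)."""
    return math.sqrt(sum((e - r) ** 2 for r, e in zip(ref, est)) / len(ref))

def rrmse(ref, est):
    """Relative RMSE: RMSE normalised by the mean reference GC (Eqn 2)."""
    return rmse(ref, est) / (sum(ref) / len(ref)) * 100

ref = [10.0, 20.0, 30.0, 40.0]   # hypothetical reference GCs (%)
est = [12.0, 18.0, 33.0, 39.0]   # hypothetical estimates from coarse images
print(round(rmse(ref, est), 2), round(rrmse(ref, est), 2))  # 2.12 8.49
```

RRMSE normalises the error by the mean reference GC, which is why the RISs (low GCs) can show a much larger RRMSE than the SISs at a similar RMSE.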


Results

Diverse distributions of GC in image sets

There was broad variation in the reference GC of the SISs, ranging from 4.5% to 95.0% (Fig. 5a). Reference GC increased from 5.9% ± 1.4% (mean ± s.d.) at 12 days after sowing (DAS) to 85.7% ± 9.3% at 66 DAS. The reference GC at 44 DAS had the greatest variation (71.7% ± 8.8%). The variation of GC at each sampling date was mainly due to the contrasting canopy structures of the treatments (i.e. combinations of irrigation, nitrogen and cultivar). The GC of the RISs ranged from 5.6% ± 1.0% in Set2 to 25.8% ± 4.0% in Set3 (Fig. 5b). Set1, obtained from a field with natural-grown weeds, had the largest variation (10.5% ± 6.6%). Lower GCs (<50%) were selected for the RISs because spatial resolution has a greater impact at the lower GCs of the early growth stages of wheat (Hu et al. 2019).


Fig. 5.  Distributions of reference ground coverages of synthetic UAV image sets (a) and real UAV image sets (b). The reference images of synthetic UAV image sets were captured in a wheat trial at different sampling dates. Real UAV image sets were captured over a field with natural-grown weeds (Set1) and three wheat trials (Set2, 3 and 4).

Evaluation with synthetic UAV image sets

Vegetation proportions of individual pixels were estimated by the PPS and PPS-SPC methods, respectively. The vegetation proportion of each pixel was derived by the two methods from fine and coarse images with different spatial resolutions (0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5 and 4.0 cm in Fig. 6). Images with fine resolutions (e.g. 0.1 cm) provided clearer boundaries between the vegetation and non-vegetation areas; however, segmentations gradually blurred for coarse-resolution images. With per-pixel segmentation (i.e. the PPS method), the vegetation proportion in each pixel was 0 or 100%. The vegetation region gradually shrank with decreasing spatial resolution at lower GC (e.g. Fig. 6a) but expanded at higher GC (data not shown; Hu et al. 2019). Conversely, the PPS-SPC method created a continuous gradient of vegetation proportion (i.e. ranging from 0 to 100%) for each pixel, and the vegetation area gradually expanded while the vegetation proportion of individual pixels decreased (lighter colour) with decreasing spatial resolution (Fig. 6b).


Fig. 6.  Example of vegetation proportion estimated from synthetic UAV images with different spatial resolutions using the PPS method (a) and the proposed PPS-SPC method (b). Colour gradient represents vegetation proportion in each pixel.

The GCs of degraded images and their corresponding reference images in the SISs were compared at different spatial resolutions for the PPS and PPS-SPC methods (Fig. 7). For both methods, the accuracy of GC estimation was excellent at a spatial resolution of 0.1 cm and decreased as the spatial resolution coarsened to 4 cm, with the deviation between estimated and reference GC gradually increasing. The PPS method overestimated the GCs of coarse-resolution images when reference GCs were greater than a cut-off point, and vice versa (Fig. 7a). The PPS method maintained high R2 (i.e. R2 >0.95), but its RMSE and RRMSE increased significantly (from 1.2% to 14.9% and from 2.1% to 25.9%, respectively, as the spatial resolution coarsened from 0.1 cm to 4.0 cm; Fig. 8). Accordingly, the mean absolute error of GC estimation increased from 1.0% ± 0.6% to 13.1% ± 7.1% for the PPS method. The GCs of degraded images were slightly underestimated by the PPS-SPC method as reference GC increased (Fig. 7b). The PPS-SPC method performed better as the spatial resolution coarsened (i.e. R2 >0.97, with RMSE and RRMSE increasing from 1.1% to 6.4% and from 1.9% to 11.1%, respectively, as the spatial resolution coarsened from 0.1 cm to 4.0 cm; Fig. 8), and its mean absolute error increased from 0.7% ± 0.9% to 4.0% ± 4.8%. Across all spatial resolutions, the PPS-SPC method was more accurate (higher mean R2 and lower mean RRMSE) and more stable (smaller s.d.): R2 = 0.97 ± 0.02 and RRMSE = 16.1% ± 6.1% for the PPS method versus R2 = 0.98 ± 0.01 and RRMSE = 9.4% ± 3.4% for the PPS-SPC method (Fig. 8).


Fig. 7.  Comparison of reference and estimated ground coverages from the PPS (a) and the proposed PPS-SPC method (b) for the synthetic UAV image sets across various spatial resolutions. The colour gradient of hexagons represents the number of data points in a certain value range. Black dashed lines represent one-to-one lines.


Fig. 8.  Accuracy comparison between ground coverage estimations from the PPS (red) and the PPS-SPC (blue) method for the synthetic UAV image sets across various spatial resolutions, with the R2 (a), RMSE (b) and RRMSE (c).

Evaluation with real UAV image sets

The PPS and PPS-SPC methods were also evaluated on the vegetation proportions of individual pixels of the RISs. Examples of vegetation proportions derived by the two methods from images captured at different flight heights (i.e. 5, 10, 20, 25, 30, 40 and 50 m) are shown for a wheat trial in Fig. 9. Images captured at a lower height (i.e. 5 m) provided clearer boundaries between the vegetation and non-vegetation areas; however, boundaries gradually blurred as flight height increased. For the PPS method, the vegetation area shrank significantly with increasing flight height, especially in the image at 50 m (Fig. 9a). Conversely, the PPS-SPC method maintained a continuous gradient of vegetation proportion (i.e. ranging from 0 to 100%) among pixels, and the vegetation areas of images gradually expanded while the vegetation proportion of individual pixels decreased with increasing flight height (Fig. 9b).


Fig. 9.  Example of vegetation proportion estimated from real UAV images captured at different flight heights using PPS method (a) and the proposed PPS-SPC method (b). Colour gradient represents vegetation proportion in each pixel.

The RISs were used to evaluate the two methods by comparing the GC estimates of reference and corresponding coarse images captured at different flight heights (Fig. 10). For both methods, GC tended to be underestimated as flight height increased from 5 m to 100 m, with the anomalies gradually increasing. Both methods showed similarly high accuracy of GC estimation at a lower flight height (e.g. 5 m: R2, RMSE and RRMSE were ~0.98, 1.2% and 9.3%, respectively; Fig. 11). The performance of the PPS method declined significantly with increasing flight height (R2 decreased from 0.98 to 0.22, RMSE increased from 1.2% to 11.9% and RRMSE increased from 9.3% to 90.3%). In contrast, the accuracy of the PPS-SPC method declined much less with increasing flight height (R2 decreased from 0.97 to 0.79, RMSE increased from 1.3% to 4.5% and RRMSE increased from 9.9% to 34.5%), with relatively good estimates for reference GC of less than ~20%. The performance of the PPS-SPC method was stable when flight height increased from 30 m to 100 m (i.e. R2 = 0.80 ± 0.031, RMSE = 4.0% ± 0.56% and RRMSE = 30.3% ± 4.3%).


Fig. 10.  Comparison of reference and estimated ground coverages from the PPS (red) and PPS-SPC (blue) methods for the real UAV image sets (shapes) captured at different flight heights. Black dashed lines represent the one-to-one lines.


Fig. 11.  Accuracy comparison of ground coverage estimations from the PPS (red) and PPS-SPC (blue) methods for the real UAV image sets captured at different flight heights. Accuracy was quantified by R2 (a), RMSE (b) and RRMSE (c).


Discussion

This study proposed a new method (i.e. the PPS-SPC method) to estimate ground coverage (GC) from UAV remote sensing imagery, and its performance was evaluated with the synthetic UAV image sets (SISs) and the real UAV image sets (RISs). The results showed that the PPS-SPC method achieved overall higher and more stable performance in GC estimation for both the SISs and the RISs compared with a per-pixel segmentation method (PPS; Guo et al. 2013) (Figs 7, 8, 10, 11).

Accuracy of GC estimation strongly depended on the spatial resolution of the UAV imagery. Both methods obtained high accuracy (i.e. RRMSE <10%) at fine resolutions (i.e. resolution ≤0.5 cm in the SISs and a flight height of 5 m in the RISs). However, performance declined with decreasing spatial resolution in both the SISs and RISs (Figs 7, 8, 10, 11). A finer spatial resolution provides more spatial detail and reduces the impact of mixed pixels, which in turn improves classification, especially when the scene elements (e.g. wheat leaves, which range from 4 mm to ~15 mm in width) are smaller than the pixel size (Hsieh et al. 2001; Hengl 2006). In this study, wheat leaf edges became blurred and even undetectable in UAV imagery captured at higher flight heights (e.g. flight height >30 m in Fig. 2), which caused poor estimations of GC (Fig. 10). The effects of spatial resolution on GC estimation were evaluated in Hu et al. (2019) using synthetic UAV imagery, which concluded that a fine spatial resolution (e.g. <0.1 cm) is required to accurately estimate GC and to distinguish genotypes with UAV surveys in plant breeding. Impacts of spatial resolution have also been reported for phenotyping other crop traits, e.g. canopy temperature (Jones and Sirault 2014; Deery et al. 2016), plant density (Jin et al. 2017), plant height and aboveground biomass (Lu et al. 2019), crop disease (Mahlein 2016) and weed detection (Gebhardt and Kühbauch 2007; Peña et al. 2015).
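How mixed pixels emerge as resolution coarsens can be illustrated by block-averaging a fine-resolution vegetation mask to a coarser grid (a minimal sketch; the mask and degradation factor are invented for illustration):

```python
import numpy as np

def degrade_mask(mask, factor):
    """Block-average a fine binary vegetation mask (1 = vegetation,
    0 = background) so each coarse pixel holds its vegetation proportion."""
    h, w = mask.shape
    assert h % factor == 0 and w % factor == 0, "mask must tile evenly"
    return mask.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# a 2x2 vegetation patch straddling the borders of four coarse pixels
fine = np.zeros((4, 4))
fine[1:3, 1:3] = 1.0                  # true GC = 4/16 = 25%
coarse = degrade_mask(fine, 2)        # every coarse pixel becomes 25% 'mixed'
```

In this toy case, hard-thresholding the coarse pixels at 50% (as a per-pixel classifier effectively does) labels all four as background, giving 0% GC, whereas averaging the sub-pixel proportions recovers the true 25%.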

The classification algorithm also affected the accuracy of GC estimation. The two methods (i.e. PPS and PPS-SPC) showed significant differences in GC estimation performance, especially for imagery with coarse resolutions (i.e. resolution >1.0 cm or flight height >20 m; Figs 7, 10). These performance differences arise from the nature of the classifiers and the mixed-pixel problem. As the spatial resolution coarsens, a pixel may contain the colour information of both the vegetation and non-vegetation classes, such that the colour of this mixed pixel is correlated with the percentages of the two classes (Lu and Weng 2007). Consequently, in coarse-resolution images with low GCs, vegetation objects are dissolved into non-vegetation objects and vice versa (Myint et al. 2011). The PPS method ignores the impact of mixed pixels and classifies each pixel into a single class (i.e. either vegetation or non-vegetation), which introduces inaccurate estimation of GC, especially for coarse-resolution imagery (Lu and Weng 2007; Myint et al. 2011; Hu et al. 2019). In contrast, the PPS-SPC method decomposes the partial class memberships of the components (i.e. vegetation and non-vegetation classes) in mixed pixels and extracts a continuous proportion (i.e. ranging from 0 to 100%) for each component (Figs 7, 9) (Drzewiecki 2016; Tsutsumida et al. 2016).
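The sub-pixel regression idea can be sketched with a regression tree trained to map pixel colour to vegetation proportion (an illustrative sketch using scikit-learn and synthetic linearly mixed colours, not the paper's actual training pipeline; the 'veg' and 'soil' colours are invented):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Stand-in for degraded reference imagery: each coarse pixel's RGB is a
# linear mix of a 'vegetation' and a 'soil' colour, weighted by the pixel's
# true vegetation proportion (the regression target).
veg = np.array([40.0, 120.0, 30.0])
soil = np.array([150.0, 120.0, 90.0])
train_prop = rng.uniform(0, 1, 500)
train_rgb = train_prop[:, None] * veg + (1 - train_prop[:, None]) * soil
train_rgb += rng.normal(0, 3, train_rgb.shape)      # sensor noise

spc = DecisionTreeRegressor(max_depth=8, random_state=0).fit(train_rgb, train_prop)

# GC of a plot = mean predicted vegetation proportion over its pixels
test_prop = rng.uniform(0, 1, 200)
test_rgb = test_prop[:, None] * veg + (1 - test_prop[:, None]) * soil
gc_estimate = spc.predict(test_rgb).mean() * 100.0  # in %
```

Because the tree predicts means of the training targets, its outputs stay in [0, 1], giving the continuous per-pixel proportions that the PPS-SPC method averages into plot-level GC.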

Plant breeding programs require phenotyping of numerous plots with adequate spatial and temporal resolution to characterise differences in specific traits within a breeding population and their changes over time (Araus and Cairns 2014; Haghighattalab et al. 2016). With the PPS method, accurately estimating GC through the growing season requires low flight heights (<10 m with the onboard camera in this study; Fig. 10) to keep RRMSE below 30%, a threshold normally considered fair performance in model evaluation (Jamieson et al. 1991). However, such low flight heights are normally impractical for UAV surveys in plant breeding, where heights of 20 to 50 m are typical in practice. One option for acquiring fine-resolution imagery from higher flights, and thus accurate estimations, is a camera with an extremely high-resolution sensor and/or a lens with a long focal length. An alternative is to use an advanced classification method, such as ML and DL classifiers (Zhang et al. 2019; Bhatnagar et al. 2020b; Su et al. 2021) or spectral/colour unmixing analysis (Keshava and Mustard 2002; Yan et al. 2019), which can obtain accurate estimations of GC from relatively high flights without extra investment in hardware. The PPS-SPC method has shown that it can obtain fair performance in GC estimation from higher flights (<50 m with the onboard camera in this study; Fig. 10). Some studies have suggested that ML classifiers are more practical than DL classifiers for vegetation segmentation, although the latter may outperform ML classifiers. This is because highly accurate DL classifiers normally require larger training datasets and more computational capacity than ML classifiers, and DL networks also need to be trained for each different site and growth stage (Ayhan et al. 2020; Bhatnagar et al. 2020b). Besides, the fusion of imagery from diverse sources (e.g. combining visible and multispectral imagery) has the potential to improve the segmentation of vegetation and the estimation of GC (Xie et al. 2008; Zhang et al. 2019; Daryaei et al. 2020).

Compared with field trials, applications in precision agriculture require screening of larger areas in each UAV survey at much higher flight heights, at the expense of the spatial resolution of the images. For instance, to cover a 10 ha (200 m × 500 m) field using a UAV platform mounted with a high-resolution camera (e.g. a 20-megapixel sensor: sensor size 13.2 mm × 8.8 mm, focal length 8.6 mm, image resolution 5472 × 3648 pixels) takes ~150 min at a flight height of 20 m AGL with an image spatial resolution of 0.56 cm, but only ~10 min at a height of 100 m AGL with a spatial resolution of 2.8 cm. In precision agriculture, accurate estimation of GC is beneficial for monitoring crop growth status (e.g. crop germination and nitrogen status; Li et al. 2010; Hunt et al. 2018; Jay et al. 2019) and for making proper decisions on side-dress nitrogen rate (van Evert et al. 2012), irrigation (Sharma and Ritchie 2015) and weed control (Peña et al. 2013). Therefore, balancing the accuracy (or spatial resolution) requirement against UAV flight height is important for accurate estimation of phenotypic values when considering the investment of equipment and time on larger-scale farms (Mahlein 2016; Hunt et al. 2018). The PPS-SPC method could facilitate the estimation of GC as it provided accurate and stable performance (i.e. RRMSE = 30.3% ± 4.3%) over a broad range of flight heights (e.g. up to 100 m; Figs 10, 11), and it will be further evaluated with a wider range of GCs under real-world conditions.
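The flight-height/resolution trade-off quoted here follows the standard pinhole-camera ground-sampling-distance relation; with the camera parameters from the example above (sensor width 13.2 mm, focal length 8.6 mm, 5472 pixels across), the quoted resolutions can be reproduced:

```python
def gsd_cm(altitude_m, sensor_width_mm=13.2, focal_mm=8.6, image_width_px=5472):
    """Ground sampling distance (cm/pixel) for a nadir camera:
    GSD = altitude * sensor_width / (focal_length * image_width)."""
    return altitude_m * 100.0 * sensor_width_mm / (focal_mm * image_width_px)

print(round(gsd_cm(20), 2))   # ~0.56 cm/pixel at 20 m AGL
print(round(gsd_cm(100), 1))  # ~2.8 cm/pixel at 100 m AGL
```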

As with any ML classifier, the performance of the PPS-SPC method depends strongly on the training data, which should cover the colour information needed for vegetation classification. In this study, the training datasets and images were visually selected and labelled, which was time-consuming and subject to the experience of the operators. Further work should be devoted to improving the efficiency of acquiring training datasets. In practice, applying the PPS-SPC method requires high-resolution reference images for the acquisition of the training dataset; these can be captured at a low flight height (e.g. 3 m or lower) over different parts of the scene during take-off and/or landing, with the same camera settings (e.g. white balance and focal length), so as to cover the different illumination conditions and GC levels. In this study, training images were selected for each growth stage (i.e. dataset) separately. It may be possible to train a generic model for all growth stages by selecting training images that cover GCs ranging from 0% to ~100% and various light conditions, which would decrease the time needed to train models for different stages and improve the applicability of the model. However, such a generic model needs further evaluation to confirm that it can match the performance of models built for each growth stage. As alternatives to the classification and regression tree models used in this study, other ML models (e.g. random forest) will be evaluated for further improvement of the proposed method.
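One simple way to assemble a training set spanning 0% to ~100% GC, as suggested above, is stratified sampling of candidate reference images by their rough GC level (a hypothetical sketch; the function name and parameters are illustrative, not part of the study's workflow):

```python
import numpy as np

def stratified_training_sample(gc_values, n_per_bin=5, bins=10, seed=0):
    """Pick candidate training images so the sampled GC levels span 0-100%:
    bin candidates by rough GC (%) and draw up to n_per_bin from each bin."""
    rng = np.random.default_rng(seed)
    gc = np.asarray(gc_values, dtype=float)
    edges = np.linspace(0, 100, bins + 1)
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((gc >= lo) & (gc < hi))[0]
        if idx.size:
            chosen.extend(rng.choice(idx, min(n_per_bin, idx.size), replace=False))
    return sorted(chosen)

# e.g. 100 candidate images with rough GC spread over 0-99%
candidates = np.linspace(0, 99, 100)
picked = stratified_training_sample(candidates)   # 5 picks per 10%-wide bin
```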


Conclusion

This study proposed a new method (i.e. PPS-SPC) that couples classification and regression trees to estimate GC from UAV imagery. The performance of the PPS-SPC method was assessed using two image sets (i.e. SISs and RISs) with wide ranges of spatial resolutions, which demonstrated that the GC estimations of the PPS-SPC method agreed with the corresponding reference values (RRMSE less than 10.9% for the SISs and 34.5% for the RISs). In particular, the PPS-SPC method was more accurate and robust than the PPS method in estimating GC from UAV imagery with coarse resolutions (up to 4 cm). This improvement suggests that the spatial coverage of a UAV per unit of time can be increased with a higher flight height while ensuring acceptable accuracy of GC estimation. In summary, the proposed PPS-SPC method can potentially be applied in plant breeding and precision agriculture to balance the accuracy requirement against UAV flight height within the limited battery life and operation time.


Conflicts of interest

The authors declare no conflicts of interest.



Acknowledgements

This work was supported by CSIRO and the field experiment was funded by the project of Grains Research and Development Corporation (Grant no. CSP00179). We thank Dr Christopher Nunn for the internal review of the manuscript.


References

Adams JE, Arkin GF (1977) A Light Interception Method for Measuring Row Crop Ground Cover. Soil Science Society of America Journal 41, 789–792.
A Light Interception Method for Measuring Row Crop Ground Cover.Crossref | GoogleScholarGoogle Scholar |

Araus JL, Cairns JE (2014) Field high-throughput phenotyping: the new crop breeding frontier. Trends in Plant Science 19, 52–61.
Field high-throughput phenotyping: the new crop breeding frontier.Crossref | GoogleScholarGoogle Scholar | 24139902PubMed |

Ashapure A, Jung J, Yeom J, Chang A, Maeda M, Maeda A, Landivar J (2019) A novel framework to detect conventional tillage and no-tillage cropping system effect on cotton growth and development using multi-temporal UAS data. ISPRS Journal of Photogrammetry and Remote Sensing 152, 49–64.
A novel framework to detect conventional tillage and no-tillage cropping system effect on cotton growth and development using multi-temporal UAS data.Crossref | GoogleScholarGoogle Scholar |

Ayhan B, Kwan C, Budavari B, Kwan L, Lu Y, Perez D, Li J, Skarlatos D, Vlachos M (2020) Vegetation Detection Using Deep Learning and Conventional Methods. Remote Sensing 12, 2502
Vegetation Detection Using Deep Learning and Conventional Methods.Crossref | GoogleScholarGoogle Scholar |

Baccini A, Laporte N, Goetz SJ, Sun M, Dong H (2008) A first map of tropical Africa’s above-ground biomass derived from satellite imagery. Environmental Research Letters 3, 045011
A first map of tropical Africa’s above-ground biomass derived from satellite imagery.Crossref | GoogleScholarGoogle Scholar |

Barthelme S (2019) ‘Imager: image processing library based on “Cimg”.’ https://CRAN.R-project.org/package=imager.

Bhatnagar S, Gill L, Ghosh B (2020) Drone Image Segmentation Using Machine and Deep Learning for Mapping Raised Bog Vegetation Communities. Remote Sensing 12, 2602
Drone Image Segmentation Using Machine and Deep Learning for Mapping Raised Bog Vegetation Communities.Crossref | GoogleScholarGoogle Scholar |

Bhatnagar S, Gill L, Regan S, Naughton O, Johnston P, Waldren S, Ghosh B (2020) Mapping vegetation communities inside wetlands using Sentinel-2 imagery in ireland. International Journal of Applied Earth Observation and Geoinformation 88, 102083
Mapping vegetation communities inside wetlands using Sentinel-2 imagery in ireland.Crossref | GoogleScholarGoogle Scholar |

Blaschke T (2010) Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing 65, 2–16.
Object based image analysis for remote sensing.Crossref | GoogleScholarGoogle Scholar |

Bojacá CR, García SJ, Schrevens E (2011) Analysis of Potato Canopy Coverage as Assessed Through Digital Imagery by Nonlinear Mixed Effects Models. Potato Research 54, 237–252.
Analysis of Potato Canopy Coverage as Assessed Through Digital Imagery by Nonlinear Mixed Effects Models.Crossref | GoogleScholarGoogle Scholar |

Breiman L, Friedman J, Stone CJ, Olshen RA (1984) ‘Classification and Regression Trees.’ (Wadsworth International Group)

Campilho A, Garcia B, Toorn HVD, Wijk HV, Campilho A, Scheres B (2006) Time-lapse analysis of stem-cell divisions in the Arabidopsis thaliana root meristem. The Plant Journal 48, 619–627.
Time-lapse analysis of stem-cell divisions in the Arabidopsis thaliana root meristem.Crossref | GoogleScholarGoogle Scholar | 17087761PubMed |

Campillo C, Prieto MH, Daza C, Moñino MJ, García MI (2008) Using Digital Images to Characterize Canopy Coverage and Light Interception in a Processing Tomato Crop. HortScience 43, 1780–1786.
Using Digital Images to Characterize Canopy Coverage and Light Interception in a Processing Tomato Crop.Crossref | GoogleScholarGoogle Scholar |

Chapman SC, Merz T, Chan A, Jackway P, Hrabar S, Dreccer MF, Holland E, Zheng B, Ling TJ, Jimenez-Berni J (2014) Pheno-Copter: a low-altitude, autonomous remote-sensing robotic helicopter for high-throughput field-based phenotyping. Agronomy (Basel) 4, 279–301.
Pheno-Copter: a low-altitude, autonomous remote-sensing robotic helicopter for high-throughput field-based phenotyping.Crossref | GoogleScholarGoogle Scholar |

Daryaei A, Sohrabi H, Atzberger C, Immitzer M (2020) Fine-scale detection of vegetation in semi-arid mountainous areas with focus on riparian landscapes using Sentinel-2 and UAV data. Computers and Electronics in Agriculture 177, 105686
Fine-scale detection of vegetation in semi-arid mountainous areas with focus on riparian landscapes using Sentinel-2 and UAV data.Crossref | GoogleScholarGoogle Scholar |

Deery DM, Rebetzke GJ, Jimenez-Berni JA, James RA, Condon AG, Bovill WD, Hutchinson P, Scarrow J, Davy R, Furbank RT (2016) Methodology for high-throughput field phenotyping of canopy temperature using airborne thermography. Frontiers in Plant Science 7, 1808
Methodology for high-throughput field phenotyping of canopy temperature using airborne thermography.Crossref | GoogleScholarGoogle Scholar | 27999580PubMed |

Drzewiecki W (2016) Comparison of selected machine learning algorithms for sub-pixel imperviousness change assessment. In ‘2016 Baltic Geodetic Congress (BGC Geomatics)’, Gdansk, Poland. pp. 67–72. (IEEE: Gdansk, Poland)

Duan T, Zheng B, Guo W, Ninomiya S, Guo Y, Chapman SC (2017) Comparison of ground cover estimates from experiment plots in cotton, sorghum and sugarcane based on images and ortho-mosaics captured by UAV. Functional Plant Biology 44, 169–183.
Comparison of ground cover estimates from experiment plots in cotton, sorghum and sugarcane based on images and ortho-mosaics captured by UAV.Crossref | GoogleScholarGoogle Scholar |

Gago J, Douthe C, Coopman RE, Gallego PP, Ribas-Carbo M, Flexas J, Escalona J, Medrano H (2015) UAVs challenge to assess water stress for sustainable agriculture. Agricultural Water Management 153, 9–19.
UAVs challenge to assess water stress for sustainable agriculture.Crossref | GoogleScholarGoogle Scholar |

Gebhardt S, Kühbauch W (2007) A new algorithm for automatic Rumex obtusifolius detection in digital images using colour and texture features and the influence of image resolution. Precision Agriculture 8, 1–13.
A new algorithm for automatic Rumex obtusifolius detection in digital images using colour and texture features and the influence of image resolution.Crossref | GoogleScholarGoogle Scholar |

Gonias ED, Oosterhuis DM, Bibi AC, Purcell LC (2012) Estimating light interception by cotton using a digital imaging technique. American Journal of Experimental Agriculture 2, 1–8.
Estimating light interception by cotton using a digital imaging technique.Crossref | GoogleScholarGoogle Scholar |

Großkinsky DK, Svensgaard J, Christensen S, Roitsch T (2015) Plant phenomics and the need for physiological phenotyping across scales to narrow the genotype-to-phenotype knowledge gap. Journal of Experimental Botany 66, 5429–5440.
Plant phenomics and the need for physiological phenotyping across scales to narrow the genotype-to-phenotype knowledge gap.Crossref | GoogleScholarGoogle Scholar | 26163702PubMed |

Gu D, Zhen F, Hannaway DB, Zhu Y, Liu L, Cao W, Tang L (2017) Quantitative classification of rice (Oryza sativa l.) root length and diameter using image analysis. PLoS One 12, e0169968
Quantitative classification of rice (Oryza sativa l.) root length and diameter using image analysis.Crossref | GoogleScholarGoogle Scholar | 28961259PubMed |

Guo W, Rage UK, Ninomiya S (2013) Illumination invariant segmentation of vegetation for time series wheat images based on decision tree model. Computers and Electronics in Agriculture 96, 58–66.
Illumination invariant segmentation of vegetation for time series wheat images based on decision tree model.Crossref | GoogleScholarGoogle Scholar |

Guo W, Zheng B, Duan T, Fukatsu T, Chapman S, Ninomiya S (2017) EasyPCC: benchmark datasets and tools for high-throughput measurement of the plant canopy coverage ratio under field conditions. Sensors 17, 798
EasyPCC: benchmark datasets and tools for high-throughput measurement of the plant canopy coverage ratio under field conditions.Crossref | GoogleScholarGoogle Scholar |

Haghighattalab A, González Pérez L, Mondal S, Singh D, Schinstock D, Rutkoski J, Ortiz-Monasterio I, Singh RP, Goodin D, Poland J (2016) Application of unmanned aerial systems for high throughput phenotyping of large wheat breeding nurseries. Plant Methods 12, 35
Application of unmanned aerial systems for high throughput phenotyping of large wheat breeding nurseries.Crossref | GoogleScholarGoogle Scholar | 27347001PubMed |

Hansen MC, DeFries RS, Townshend JRG, Sohlberg R, Dimiceli C, Carroll M (2002) Towards an operational MODIS continuous field of percent tree cover algorithm: examples using AVHRR and MODIS data. Remote Sensing of Environment 83, 303–319.
Towards an operational MODIS continuous field of percent tree cover algorithm: examples using AVHRR and MODIS data.Crossref | GoogleScholarGoogle Scholar |

Hengl T (2006) Finding the right pixel size. Computers & Geosciences 32, 1283–1298.
Finding the right pixel size.Crossref | GoogleScholarGoogle Scholar |

Hsieh P-F, Lee LC, Chen N-Y (2001) Effect of spatial resolution on classification errors of pure and mixed pixels in remote sensing. IEEE Transactions on Geoscience and Remote Sensing 39, 2657–2663.
Effect of spatial resolution on classification errors of pure and mixed pixels in remote sensing.Crossref | GoogleScholarGoogle Scholar |

Hu P, Chapman SC, Wang X, Potgieter A, Duan T, Jordan D, Guo Y, Zheng B (2018) Estimation of plant height using a high throughput phenotyping platform based on unmanned aerial vehicle and self-calibration: example for sorghum breeding. European Journal of Agronomy 95, 24–32.
Estimation of plant height using a high throughput phenotyping platform based on unmanned aerial vehicle and self-calibration: example for sorghum breeding.Crossref | GoogleScholarGoogle Scholar |

Hu P, Guo W, Chapman SC, Guo Y, Zheng B (2019) Pixel size of aerial imagery constrains the applications of unmanned aerial vehicle in crop breeding. ISPRS Journal of Photogrammetry and Remote Sensing 154, 1–9.
Pixel size of aerial imagery constrains the applications of unmanned aerial vehicle in crop breeding.Crossref | GoogleScholarGoogle Scholar |

Hunt ER, Horneck DA, Spinelli CB, Turner RW, Bruce AE, Gadler DJ, Brungardt JJ, Hamm PB (2018) Monitoring nitrogen status of potatoes using small unmanned aerial vehicles. Precision Agriculture 19, 314–333.
Monitoring nitrogen status of potatoes using small unmanned aerial vehicles.Crossref | GoogleScholarGoogle Scholar |

Jamieson PD, Porter JR, Wilson DR (1991) A test of the computer simulation model ARCWHEAT1 on wheat crops grown in New Zealand. Field Crops Research 27, 337–350.
A test of the computer simulation model ARCWHEAT1 on wheat crops grown in New Zealand.Crossref | GoogleScholarGoogle Scholar |

Jay S, Baret F, Dutartre D, Malatesta G, Héno S, Comar A, Weiss M, Maupas F (2019) Exploiting the centimeter resolution of UAV multispectral imagery to improve remote-sensing estimates of canopy structure and biochemistry in sugar beet crops. Remote Sensing of Environment 231, 110898
Exploiting the centimeter resolution of UAV multispectral imagery to improve remote-sensing estimates of canopy structure and biochemistry in sugar beet crops.Crossref | GoogleScholarGoogle Scholar |

Jin X, Liu S, Baret F, Hemerlé M, Comar A (2017) Estimates of plant density of wheat crops at emergence from very low altitude UAV imagery. Remote Sensing of Environment 198, 105–114.
Estimates of plant density of wheat crops at emergence from very low altitude UAV imagery.Crossref | GoogleScholarGoogle Scholar |

Jones HG, Sirault XRR (2014) Scaling of thermal images at different spatial resolution: the mixed pixel problem. Agronomy (Basel) 4, 380–396.
Scaling of thermal images at different spatial resolution: the mixed pixel problem.Crossref | GoogleScholarGoogle Scholar |

Keshava N, Mustard JF (2002) Spectral unmixing. IEEE Signal Processing Magazine 19, 44–57.
Spectral unmixing.Crossref | GoogleScholarGoogle Scholar |

Kipp S, Mistele B, Baresel P, Schmidhalter U (2014) High-throughput phenotyping early plant vigour of winter wheat. European Journal of Agronomy 52, 271–278.
High-throughput phenotyping early plant vigour of winter wheat.Crossref | GoogleScholarGoogle Scholar |

Laliberte AS, Rango A, Herrick JE, Fredrickson EL, Burkett L (2007) An object-based image analysis approach for determining fractional cover of senescent and green vegetation with digital plot photography. Journal of Arid Environments 69, 1–14.
An object-based image analysis approach for determining fractional cover of senescent and green vegetation with digital plot photography.Crossref | GoogleScholarGoogle Scholar |

Lati RN, Filin S, Eizenberg H (2011) Robust Methods for Measurement of Leaf-Cover Area and Biomass from Image Data. Weed Science 59, 276–284.
Robust Methods for Measurement of Leaf-Cover Area and Biomass from Image Data.Crossref | GoogleScholarGoogle Scholar |

Lee K-J, Lee B-W (2013) Estimation of rice growth and nitrogen nutrition status using color digital camera image analysis. European Journal of Agronomy 48, 57–65.
Estimation of rice growth and nitrogen nutrition status using color digital camera image analysis.Crossref | GoogleScholarGoogle Scholar |

Li Y, Chen D, Walker CN, Angus JF (2010) Estimating the nitrogen status of crops using a digital camera. Field Crops Research 118, 221–227.
Estimating the nitrogen status of crops using a digital camera.Crossref | GoogleScholarGoogle Scholar |

Liebisch F, Kirchgessner N, Schneider D, Walter A, Hund A (2015) Remote, aerial phenotyping of maize traits with a mobile multi-sensor approach. Plant Methods 11, 9
Remote, aerial phenotyping of maize traits with a mobile multi-sensor approach.Crossref | GoogleScholarGoogle Scholar | 25793008PubMed |

Lu D, Weng Q (2007) A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing 28, 823–870.
A survey of image classification methods and techniques for improving classification performance.Crossref | GoogleScholarGoogle Scholar |

Lu N, Zhou J, Han Z, Li D, Cao Q, Yao X, Tian Y, Zhu Y, Cao W, Cheng T (2019) Improved estimation of aboveground biomass in wheat from RGB imagery and point cloud data acquired with a low-cost unmanned aerial vehicle system. Plant Methods 15, 17
Improved estimation of aboveground biomass in wheat from RGB imagery and point cloud data acquired with a low-cost unmanned aerial vehicle system.Crossref | GoogleScholarGoogle Scholar | 30828356PubMed |

Mahlein A-K (2016) Plant disease detection by imaging sensors – parallels and specific demands for precision agriculture and plant phenotyping. Plant Disease 100, 241–251.
Plant disease detection by imaging sensors – parallels and specific demands for precision agriculture and plant phenotyping.Crossref | GoogleScholarGoogle Scholar | 30694129PubMed |

Martínez J, Egea G, Agüera J, Pérez-Ruiz M (2017) A cost-effective canopy temperature measurement system for precision agriculture: a case study on sugar beet. Precision Agriculture 18, 95–110.
A cost-effective canopy temperature measurement system for precision agriculture: a case study on sugar beet.Crossref | GoogleScholarGoogle Scholar |

Mullan DJ, Reynolds MP (2010) Quantifying genetic effects of ground cover on soil water evaporation using digital imaging. Functional Plant Biology 37, 703–712.
Quantifying genetic effects of ground cover on soil water evaporation using digital imaging.Crossref | GoogleScholarGoogle Scholar |

Myint SW, Gober P, Brazel A, Grossman-Clarke S, Weng Q (2011) Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sensing of Environment 115, 1145–1161.
Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery.Crossref | GoogleScholarGoogle Scholar |

Nielsen DC, Miceli-Garcia JJ, Lyon DJ (2012) Canopy Cover and Leaf Area Index Relationships for Wheat, Triticale, and Corn. Agronomy Journal 104, 1569–1573.
Canopy Cover and Leaf Area Index Relationships for Wheat, Triticale, and Corn.Crossref | GoogleScholarGoogle Scholar |

Pan G, Li F, Sun G (2007) Digital camera based measurement of crop cover for wheat yield prediction. In ‘2007 IEEE International Geoscience and Remote Sensing Symposium’, Barcelona, Spain. pp. 797–800. (IEEE: Barcelona, Spain)

Pauli D, Chapman SC, Bart R, Topp CN, Lawrence-Dill CJ, Poland J, Gore MA (2016) The Quest for Understanding Phenotypic Variation via Integrated Approaches in the Field Environment. Plant Physiology 172, 622–634.
The Quest for Understanding Phenotypic Variation via Integrated Approaches in the Field Environment.Crossref | GoogleScholarGoogle Scholar | 27482076PubMed |

Peña JM, Torres-Sánchez J, de Castro AI, Kelly M, López-Granados F (2013) Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images. PLoS One 8, e77151
Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images.Crossref | GoogleScholarGoogle Scholar | 24146963PubMed |

Peña JM, Torres-Sánchez J, Serrano-Pérez A, de Castro AI, López-Granados F (2015) Quantifying efficacy and limits of unmanned aerial vehicle (UAV) technology for weed seedling detection as affected by sensor resolution. Sensors 15, 5609–5626.
Quantifying efficacy and limits of unmanned aerial vehicle (UAV) technology for weed seedling detection as affected by sensor resolution.Crossref | GoogleScholarGoogle Scholar | 25756867PubMed |

Prieto I, Stokes A, Roumet C (2016) Root functional parameters predict fine root decomposability at the community level. Journal of Ecology 104, 725–733.
Root functional parameters predict fine root decomposability at the community level.Crossref | GoogleScholarGoogle Scholar |

Purcell LC (2000) Soybean canopy coverage and light interception measurements using digital imagery. Crop Science 40, 834–837.
Soybean canopy coverage and light interception measurements using digital imagery.Crossref | GoogleScholarGoogle Scholar |

R Core Team (2019) ‘R: A language and environment for statistical computing.’ (R Foundation for Statistical Computing: Vienna, Austria) https://www.R-project.org/.

Ranđelović P, Đorđević V, Milić S, Balešević-Tubić S, Petrović K, Miladinović J, Đukić V (2020) Prediction of Soybean Plant Density Using a Machine Learning Model and Vegetation Indices Extracted from RGB Images Taken with a UAV. Agronomy (Basel) 10, 1108
Prediction of Soybean Plant Density Using a Machine Learning Model and Vegetation Indices Extracted from RGB Images Taken with a UAV.Crossref | GoogleScholarGoogle Scholar |

Sankaran S, Khot LR, Espinoza CZ, Jarolmasjed S, Sathuvalli VR, Vandemark GJ, Miklas PN, Carter AH, Pumphrey MO, Knowles NR, Pavek MJ (2015) Low-altitude, high-resolution aerial imaging systems for row and field crop phenotyping: A review. European Journal of Agronomy 70, 112–123.
Low-altitude, high-resolution aerial imaging systems for row and field crop phenotyping: A review.Crossref | GoogleScholarGoogle Scholar |

Sharma B, Ritchie GL (2015) High-throughput phenotyping of cotton in multiple irrigation environments. Crop Science 55, 958–969.
High-throughput phenotyping of cotton in multiple irrigation environments.Crossref | GoogleScholarGoogle Scholar |

Shi Y, Thomasson JA, Murray SC, Pugh NA, Rooney WL, Shafian S, Rajan N, Rouze G, Morgan CLS, Neely HL, Rana A, Bagavathiannan MV, Henrickson J, Bowden E, Valasek J, Olsenholler J, Bishop MP, Sheridan R, Putman EB, Popescu S, Burks T, Cope D, Ibrahim A, McCutchen BF, Baltensperger DD, Jr RVA, Vidrine M, Yang C (2016) Unmanned Aerial Vehicles for High-Throughput Phenotyping and Agronomic Research. PLoS One 11, e0159781
Unmanned Aerial Vehicles for High-Throughput Phenotyping and Agronomic Research.Crossref | GoogleScholarGoogle Scholar | 28033334PubMed |

Su J, Yi D, Su B, Mi Z, Liu C, Hu X, Xu X, Guo L, Chen W-H (2021) Aerial Visual Perception in Smart Farming: Field Study of Wheat Yellow Rust Monitoring. IEEE Transactions on Industrial Informatics 17, 2242–2249.
Aerial Visual Perception in Smart Farming: Field Study of Wheat Yellow Rust Monitoring.Crossref | GoogleScholarGoogle Scholar |

Suzuki T, Ohta T, Izumi Y, Kanyomeka L, Mwandemele O, Sakagami J-I, Yamane K, Iijima M (2013) Role of Canopy Coverage in Water Use Efficiency of Lowland Rice in Early Growth Period in Semi-Arid Region. Plant Production Science 16, 12–23.
Role of Canopy Coverage in Water Use Efficiency of Lowland Rice in Early Growth Period in Semi-Arid Region.Crossref | GoogleScholarGoogle Scholar |

Torres-Sánchez J, Peña JM, de Castro AI, López-Granados F (2014) Multi-temporal mapping of the vegetation fraction in early-season wheat fields using images from UAV. Computers and Electronics in Agriculture 103, 104–113.
Multi-temporal mapping of the vegetation fraction in early-season wheat fields using images from UAV.Crossref | GoogleScholarGoogle Scholar |

Torres-Sánchez J, López-Granados F, Peña JM (2015) An automatic object-based method for optimal thresholding in UAV images: Application for vegetation detection in herbaceous crops. Computers and Electronics in Agriculture 114, 43–52.
An automatic object-based method for optimal thresholding in UAV images: Application for vegetation detection in herbaceous crops.Crossref | GoogleScholarGoogle Scholar |

Tsutsumida N, Comber A, Barrett K, Saizen I, Rustiadi E (2016) Sub-pixel classification of MODIS EVI for annual mappings of impervious surface areas. Remote Sensing 8, 143

van Evert FK, Booij R, Jukema JN, ten Berge HFM, Uenk D, Meurs EJJ, van Geel WCA, Wijnholds KH, Slabbekoorn JJ (2012) Using crop reflectance to determine sidedress N rate in potato saves N and maintains yield. European Journal of Agronomy 43, 58–67.

Waldner F, Defourny P (2017) Where can pixel counting area estimates meet user-defined accuracy requirements? International Journal of Applied Earth Observation and Geoinformation 60, 1–10.

Walter A, Liebisch F, Hund A (2015) Plant phenotyping: from bean weighing to image analysis. Plant Methods 11, 14.

Xie Y, Sha Z, Yu M (2008) Remote sensing imagery in vegetation mapping: a review. Journal of Plant Ecology 1, 9–23.

Xu M, Watanachaturaporn P, Varshney PK, Arora MK (2005) Decision tree regression for soft classification of remote sensing data. Remote Sensing of Environment 97, 322–336.

Yan G, Li L, Coy A, Mu X, Chen S, Xie D, Zhang W, Shen Q, Zhou H (2019) Improving the estimation of fractional vegetation cover from UAV RGB imagery by colour unmixing. ISPRS Journal of Photogrammetry and Remote Sensing 158, 23–34.

Yang G, Liu J, Zhao C, Li Z, Huang Y, Yu H, Xu B, Yang X, Zhu D, Zhang X, Zhang R, Feng H, Zhao X, Li Z, Li H, Yang H (2017) Unmanned aerial vehicle remote sensing for field-based crop phenotyping: current status and perspectives. Frontiers in Plant Science 8, 1111.

Yang M-D, Tseng H-H, Hsu Y-C, Tsai HP (2020) Semantic segmentation using deep learning with vegetation indices for rice lodging identification in multi-date UAV visible images. Remote Sensing 12, 633.

Yu Q, Gong P, Clinton N, Biging G, Kelly M, Schirokauer D (2006) Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogrammetric Engineering and Remote Sensing 72, 799–811.

Zhang T, Su J, Liu C, Chen W-H (2019) Bayesian calibration of AquaCrop model for winter wheat by assimilating UAV multi-spectral images. Computers and Electronics in Agriculture 167, 105052.