Scientia Agricultura Sinica ›› 2018, Vol. 51 ›› Issue (19): 3673-3682.doi: 10.3864/j.issn.0578-1752.2018.19.005

• TILLAGE & CULTIVATION·PHYSIOLOGY & BIOCHEMISTRY·AGRICULTURE INFORMATION TECHNOLOGY •

Real-Time Pixel-Wise Classification of Agricultural Images Based on Depth-Wise Separable Convolution

LIU QingFei, ZHANG HongLi, WANG YanLing   

  1. School of Electrical Engineering, Xinjiang University, Urumqi 830047
  • Received: 2018-04-11    Online: 2018-10-01    Published: 2018-10-01

Abstract: 【Objective】To improve the accuracy and real-time performance of crop and weed recognition, field color images of seedling-stage sugar beet were taken as the research object, and a pixel-wise classification method based on depthwise separable convolution was proposed.【Method】Pixels in the field color images of seedling sugar beet were manually labeled into three categories (crop, weed, and soil), and the annotation for each single category was stored in a separate image channel for training and testing. First, a depthwise separable convolutional neural network with an encoder-decoder structure was built, in which encoder and decoder features were fused at multiple scales: the encoder located the pixels and the decoder produced the pixel classification. To address the imbalanced coverage of the classification categories, single-channel annotation was adopted. To control the number of network parameters, a width multiplier was used to scale the number of pointwise convolution kernels, and the network was run under different input resolutions; the model was further tested to examine its real-time performance. Finally, random data augmentation was used to expand the data set; 80% of the data set was used to train the network parameters and 20% to test network performance.【Result】(1) Compared with existing pixel-wise classification methods, the proposed method achieved higher classification accuracy. The average accuracy of the SegNet method was 90.06%, that of the U-Net method was 92.06%, that of the network trained with three-channel annotation was 92.70%, and that of the network trained with single-channel annotation was 94.99%.
(2) By computing per-category pixel-wise classification indexes for the different methods, the advantages of training with single-channel annotation in handling imbalanced category coverage and scarce training samples were demonstrated. For pixel-wise classification of weeds, the accuracy of the SegNet method was 18.39%, that of the U-Net method was 18.33%, that of the network trained with three-channel annotation was 22.87%, and that of the network trained with single-channel annotation was 41.94%. (3) The parameter size of the network model could be effectively controlled by the width multiplier. With a width multiplier of 1, the model had 6.768 million parameters; with a width multiplier of 0.1, this was reduced to 77.2 thousand, i.e., 1.14% of the original parameter scale, while the classification accuracy for soil, weeds, and crops dropped by only 2.81%, 2.78%, and 3.70%, respectively. The parameter scale could be reduced further according to the accuracy requirement. (4) The real-time processing capability of the network under the combined action of input resolution and width multiplier was examined. With GPU hardware acceleration, the network recognized all three categories simultaneously at 20 fps and a single category at 60 fps, which satisfies the real-time requirements of agricultural weeding systems and crop monitoring systems.【Conclusion】The proposed pixel-wise classification method based on depthwise separable convolution can effectively classify soil, weeds, and crops in agricultural images, and can perform single-category pixel-wise classification in real time to meet the needs of practical systems.
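The efficiency argument above rests on the standard factorization: a depthwise separable layer replaces one k×k convolution with a per-channel (depthwise) filter plus a 1×1 (pointwise) channel mixer, and the width multiplier thins the pointwise kernels so parameters shrink roughly with its square. The sketch below illustrates this arithmetic and the operation itself in plain NumPy; the function names and channel sizes are illustrative assumptions, not the paper's actual layer configuration or code.

```python
import numpy as np

def standard_conv_params(k, c_in, c_out):
    """Parameter count of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out, alpha=1.0):
    """Parameter count of a depthwise separable convolution.

    The width multiplier alpha thins both channel counts, so the
    dominant pointwise (1 x 1) term shrinks roughly with alpha squared.
    """
    c_in_a = max(1, int(alpha * c_in))
    c_out_a = max(1, int(alpha * c_out))
    return k * k * c_in_a + c_in_a * c_out_a

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    """Minimal depthwise separable conv, 'same' padding, stride 1.

    x: (H, W, C_in); dw_kernels: (k, k, C_in); pw_kernels: (C_in, C_out).
    """
    h, w, _ = x.shape
    k = dw_kernels.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    dw = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            # Depthwise step: each channel is filtered by its own k x k kernel.
            dw[i, j, :] = np.sum(xp[i:i + k, j:j + k, :] * dw_kernels, axis=(0, 1))
    # Pointwise step: a 1 x 1 convolution mixes the channels.
    return dw @ pw_kernels  # (H, W, C_out)
```

For example, a 3×3 layer with 256 input and output channels needs 589,824 parameters as a standard convolution but only 3·3·256 + 256·256 = 67,840 as a separable one; with a width multiplier of 0.1 (25 channels) the separable count falls to 850, mirroring the order-of-magnitude reduction reported in result (3).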

Key words: crop and weed recognition, deep learning, convolutional neural networks, pixel-wise classification, semantic segmentation

[1]    齐月, 李俊生, 闫冰, 邓贞贞, 付刚. 化学除草剂对农田生态系统野生植物多样性的影响. 生物多样性, 2016, 24(2): 228-236.
QI Y, LI J S, YAN B, DENG Z Z, FU G. Impact of herbicides on wild plant diversity in agro-ecosystems: A review. Biodiversity Science, 2016, 24(2): 228-236. (in Chinese)
[2]    张小龙, 谢正春, 张念生, 曹成茂. 豌豆苗期田间杂草识别与变量喷洒控制系统. 农业机械学报, 2012, 43(11): 220-225, 73.
ZHANG X L, XIE Z C, ZHANG N S, CAO C M. Weed recognition from pea seedling images and variable spraying control system. Transactions of the Chinese Society for Agricultural Machinery, 2012, 43(11): 220-225, 73. (in Chinese)
[3]    徐艳蕾, 包佳林, 付大平, 朱炽阳. 多喷头组合变量喷药系统的设计与试验. 农业工程学报, 2016, 32(17): 47-54.
XU Y L, BAO J L, FU D P, ZHU Z Y. Design and experiment of variable spraying system based on multiple combined nozzles. Transactions of the Chinese Society of Agricultural Engineering, 2016, 32(17): 47-54. (in Chinese)
[4]    魏全全, 李岚涛, 任涛, 王振, 王少华, 李小坤, 丛日环, 鲁剑巍. 基于数字图像技术的冬油菜氮素营养诊断. 中国农业科学, 2015, 48(19): 3877-3886.
WEI Q Q, LI L T, REN T, WANG Z, WANG S H, LI X K, CONG R H, LU J W. Diagnosing nitrogen nutrition status of winter rapeseed via digital image processing technique. Scientia Agricultura Sinica, 2015, 48(19): 3877-3886. (in Chinese)
[5]    刘涛, 仲晓春, 孙成明, 郭文善, 陈瑛瑛, 孙娟. 基于计算机视觉的水稻叶部病害识别研究. 中国农业科学, 2014, 47(4): 664-674.
LIU T, ZHONG X C, SUN C M, GUO W S, CHEN Y Y, SUN J. Recognition of rice leaf diseases based on computer vision. Scientia Agricultura Sinica, 2014, 47(4): 664-674. (in Chinese)
[6]    唐俊, 邓立苗, 陈辉, 栾涛, 马文杰. 基于机器视觉的玉米叶片透射图像特征识别研究. 中国农业科学, 2014, 47(3): 431-440.
TANG J, DENG L M, CHEN H, LUAN T, MA W J. Research on maize leaf recognition of characteristics from transmission image based on machine vision. Scientia Agricultura Sinica, 2014, 47(3): 431-440. (in Chinese)
[7]    孟庆宽, 何洁, 仇瑞承, 马晓丹, 司永胜, 张漫, 刘刚. 基于机器视觉的自然环境下作物行识别与导航线提取. 光学学报, 2014, 34(7): 172-178.
MENG Q K, HE J, QIU R C, MA X D, SI Y S, ZHANG M, LIU G. Crop recognition and navigation line detection in natural environment based on machine vision. Acta Optica Sinica, 2014, 34(7): 172-178. (in Chinese)
[8]    刘哲, 李智晓, 张延宽, 张超, 黄健熙, 朱德海. 基于时序EVI决策树分类与高分纹理的制种玉米识别. 农业机械学报, 2015, 46(10): 321-327.
LIU Z, LI Z X, ZHANG Y K, ZHANG C, HUANG J X, ZHU D H. Seed maize identification based on time-series EVI decision tree classification and high resolution remote sensing texture analysis. Transactions of the Chinese Society for Agricultural Machinery, 2015, 46(10): 321-327. (in Chinese)
[9]    翟志强, 朱忠祥, 杜岳峰, 张硕, 毛恩荣. 基于Census变换的双目视觉作物行识别方法. 农业工程学报, 2016, 32(11): 205-213.
ZHAI Z Q, ZHU Z X, DU Y F, ZHANG S, MAO E R. Method for detecting crop rows based on binocular vision with Census transformation. Transactions of the Chinese Society of Agricultural Engineering, 2016, 32(11): 205-213. (in Chinese)
[10]   王璨, 李志伟. 利用融合高度与单目图像特征的支持向量机模型识别杂草. 农业工程学报, 2016, 32(15): 165-174.
WANG C, LI Z W. Weed recognition using SVM model with fusion height and monocular image features. Transactions of the Chinese Society of Agricultural Engineering, 2016, 32(15): 165-174. (in Chinese)
[11]   陈亚军, 赵博, 李树, 刘磊, 苑严伟, 张延立. 基于多特征的杂草逆向定位方法与试验. 农业机械学报, 2015, 46(6): 257-262.
CHEN Y J, ZHAO B, LI S, LIU L, YUAN Y W, ZHANG Y L. Weed reverse positioning method and experiment based on multi-feature. Transactions of the Chinese Society for Agricultural Machinery, 2015, 46(6): 257-262. (in Chinese)
[12]   赵川源, 何东健, 乔永亮. 基于多光谱图像和数据挖掘的多特征杂草识别方法. 农业工程学报, 2013, 29(2): 192-198.
ZHAO C Y, HE D J, QIAO Y L. Identification method of multi-feature weed based on multi-spectral images and data mining. Transactions of the Chinese Society of Agricultural Engineering, 2013, 29(2): 192-198. (in Chinese)
[13]   王璨, 武新慧, 李志伟. 基于卷积神经网络提取多尺度分层特征识别玉米杂草. 农业工程学报, 2018, 34(5): 144-151.
WANG C, WU X H, LI Z W. Recognition of maize and weed based on multi-scale hierarchical features extracted by convolutional neural network. Transactions of the Chinese Society of Agricultural Engineering, 2018, 34(5): 144-151. (in Chinese)
[14]   McCool C, Perez T, Upcroft B. Mixtures of lightweight deep convolutional neural networks: applied to agricultural robotics. IEEE Robotics & Automation Letters, 2017, 2(3): 1344-1351.
[15]   Haug S, Michaels A, Biber P, Ostermann J. Plant classification system for crop/weed discrimination without segmentation // IEEE Winter Conference on Applications of Computer Vision. IEEE, 2014: 1142-1149.
[16]   Potena C, Nardi D, Pretto A. Fast and accurate crop and weed identification with summarized train sets for precision agriculture // International Conference on Intelligent Autonomous Systems. Springer, 2016: 105-121.
[17]   Milioto A, Lottes P, Stachniss C. Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in CNNs // IEEE International Conference on Robotics and Automation. IEEE, 2018: 1-6.
[18]   Chebrolu N, Lottes P, Schaefer A, Winterhalter W, Burgard W. Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields. International Journal of Robotics Research, 2017, 36(10): 1045-1052.
[19]   周飞燕, 金林鹏, 董军. 卷积神经网络研究综述. 计算机学报, 2017, 40(6): 1229-1251.
ZHOU F Y, JIN L P, DONG J. Review of convolutional neural network. Chinese Journal of Computers, 2017, 40(6): 1229-1251. (in Chinese)
[20]   Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation // IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2015: 3431-3440.
[21]   Garcia-Garcia A, Orts-Escolano S, Oprea S, Villena-Martinez V, Garcia-Rodriguez J. A review on deep learning techniques applied to semantic segmentation. (2017-04-22) [2018-09-26]. https://arxiv.org/abs/1704.06857.
[22]   Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation // International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015: 234-241.
[23]   Howard A G, Zhu M, Chen B, Kalenichenko D, Wang W J, Weyand T, Andreetto M, Adam H. MobileNets: Efficient convolutional neural networks for mobile vision applications. (2017-04-17) [2018-09-26]. https://arxiv.org/abs/1704.04861.
[24]   Ioffe S, Szegedy C. Batch Normalization: Accelerating deep network training by reducing internal covariate shift // Proceedings of the 32nd International Conference on Machine Learning. IMLS, 2015: 448-456.
[25]   Chollet F. Xception: Deep learning with depthwise separable convolutions // IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2017: 1800-1807.
[26]   Zeiler M D, Taylor G W, Fergus R. Adaptive deconvolutional networks for mid and high level feature learning // International Conference on Computer Vision. IEEE Computer Society, 2011: 2018-2025.
[27]   Chollet F. Keras. GitHub repository. (2017-03-15) [2018-09-26]. https://github.com/fchollet/keras.
[28]   Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for scene segmentation. (2016-10-10) [2018-09-26]. https://arxiv.org/abs/1511.00561.