Abstract:
In agriculture, diseases caused by biotic and abiotic stresses account for about 22% of crop production loss. For farmers, it is crucial to identify these stresses in their early stages, which is difficult to do with the naked eye alone. Computer vision technologies can identify early disease patterns and clusters, and in recent years image processing-based deep learning has proven useful for identifying stress in maize plant leaves. This work uses both primary and secondary datasets; the PlantVillage dataset is compiled in this study for segmentation-based object detection. The dataset contains 100 images of Common Rust, 50 of Southern Rust, 70 of Gray Leaf Spot, 30 of MLB, and 30 of Turcicum Leaf Blight, along with 90 images of healthy leaves.
The model is trained on the labelled, enhanced, and augmented data. Diseased regions of the maize plant are segmented using the proposed P-CNN (PSPNet + CNN) model, in which PSPNet and a basic CNN are combined within semantic segmentation to improve object detection. The outputs of the proposed model are compared with those of YOLO+CNN and VGG16+CNN models in terms of Recall, Precision, Intersection over Union (IoU), Accuracy, and Mean Intersection over Union (mIoU). The proposed model processes 14,803 images and performs its image processing operations in 30 ns, which is faster than other comparable models. The proposed P-CNN model achieves an accuracy of 99.85%, significantly higher than that of other modified segmentation methods. In this work, single and multiple leaf diseases are detected, identified, and classified using the semantic segmentation data.
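
For reference, the IoU and mIoU metrics used for the comparison can be computed over integer-labelled segmentation masks as in the following minimal Python/NumPy sketch. This is an illustrative example, not the authors' implementation; the class count (background plus six leaf conditions) and the mask format are assumptions.

# Illustrative sketch of per-class IoU and mIoU for semantic segmentation.
# Not the authors' implementation; num_classes = 7 and integer-label masks
# are assumptions made for this example.
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Return the IoU of each class between two integer label masks."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        # Classes absent from both masks are skipped (IoU undefined).
        ious.append(np.nan if union == 0 else intersection / union)
    return ious

def mean_iou(pred, target, num_classes):
    """mIoU: mean of the defined per-class IoU values."""
    return float(np.nanmean(per_class_iou(pred, target, num_classes)))

# Example usage with random 256x256 masks and 7 assumed classes.
pred = np.random.randint(0, 7, size=(256, 256))
target = np.random.randint(0, 7, size=(256, 256))
print(mean_iou(pred, target, num_classes=7))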