Jenn Hoskins
9th September, 2025
Key Findings
- Researchers developed a new fruit detector, YOLOcF, and a corresponding dataset, CFruit, to improve fruit detection in real-world agricultural settings
- YOLOcF achieved accuracy comparable to state-of-the-art models, with slightly lower mAP than YOLOv9t but significantly faster processing at 323 fps
- YOLOcF is lightweight, requiring less computational power than most other models, making it suitable for use on mobile devices and enabling faster training times
The core problem the study tackles is the need for a fruit detection system that performs well in real-world agricultural settings while remaining practical to deploy on resource-limited devices such as mobile phones. Existing fruit detection models often struggle with the complexities of natural environments, and they typically demand significant computational power and time both to train and to run.
To address this, the team constructed the CFruit image dataset, providing a large and diverse collection of fruit images for training and evaluating detection models. Crucially, the researchers then designed YOLOcF, an improved version of the YOLOv5 object detection architecture, which is an “anchor-based” system. Anchor-based systems use pre-defined shapes and sizes (anchors) as starting points for predicting object locations and dimensions. This contrasts with anchor-free methods which directly predict object properties without relying on predefined anchors.
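To make the anchor's role as a "starting point" concrete, here is a minimal sketch of the YOLOv5-style decoding step that turns a raw network output into a pixel-space box. The function name, example values, and NumPy framing are illustrative assumptions, not code from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_anchor_box(raw, cell_xy, anchor_wh, stride):
    """Decode one raw prediction into an absolute box, YOLOv5-style.

    raw       : network outputs (tx, ty, tw, th) for one anchor slot
    cell_xy   : (cx, cy) indices of the grid cell holding the prediction
    anchor_wh : (aw, ah) pre-defined anchor width/height in pixels
    stride    : input pixels covered by one grid cell (e.g. 8, 16, 32)
    """
    tx, ty, tw, th = raw
    cx, cy = cell_xy
    aw, ah = anchor_wh
    # Center: a bounded offset from the grid cell, rescaled to pixels.
    bx = (2.0 * sigmoid(tx) - 0.5 + cx) * stride
    by = (2.0 * sigmoid(ty) - 0.5 + cy) * stride
    # Size: the anchor scaled by a bounded factor (0 to 4x), so the
    # pre-defined anchor is literally the starting point for the box.
    bw = (2.0 * sigmoid(tw)) ** 2 * aw
    bh = (2.0 * sigmoid(th)) ** 2 * ah
    return bx, by, bw, bh

# Hypothetical example: a 60x80 px anchor at grid cell (12, 7), stride 16.
print(decode_anchor_box((0.2, -0.1, 0.05, 0.3), (12, 7), (60, 80), 16))
```

Because the predicted width and height are bounded multiples of the anchor, an anchor set tuned to typical fruit sizes gives the network an easy starting point, which is the practical appeal of the anchor-based design.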
YOLOcF was then rigorously compared to several other state-of-the-art YOLO variants: YOLOv5n, YOLOv7t, YOLOv8n, YOLOv9t, YOLOv10n, and YOLOv11n. The performance of these models was assessed on three key metrics: accuracy (measured as mean Average Precision, or mAP), speed (measured in frames per second, or fps), and computational cost (measured in parameter count and GFLOPs, billions of floating-point operations per forward pass).
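As a rough illustration of how the speed and size figures are typically obtained, the sketch below times repeated forward passes and counts parameters in PyTorch. The function, its defaults, and the measurement protocol are assumptions for illustration; the paper's exact benchmark setup (hardware, resolution, precision) will change the absolute numbers, and GFLOPs are usually estimated with a separate profiling tool, omitted here.

```python
import time
import torch

def benchmark(model, img_size=640, n_warmup=10, n_runs=100, device=None):
    """Rough single-image fps and parameter count for a detector (a sketch)."""
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device).eval()
    x = torch.zeros(1, 3, img_size, img_size, device=device)
    n_params = sum(p.numel() for p in model.parameters())
    with torch.no_grad():
        for _ in range(n_warmup):        # warm up kernels and caches
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()     # start timing from an idle GPU
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()     # wait for queued GPU work to finish
        elapsed = time.perf_counter() - start
    return n_runs / elapsed, n_params    # fps, total parameters
```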
The results showed that YOLOcF achieved a higher mAP than all other YOLO variants except for YOLOv9t, with improvements ranging from 0.7% to 1.3% over the other models. While YOLOv9t had slightly better accuracy, YOLOcF significantly outperformed it in terms of speed, reaching 323 fps, the highest of all models tested. Importantly, YOLOcF’s computational cost, as measured by parameters and GFLOPs, was lower than most other models, making it more suitable for deployment on devices with limited processing power.
These findings build on previous work in the field. For example, [2] highlighted the challenges of applying deep learning to crop production, particularly the need for large, annotated datasets. The CFruit dataset directly addresses this need, providing a valuable resource for training and benchmarking fruit detection models. Similarly, [3] demonstrated the potential of the YOLOv3 architecture for tomato detection, specifically through modifications like using circular bounding boxes to improve localization accuracy. The current study extends this work by using the more recent YOLOv5 as a base and further optimizing it for fruit detection.
The study also assessed the robustness of the models by analyzing their performance in counting fruit. YOLOcF displayed the highest R-squared (R²) value, 0.422, indicating that it was the most reliable of the models tested for counting fruit, even in challenging conditions. This robustness is a key advantage for applications like yield estimation.
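For readers less familiar with the counting metric, R² measures how closely per-image predicted counts track the ground truth, with 1.0 a perfect fit and 0.0 no better than always guessing the mean count. A minimal sketch, using made-up counts rather than the study's data:

```python
import numpy as np

def r_squared(true_counts, pred_counts):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y = np.asarray(true_counts, dtype=float)
    y_hat = np.asarray(pred_counts, dtype=float)
    ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total variance around the mean
    return 1.0 - ss_res / ss_tot

# Hypothetical per-image fruit counts, for illustration only.
print(r_squared([12, 7, 30, 18], [10, 9, 26, 20]))
```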
In essence, YOLOcF represents a step forward in fruit detection technology. Its combination of high accuracy, speed, and low computational cost makes it a practical solution for a wide range of agricultural applications, and its robust performance ensures reliable results in real-world settings. The lightweight nature of the model also means it is easily deployable on mobile devices, facilitating on-the-ground data collection and analysis.
				
Agriculture · Biotech · Plant Science
				
References
Main Study
1) An anchor-based YOLO fruit detector developed on YOLOv5. Published 5th September, 2025. https://doi.org/10.1371/journal.pone.0331012