Abstract:
Objective Existing automatic hedge-trimming methods rely fundamentally on manually predefined target contours and cannot make autonomous decisions that adapt to actual growth conditions. To address this limitation, this study proposes an intelligent trimming-point generation framework that autonomously completes the "perception-understanding-decision-making" process, thereby advancing hedge trimming from programmed automation toward intelligent autonomy.
Method A three-level technical framework of "intelligent segmentation-geometric fitting-task generation" was proposed. First, high-fidelity hedge point clouds were reconstructed from surround-view videos using neural radiance fields (NeRF). Second, PointNet++ was extended into Hedge-PointNet, which introduces point-wise Softmax probability modeling and a multi-round inference fusion strategy to achieve accurate semantic segmentation of hedge and background points. Third, robust geometric fitting methods, including random sample consensus (RANSAC), were adaptively selected according to semantic category to reconstruct the target trimming surface. Finally, the actual trimming points were extracted through surface meshing and point-cloud projection.
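The RANSAC-based sphere fitting named above can be illustrated with a minimal sketch. The paper's actual implementation details (sample size, residual threshold, iteration count, refinement step) are not given in the abstract, so the values and the algebraic least-squares parameterization below are illustrative assumptions:

```python
import numpy as np

def fit_sphere(P):
    """Algebraic sphere fit: |p|^2 = 2 p.c + d with d = r^2 - |c|^2.

    Linear in (c, d), so it works for 4 exact points or as a
    least-squares fit over many points.
    """
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = x[:3], x[3]
    r2 = d + c @ c
    return (c, float(np.sqrt(r2))) if r2 > 0 else None

def ransac_sphere(P, iters=300, tol=0.01, seed=None):
    """Fit a sphere robustly: sample minimal 4-point subsets, score by
    inlier count, then refine on the inliers of the best model."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        model = fit_sphere(P[rng.choice(len(P), 4, replace=False)])
        if model is None:
            continue
        c, r = model
        n_in = int((np.abs(np.linalg.norm(P - c, axis=1) - r) < tol).sum())
        if n_in > best_inliers:
            best_inliers, best = n_in, (c, r)
    c, r = best
    mask = np.abs(np.linalg.norm(P - c, axis=1) - r) < tol
    return fit_sphere(P[mask])  # refined estimate on inliers only
```

Because each candidate model is scored only by its inlier count, gross outliers (e.g. background points mislabeled as hedge) cannot pull the fit the way they would in a plain least-squares solve, which is consistent with the robustness gap reported in the results.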
Result Experiments in complex real-world scenes showed that: (1) Hedge-PointNet achieved a mean intersection over union (mIoU) of 0.971 in semantic segmentation, significantly outperforming the baseline model; (2) in the spherical fitting task, the root mean square error of the RANSAC algorithm was 0.0051 m, 96.7% lower than that of the conventional least-squares method, demonstrating strong robustness to outliers; and (3) the generated trimming-point set needed to cover only 36.1% of the candidate points on the target surface to achieve the same trimming effect, reducing ineffective trimming actions by 63.9% and striking a favorable balance between accuracy and efficiency.
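For reference, the mIoU metric reported above is the per-class intersection over union averaged across the semantic classes (here, hedge and background). A minimal sketch of its computation, assuming integer class labels:

```python
import numpy as np

def mean_iou(pred, gt, num_classes=2):
    """Mean IoU over classes: IoU_c = TP / (TP + FP + FN)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both pred and gt
            ious.append(inter / union)
    return float(np.mean(ious))
```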
Conclusion This study realized autonomous determination of hedge trimming contours and intelligent generation of operation instructions. The proposed framework can also serve as the perception and decision-making module of automated trimming equipment, supplying target parameters and operation-point inputs for downstream path planning and mechanical execution. It shows good generality and provides a reference for target recognition and operation decision-making in garden maintenance, crop training and pruning, and related robotic 3D operation tasks.