Abstract: Tea grade is an important technical index of tea quality. Using hyperspectral imaging of black tea to extract both spectral and image features, a fast, non-destructive discrimination model based on spectral-image feature fusion was constructed for the grade evaluation of Yingde black tea. First, black tea samples of three grades were prepared, and the spectral data were visualized after dimensionality reduction with t-distributed stochastic neighbor embedding (t-SNE) and principal component analysis (PCA). Characteristic wavelengths associated with each chemical quality index were then extracted with the successive projections algorithm (SPA), and the best combination of characteristic wavelengths characterizing intrinsic quality was screened with a multi-model consensus strategy and competitive adaptive reweighted sampling coupled with SPA (CARS-SPA). A grade discrimination model based on a support vector machine optimized by a genetic algorithm (GA-SVM) was established on these wavelengths, with accuracies of 88% on the training set and 78.33% on the prediction set. To incorporate differences in shape and texture, the hyperspectral images corresponding to the best combination of characteristic wavelengths were first extracted; an image mask was applied to remove background interference, and PCA was used to remove redundant information among the multi-wavelength images. Texture features were then extracted from the first three principal-component images with the grey-level co-occurrence matrix (GLCM) and local binary pattern (LBP) algorithms and fused with the characteristic spectra. A GA-SVM grade discrimination model based on this feature fusion was established; the best discrimination was obtained with the fusion model built on the third principal-component image, whose accuracy improved to 98% on the training set and 96.67% on the prediction set.
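
To make the wavelength-selection step concrete, the following is a minimal sketch of a successive projections algorithm over hypothetical reflectance data; the array shapes, starting band, and number of selected wavelengths are illustrative assumptions, not values from the paper.

```python
# Minimal SPA sketch on hypothetical data: at each step, project the
# remaining bands onto the orthogonal complement of the last selected
# band and pick the band with the largest residual norm (least collinear).
import numpy as np

def spa(X, n_select, start=0):
    """Successive projections algorithm over the columns (bands) of X."""
    selected = [start]
    P = X.astype(float).copy()
    for _ in range(n_select - 1):
        v = P[:, selected[-1]].copy()
        # Remove the component along v from every remaining column.
        P -= np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0          # never reselect a chosen band
        selected.append(int(np.argmax(norms)))
    return sorted(selected)

rng = np.random.default_rng(0)
spectra = rng.random((150, 200))        # hypothetical: 150 samples x 200 bands
print(spa(spectra, n_select=12))        # indices of 12 characteristic bands
```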
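The GA-SVM modelling step could look like the sketch below, assuming scikit-learn; the synthetic data, population size, generation count, and the search ranges for C and gamma are placeholders rather than the paper's actual settings.

```python
# Minimal GA-SVM sketch on hypothetical data: a small genetic algorithm
# searches the RBF-SVM hyperparameters (C, gamma) by cross-validated
# accuracy on the selected characteristic wavelengths.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for reflectance at the selected wavelengths:
# 150 samples x 12 characteristic bands, three tea grades.
X = rng.normal(size=(150, 12))
y = rng.integers(0, 3, size=150)

def fitness(individual):
    """Cross-validated accuracy for one (log2 C, log2 gamma) individual."""
    C, gamma = 2.0 ** individual
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

pop = rng.uniform(low=[-5, -10], high=[10, 2], size=(20, 2))  # initial population
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-10:]]                     # selection: keep best half
    parents = elite[rng.integers(0, 10, size=(10, 2))]        # pick random parent pairs
    children = parents.mean(axis=1)                           # crossover: blend parents
    children += rng.normal(scale=0.3, size=children.shape)    # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best log2(C), log2(gamma):", best)
```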
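The image-side pipeline (masking, PCA on the multi-wavelength stack, GLCM and LBP texture extraction, and fusion with the characteristic spectra) could be sketched as follows, assuming scikit-image and scikit-learn; the cube dimensions, mask threshold, GLCM/LBP parameters, and the simple concatenation used for fusion are illustrative assumptions.

```python
# Sketch on a hypothetical sample cube: mask the background, form a
# principal-component image of the multi-wavelength stack, extract GLCM
# and LBP texture descriptors from it, and concatenate them with the
# characteristic-wavelength spectrum to build the fusion feature vector.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical stand-in for one sample: a 64x64 region at 12 selected bands.
cube = rng.random((64, 64, 12))
mask = cube.mean(axis=2) > 0.2                 # crude background mask (assumed threshold)
pixels = cube[mask]                            # (n_pixels, 12) foreground spectra

# Principal-component image, e.g. the third PC as in the best model above.
pc = PCA(n_components=3).fit_transform(pixels)[:, 2]
pc_img = np.zeros(mask.shape)
pc_img[mask] = pc
gray = (255 * (pc_img - pc_img.min()) / (np.ptp(pc_img) + 1e-9)).astype(np.uint8)

# GLCM statistics (contrast, correlation, energy, homogeneity).
glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                        for p in ("contrast", "correlation", "energy", "homogeneity")])

# Rotation-invariant uniform LBP histogram.
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp_feats, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

spectrum = pixels.mean(axis=0)                 # mean spectrum at the selected bands
fused = np.hstack([spectrum, glcm_feats, lbp_feats])   # spectral-image fusion vector
print(fused.shape)
```

A vector like `fused` would then be the per-sample input to the fusion GA-SVM classifier in place of the spectrum alone.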