Fig. 1

Workflow of SVEA and the architecture of the Enhanced AlexNet model. a Workflow of SVEA. b Detailed parameters of the Enhanced AlexNet model. c Principle of the multi-head self-attention mechanism. d Operating principle of the multi-scale convolution module. e Comparison between the global average pooling layer and the fully connected layer. MHSA, multi-head self-attention; MSC, multi-scale convolution; GAP, global average pooling; FC, fully connected layer
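The comparison in panel e can be made concrete by counting parameters: a GAP layer averages each channel map to a single value and is parameter-free, whereas an AlexNet-style FC head must learn a weight for every flattened activation. The sketch below assumes an illustrative 256-channel, 6×6 final feature map and 10 output classes; these shapes are not taken from the paper.

```python
import numpy as np

# Assumed final conv feature map: 256 channels of 6x6 (illustrative only).
C, H, W = 256, 6, 6
num_classes = 10  # illustrative class count, not from the paper

# FC head: flatten the feature map, then project to class scores.
fc_in = C * H * W                               # 9216 inputs after flattening
fc_params = fc_in * num_classes + num_classes   # weights + biases

# GAP head: average each channel (no parameters), then a small classifier.
gap_out = C                                     # one value per channel
gap_fc_params = gap_out * num_classes + num_classes

print(fc_params)      # parameter count of the FC head
print(gap_fc_params)  # parameter count after GAP

# Forward pass of GAP on a dummy feature map.
feat = np.random.rand(C, H, W)
pooled = feat.mean(axis=(1, 2))                 # shape (C,)
assert pooled.shape == (C,)
```

Under these assumed shapes the FC head carries 92,170 parameters against 2,570 after GAP, which is the kind of reduction panel e is contrasting.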