
Research Article
Fusion of Multiscale Convolution and LSTM for Stock Price Prediction
@INPROCEEDINGS{10.1007/978-3-031-80713-8_17,
  author={Hui Sheng and Jiyong Hu and Menglin Wang and Min Liu and Haoming Zhang and Longjun Huang},
  title={Fusion of Multiscale Convolution and LSTM for Stock Price Prediction},
  proceedings={Data Information in Online Environments. 4th EAI International Conference, DIONE 2023, Nanchang, China, November 25--27, 2023, Proceedings},
  proceedings_a={DIONE},
  year={2025},
  month={2},
  keywords={Multi-scale convolution; Attention mechanism; LSTM; Stock price forecasting},
  doi={10.1007/978-3-031-80713-8_17}
}
Hui Sheng
Jiyong Hu
Menglin Wang
Min Liu
Haoming Zhang
Longjun Huang
Year: 2025
Fusion of Multiscale Convolution and LSTM for Stock Price Prediction
DIONE
Springer
DOI: 10.1007/978-3-031-80713-8_17
Abstract
The stock market has long been a topic of great interest in the financial sector, and predicting stock prices remains a challenging task. Traditional approaches typically assume stationary data, which fails to account for the time-varying, dynamic, and extremely noisy nature of time-series data, making it difficult to capture the intricate correlations within it. Data obtained from the stock market is fundamentally a multi-scale, non-stationary, nonlinear time series. To address these issues, a stock prediction model (MCALSTMNet) that combines multi-scale convolutional attention (MCA) with a Long Short-Term Memory network (LSTM) is proposed, which better captures the long-term dependencies and complex feature relationships of stock data. The model first divides the many features of the stock time series into two sequences: the daily closing price serves as the target sequence, and the remaining features form the exogenous sequence. The encoder then applies temporal convolutions at multiple scales to extract features from the exogenous sequence at different time scales. An attention mechanism (AM) weights and fuses the decoder's hidden state with the multi-scale features to produce a context vector at each moment; this vector is concatenated with the value of the target sequence at that moment to form the decoder's input, from which the final prediction is obtained. The MCALSTMNet model is evaluated on three datasets: SSE50, the SZSE Component Index, and CSI300. The experimental results demonstrate that MCALSTMNet outperforms other benchmark approaches in both prediction accuracy and generalization.
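
To make the architecture described in the abstract concrete, the following is a minimal PyTorch sketch of the encoder-decoder flow: parallel temporal convolutions with different kernel sizes extract multi-scale features from the exogenous sequence, an attention mechanism fuses those features with the decoder's hidden state into a context vector at each step, and an LSTM decoder consumes the context together with the target-sequence value. All layer sizes, kernel scales, and names here (MCALSTMNet's constructor arguments, hidden, kernel_sizes) are illustrative assumptions based only on the abstract, not the authors' published configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MCALSTMNet(nn.Module):
    """Sketch of a multi-scale convolution + attention + LSTM forecaster.

    Hyperparameters and layer choices are assumptions for illustration,
    not the exact configuration from the paper.
    """

    def __init__(self, n_exog, hidden=64, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Multi-scale temporal convolutions over the exogenous series:
        # each branch uses a different (odd) kernel size, i.e. time scale.
        self.branches = nn.ModuleList(
            [nn.Conv1d(n_exog, hidden, k, padding=k // 2) for k in kernel_sizes]
        )
        # Attention scores compare the decoder hidden state with each
        # per-scale feature vector.
        self.attn = nn.Linear(2 * hidden, 1)
        # Decoder LSTM consumes the target value plus the attention context.
        self.decoder = nn.LSTMCell(1 + hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, exog, target):
        # exog:   (batch, T, n_exog)  exogenous features
        # target: (batch, T)          past closing prices (target sequence)
        B, T, _ = exog.shape
        x = exog.transpose(1, 2)                        # (B, n_exog, T)
        # Stack per-scale feature maps: (B, n_scales, T, hidden)
        feats = torch.stack(
            [branch(x).transpose(1, 2) for branch in self.branches], dim=1
        )

        h = exog.new_zeros(B, self.decoder.hidden_size)
        c = torch.zeros_like(h)
        for t in range(T):
            f_t = feats[:, :, t, :]                     # (B, n_scales, hidden)
            q = h.unsqueeze(1).expand_as(f_t)           # broadcast hidden state
            score = self.attn(torch.cat([q, f_t], -1))  # (B, n_scales, 1)
            w = F.softmax(score, dim=1)                 # weights over scales
            context = (w * f_t).sum(dim=1)              # (B, hidden)
            # Concatenate target value at step t with the context vector.
            inp = torch.cat([target[:, t : t + 1], context], dim=-1)
            h, c = self.decoder(inp, (h, c))
        return self.out(h).squeeze(-1)                  # next-step price

# Hypothetical usage with random data (shapes only, not real stock data):
# model = MCALSTMNet(n_exog=8)
# exog = torch.randn(32, 30, 8)   # 32 samples, 30-day window, 8 features
# price = torch.randn(32, 30)     # normalized past closing prices
# pred = model(exog, price)       # (32,) next-day price estimates

At each step the attention weights sum to one across the convolution scales, so the context vector is a convex combination of the scale-specific features; training would minimize, for example, the mean squared error between the model's output and the next day's closing price.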