
Reservoir porosity prediction using the VAE-BiGRU-Attention model: an example study of middle-low permeability sandstone reservoir
BinXin ZENG, Hui XIAO, ZiMei HAO, HuanHuan LIU
Prog Geophy, 2025, 40(2): 658-669.
Porosity is an indispensable physical parameter in reservoir evaluation, and a complex, latent relationship exists between well logging curves and porosity. In previous studies, incomplete feature extraction from well logging curves and overly simple model architectures have limited the accuracy of porosity prediction. To improve prediction accuracy, this study combines a Variational Auto-Encoder (VAE), a Bidirectional Gated Recurrent Unit (BiGRU), and an Attention mechanism to construct the VAE-BiGRU-Attention model. The VAE effectively learns a latent representation of the data, enhancing its representational capability; the BiGRU excels at capturing sequential information, making it particularly well suited to the way porosity varies with depth; and the Attention mechanism dynamically computes attention weights for each time step, allowing the model to focus more accurately on key features and achieve better predictions. To verify the model's effectiveness, it is compared against Deep Neural Network (DNN), Recurrent Neural Network (RNN), and BiGRU-Attention models in comparative experiments. The results show that the VAE-BiGRU-Attention model achieves a Mean Squared Error (MSE) of 0.995, a Mean Absolute Error (MAE) of 0.698, and a Root Mean Square Error (RMSE) of 0.998. It improves significantly on the other models, effectively enhancing the accuracy of porosity prediction and providing a more reliable method for reservoir porosity prediction.
Porosity / Reservoir evaluation / Logging curves / Variational autoencoder / Bidirectional gated recurrent unit / VAE-BiGRU-Attention model
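For concreteness, the following is a minimal PyTorch sketch of how the three components described in the abstract could be wired together. The paper does not publish its implementation, so the layer widths, latent dimension, choice of input curves, and the exact coupling between the VAE encoder and the BiGRU are all illustrative assumptions, not the authors' code.

    # Minimal sketch of a VAE-BiGRU-Attention model for porosity regression.
    # All architectural details (sizes, wiring) are assumptions for illustration.
    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        """Learns a latent representation of per-depth log-curve features."""
        def __init__(self, n_features: int, latent_dim: int = 8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
            self.fc_mu = nn.Linear(32, latent_dim)       # mean of q(z|x)
            self.fc_logvar = nn.Linear(32, latent_dim)   # log-variance of q(z|x)
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features)
            )

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            # Reparameterization trick: z = mu + sigma * eps
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return z, self.decoder(z), mu, logvar

    class VAEBiGRUAttention(nn.Module):
        def __init__(self, n_features: int, latent_dim: int = 8, hidden: int = 32):
            super().__init__()
            self.vae = VAE(n_features, latent_dim)
            self.bigru = nn.GRU(latent_dim, hidden,
                                batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)  # scores each depth step
            self.head = nn.Linear(2 * hidden, 1)  # porosity regression head

        def forward(self, x):
            # x: (batch, depth_steps, n_features) window of logging curves
            b, t, f = x.shape
            z, recon, mu, logvar = self.vae(x.reshape(b * t, f))
            h, _ = self.bigru(z.reshape(b, t, -1))       # (b, t, 2*hidden)
            w = torch.softmax(self.attn(h), dim=1)       # attention weight per step
            context = (w * h).sum(dim=1)                 # attention-weighted summary
            return self.head(context).squeeze(-1), recon, mu, logvar

    if __name__ == "__main__":
        model = VAEBiGRUAttention(n_features=5)  # e.g. 5 hypothetical log curves
        x = torch.randn(16, 20, 5)               # 16 windows of 20 depth samples
        porosity, recon, mu, logvar = model(x)
        print(porosity.shape)                    # torch.Size([16])

In a setup like this, training would typically minimize a weighted sum of the porosity regression loss (e.g., MSE) and the VAE's reconstruction and KL-divergence terms; the relative weighting is a further design choice the paper would specify.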
We thank the reviewers for their revision suggestions and the editorial office for their strong support!