Document Type: Research Paper

Authors

Computer Science Dept., University of Technology-Iraq, Alsina’a street, 10066 Baghdad, Iraq.

Abstract

Background subtraction is the most prominent technique for detecting moving objects in video. However, a wide range of background subtraction models exists, and choosing the model that best addresses the challenges of a given scene remains an active research area.
Therefore, this article presents a comparative analysis of three promising algorithms in this domain: GMM, KNN, and ViBe. CDnet 2014 is the benchmark dataset used in this analysis, together with several quantitative evaluation metrics: precision, recall, F-measure, false positive rate (FPR), false negative rate (FNR), and percentage of wrong classifications (PWC). In addition, qualitative evaluations are illustrated with snapshots of the processed scenes. The ViBe algorithm outperforms the other algorithms in the overall evaluation.
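
The paper does not state which implementations were used, but two of the three compared models, the GMM (MOG2) and KNN subtractors, ship with OpenCV, while ViBe requires a third-party implementation. As an illustrative sketch only, a comparison on a CDnet 2014 sequence could be set up as follows (the sequence path and parameter values are assumptions, not taken from the paper):

```python
import cv2

# CDnet 2014 sequences are distributed as numbered JPEG frames
# (input/in000001.jpg, ...); cv2.VideoCapture accepts a printf-style
# pattern. The path below is an illustrative placeholder.
cap = cv2.VideoCapture("dynamicBackground/fountain01/input/in%06d.jpg")

# MOG2 is Zivkovic's adaptive GMM; the KNN subtractor is his
# non-parametric counterpart. The values shown are OpenCV's defaults.
gmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=True)
knn = cv2.createBackgroundSubtractorKNN(history=500, dist2Threshold=400.0,
                                        detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # apply() updates the model and returns the foreground mask:
    # 255 = foreground, 127 = detected shadow, 0 = background.
    fg_gmm = gmm.apply(frame)
    fg_knn = knn.apply(frame)

cap.release()
```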

Highlights

  • Implementing and comparing the results of GMM, KNN, and ViBe background subtraction algorithms.
  • Applying the algorithms to dynamic background scenes from the well-known CDnet 2014 benchmark dataset.
  • A wide range of evaluation metrics is used (accuracy, precision, recall, F1, FPR, FNR, and PWC); their definitions are sketched after this list.
  • The ViBe background subtraction algorithm shows the best overall performance.
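
For reference, all of the metrics listed above follow from the per-pixel true/false positive and negative counts obtained by comparing each output mask with the CDnet ground-truth masks. A minimal sketch (the helper name cdnet_metrics is ours; the formulas are the standard CDnet 2014 definitions):

```python
def cdnet_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard CDnet 2014 measures from per-pixel counts."""
    recall    = tp / (tp + fn)                           # true positive rate
    precision = tp / (tp + fp)
    f1        = 2 * precision * recall / (precision + recall)
    fpr       = fp / (fp + tn)                           # false positive rate
    fnr       = fn / (tp + fn)                           # false negative rate
    pwc       = 100.0 * (fn + fp) / (tp + fp + tn + fn)  # % wrong classifications
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    return {"recall": recall, "precision": precision, "f1": f1,
            "fpr": fpr, "fnr": fnr, "pwc": pwc, "accuracy": accuracy}
```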
