JOURNAL ARTICLE

Hybrid Background Subtraction in video using Bi-level CodeBook model

Abstract

Object detection in video is a highly demanding area of research, and background subtraction algorithms can yield good results in foreground object detection. This work presents a hybrid codebook-based background subtraction method to extract the foreground region of interest (ROI) from the background. Codebooks store compressed information, demanding less memory and enabling high-speed processing. The hybrid method combines block-based and pixel-based codebooks to produce efficient detection results: the high processing speed of block-based background subtraction and the high precision rate of pixel-based background subtraction are both exploited to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and with different block descriptors such as the 2D-DCT and FFT. Experimental analysis based on statistical measurements yields precision, recall, similarity, and F-measure of 88.74%, 91.09%, 81.66%, and 89.90% respectively for the hybrid system, demonstrating the efficiency of the proposed approach.
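The abstract describes a pixel-based codebook stage that models each pixel's background appearance as a set of codewords and flags unmatched pixels as foreground. The sketch below is a minimal, simplified illustration of that idea for grayscale frames, not the authors' implementation: the codeword structure (running mean with min/max brightness bounds) and the matching threshold `eps` are assumptions chosen for clarity.

```python
import numpy as np

def train_codebooks(frames, eps=10.0):
    """Build a per-pixel codebook from grayscale training frames.
    Each codeword stores a running mean and min/max brightness bounds
    (a simplified stand-in for the full codebook model)."""
    h, w = frames[0].shape
    books = [[[] for _ in range(w)] for _ in range(h)]
    for f in frames:
        for y in range(h):
            for x in range(w):
                v = float(f[y, x])
                cb = books[y][x]
                for cw in cb:
                    if abs(v - cw["mean"]) <= eps:
                        # Match: update the codeword's statistics.
                        cw["mean"] = (cw["mean"] * cw["n"] + v) / (cw["n"] + 1)
                        cw["lo"] = min(cw["lo"], v)
                        cw["hi"] = max(cw["hi"], v)
                        cw["n"] += 1
                        break
                else:
                    # No match: create a new codeword for this pixel.
                    cb.append({"mean": v, "lo": v, "hi": v, "n": 1})
    return books

def subtract(frame, books, eps=10.0):
    """Return a boolean foreground mask: True where no codeword matches."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            v = float(frame[y, x])
            if not any(cw["lo"] - eps <= v <= cw["hi"] + eps
                       for cw in books[y][x]):
                mask[y, x] = True
    return mask
```

In the hybrid scheme described above, a faster block-level pass (comparing block descriptors such as 2D-DCT coefficients) would first mark coarse candidate regions, and only pixels inside those regions would be tested against their codebooks, trading a small precision cost in the block stage for a large reduction in per-pixel work.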

Keywords:
Background subtraction; Codebook; Object detection; Computer vision; Pattern recognition; Precision and recall

Metrics

Cited by: 6
FWCI (Field-Weighted Citation Impact): 0.72
References: 16
Citation Normalized Percentile: 0.78


Topics

Video Surveillance and Tracking Methods (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Image and Video Retrieval Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
IoT-based Smart Home Systems (Physical Sciences → Engineering → Electrical and Electronic Engineering)