Ever wondered how a security camera knows when something moves? This demo shows exactly that — in real time.
Your camera feed comes in on the left. The algorithm learns what "normal" looks like (the background — empty room, no movement). Then, whenever something moves — a hand, a face, anything — it lights up white in the second panel. The third panel shows just the moving object in color. And the fourth panel shows what the camera thinks the empty scene looks like (it keeps learning as lighting changes).
🔧 Try this: Move your hand slowly in front of the camera. Watch it turn white in the FG Mask panel. Then adjust the α slider — higher values make it react faster (good for changing light), lower values make it more stable. The K slider controls how many "layers" of background it can remember (like shadows, sunlight changes).
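Under the hood, the α slider is a learning rate in an exponential running average: each new frame nudges the stored background toward the current pixel value. A minimal sketch of that update (the function name `update_background` is ours, not part of the demo's code):

```python
import numpy as np

def update_background(bg, frame, alpha):
    """Exponential running average: higher alpha tracks changes faster."""
    return (1 - alpha) * bg + alpha * frame

bg = np.float64(100.0)     # learned background intensity for one pixel
frame = np.float64(200.0)  # the pixel suddenly brightens

fast = update_background(bg, frame, alpha=0.5)   # jumps most of the way toward 200
slow = update_background(bg, frame, alpha=0.05)  # barely moves -- stable but slow to adapt
```

This is why a high α copes well with changing light (the model catches up quickly) but can "absorb" a slow-moving object into the background.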
📖 The science: This is a Gaussian Mixture Model (GMM) — a machine learning algorithm that models each pixel's color as a mixture of Gaussian distributions, each representing a value that pixel commonly takes. When a pixel doesn't match any of its learned "normal" components, it's flagged as foreground. The math below explains the details, but what you're seeing is a statistical model learning your room in real time.
Each pixel is modelled as a mixture of K Gaussian distributions. Components are ordered by ω/σ (weight over standard deviation) — high weight + low variance = stable background.
The first B components — the smallest B whose cumulative weight reaches a threshold T — define the background model. Pixels that don't match any background component become foreground (white).
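The two rules above can be sketched for a single grayscale pixel. This is an illustrative toy, not the demo's actual implementation; the function name `classify_pixel`, the threshold T = 0.7, and the 2.5σ match rule are our assumptions:

```python
import numpy as np

def classify_pixel(x, weights, means, variances, T=0.7, match_sigma=2.5):
    """Per-pixel GMM background test (sketch).

    Components are sorted by omega/sigma (descending); the first B whose
    cumulative weight reaches T form the background model. The pixel is
    background if it lies within match_sigma standard deviations of any
    background component, otherwise foreground."""
    order = np.argsort(-(weights / np.sqrt(variances)))  # omega/sigma, descending
    w, mu, var = weights[order], means[order], variances[order]
    B = np.searchsorted(np.cumsum(w), T) + 1  # smallest B with cumulative weight >= T
    for k in range(B):
        if abs(x - mu[k]) <= match_sigma * np.sqrt(var[k]):
            return "background"
    return "foreground"

# K = 3 components for one pixel: a dominant stable mode plus two weaker ones
weights   = np.array([0.6, 0.3, 0.1])
means     = np.array([100.0, 180.0, 50.0])
variances = np.array([25.0, 100.0, 400.0])

classify_pixel(102.0, weights, means, variances)  # near the dominant component
classify_pixel(250.0, weights, means, variances)  # matches no background component
```

The real algorithm does this for every pixel of every frame, and also updates the weights, means, and variances online, which is how the fourth panel keeps learning the empty scene.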