TY - GEN
T1 - Unsupervised Video Segmentation Algorithms Based on Flexibly Regularized Mixture Models
AU - Launay, Claire
AU - Vacher, Jonathan
AU - Coen-Cagli, Ruben
N1 - Funding Information:
RCC is supported by NIH (EY031166 and EY030578). JV is supported by ANR (ANR-19-NEUC-0003-01).
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - We propose a family of probabilistic segmentation algorithms for videos that rely on a generative model capturing static and dynamic natural image statistics. Our framework adopts flexibly regularized mixture models (FlexMM) [1], an efficient method to combine mixture distributions across different data sources. FlexMMs of Student-t distributions successfully segment static natural images through uncertainty-based information sharing between the hidden layers of CNNs. We further extend this approach to videos and exploit FlexMM to propagate segment labels across space and time. We show that temporal propagation improves the temporal consistency of segmentation, qualitatively reproducing a key aspect of human perceptual grouping. In addition, Student-t distributions can capture the statistics of optical flows in natural movies, which represent apparent motion in the video. Integrating these motion cues into our temporal FlexMM further enhances the segmentation of each frame of natural movies. Our probabilistic dynamic segmentation algorithms thus provide a new framework to study uncertainty in human dynamic perceptual segmentation.
AB - We propose a family of probabilistic segmentation algorithms for videos that rely on a generative model capturing static and dynamic natural image statistics. Our framework adopts flexibly regularized mixture models (FlexMM) [1], an efficient method to combine mixture distributions across different data sources. FlexMMs of Student-t distributions successfully segment static natural images through uncertainty-based information sharing between the hidden layers of CNNs. We further extend this approach to videos and exploit FlexMM to propagate segment labels across space and time. We show that temporal propagation improves the temporal consistency of segmentation, qualitatively reproducing a key aspect of human perceptual grouping. In addition, Student-t distributions can capture the statistics of optical flows in natural movies, which represent apparent motion in the video. Integrating these motion cues into our temporal FlexMM further enhances the segmentation of each frame of natural movies. Our probabilistic dynamic segmentation algorithms thus provide a new framework to study uncertainty in human dynamic perceptual segmentation.
KW - Graphical Models
KW - Mixture Models
KW - Optical Flows
KW - Temporal Propagation
KW - Video Segmentation
UR - http://www.scopus.com/inward/record.url?scp=85146660306&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85146660306&partnerID=8YFLogxK
U2 - 10.1109/ICIP46576.2022.9897691
DO - 10.1109/ICIP46576.2022.9897691
M3 - Conference contribution
AN - SCOPUS:85146660306
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 4073
EP - 4077
BT - 2022 IEEE International Conference on Image Processing, ICIP 2022 - Proceedings
PB - IEEE Computer Society
T2 - 29th IEEE International Conference on Image Processing, ICIP 2022
Y2 - 16 October 2022 through 19 October 2022
ER -