Analyzing motion patterns in traffic videos can directly yield high-level descriptions of video content. In this paper, an unsupervised method is proposed to automatically discover the motion patterns occurring in traffic video scenes. For this purpose, an improved Group Sparse Topical Coding framework is applied to optical flow features extracted from video clips in order to learn semantic motion patterns. Each video clip can then be sparsely represented as a weighted sum of the learned patterns, a representation that can be employed in a wide range of applications. Experimental results show that the proposed approach accurately discovers the motion patterns and yields a meaningful representation of the video.
Keywords: motion patterns, Group Sparse Topical Coding, traffic scene
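To make the sparse representation concrete, the following minimal sketch encodes a "clip" feature vector as a weighted sum of a few dictionary atoms. It is not the paper's Group Sparse Topical Coding algorithm: it uses plain orthogonal matching pursuit over a toy orthonormal dictionary, and all names, sizes, and numbers are illustrative assumptions.

```python
import numpy as np

def omp(patterns, clip, n_nonzero=2):
    """Greedy orthogonal matching pursuit: find sparse weights w
    such that clip ~= w @ patterns, where patterns has shape (K, D)."""
    residual = clip.copy()
    support = []
    w = np.zeros(patterns.shape[0])
    for _ in range(n_nonzero):
        # Pick the pattern most correlated with the current residual.
        idx = int(np.argmax(np.abs(patterns @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the weights on the selected patterns by least squares.
        sol, *_ = np.linalg.lstsq(patterns[support].T, clip, rcond=None)
        w[:] = 0.0
        w[support] = sol
        residual = clip - w @ patterns
    return w

# Toy "learned" dictionary: K motion patterns over D optical-flow
# feature bins (made orthonormal only so the demo recovers exactly).
rng = np.random.default_rng(0)
K, D = 5, 16
patterns = np.linalg.qr(rng.standard_normal((D, K)))[0].T

# A synthetic clip: a mixture of patterns 1 and 3.
clip = 0.7 * patterns[1] + 0.3 * patterns[3]

weights = omp(patterns, clip)
print(np.round(weights, 3))  # nonzero only at indices 1 and 3
```

The recovered weight vector is sparse: only the two patterns actually present in the clip receive nonzero weight, which is the kind of clip-level representation the abstract describes.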