Turkish Journal of Electrical Engineering and Computer Sciences
Spatiotemporal features have significant importance in human action recognition, as they provide the actor's shape and motion characteristics specific to each action class. This paper presents a new deep spatiotemporal human action representation, the deep temporal motion descriptor (DTMD), which shares the attributes of holistic and deep learned features. To generate the DTMD descriptor, the actor's silhouettes are gathered into single motion templates by applying motion history images. These motion templates capture the spatiotemporal movements of the actor and compactly represent the human actions using a single 2D template. Then deep convolutional neural networks are used to compute discriminative deep features from motion history templates to produce the DTMD. Later, DTMD is used for learning a model to recognize human actions using a softmax classifier. The advantages of DTMD are that it is automatically learned from videos and contains higher-dimensional discriminative spatiotemporal representations than handcrafted features; it reduces the computational complexity of human activity recognition, as all the video frames are compactly represented as a single motion template; and it works effectively for both single-view and multiview action recognition. We conducted experiments on three challenging datasets: MuHAVi-Uncut, IXMAS, and IAVID-1. The experimental findings reveal that DTMD outperforms previous methods and achieves the highest action prediction rate on the MuHAVi-Uncut dataset.
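The first stage described above, collapsing a silhouette sequence into a single motion history template, can be sketched as follows. This is a minimal illustration of the standard motion history image (MHI) formulation, assuming binary silhouette masks have already been extracted; the function name, decay scheme, and normalization are illustrative choices, not necessarily the authors' exact implementation.

```python
import numpy as np

def motion_history_image(silhouettes, tau=None):
    """Collapse a sequence of binary silhouette frames into one 2D
    motion history image: recently moving pixels are brightest, and
    older motion fades linearly toward zero."""
    if tau is None:
        tau = len(silhouettes)  # decay horizon = clip length (an assumption)
    mhi = np.zeros(silhouettes[0].shape, dtype=np.float32)
    for frame in silhouettes:
        moving = frame > 0
        mhi[moving] = tau                                  # stamp current motion
        mhi[~moving] = np.maximum(mhi[~moving] - 1.0, 0.0)  # decay older motion
    return mhi / tau  # normalize to [0, 1] before feeding the CNN
```

The resulting 2D template encodes where and how recently the actor moved, so a single image summarizes the whole clip; in the paper's pipeline this template is then passed to a deep convolutional network to extract the DTMD features.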
Human activity recognition, deep convolutional neural network, motion history images, deep temporal motion descriptor, computer vision
Nida, Nudrat; Yousaf, Muhammad Haroon; Irtaza, Aun; and Velastin, Sergio A.
"Deep temporal motion descriptor (DTMD) for human action recognition,"
Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 28:
3, Article 13.
Available at: https://journals.tubitak.gov.tr/elektrik/vol28/iss3/13