- A combined algorithm based on shape and motion features of human activity.
- A single key pose is used to estimate shape from edges.
- A single global key pose is extracted from the video signal by exploiting local motion.
- The temporal motion feature is computed using the R-transform.
- Robustness of the algorithm is demonstrated on varied datasets.
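The R-transform named above is, in the standard formulation, the squared Radon transform summed over the radial coordinate, R(θ) = Σ_ρ g(ρ, θ)², normalized to make it scale-invariant. A minimal numpy-only sketch, assuming a binary silhouette image; the nearest-neighbour rotation and the 1° angle sampling are illustrative choices, not details taken from the paper:

```python
import numpy as np

def rotate_nn(img, theta_deg):
    """Nearest-neighbour rotation about the image centre (zeros outside)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(theta_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find its source location.
    xsrc = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
    ysrc = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
    xi, yi = np.rint(xsrc).astype(int), np.rint(ysrc).astype(int)
    valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out = np.zeros((h, w), dtype=float)
    out[valid] = img[yi[valid], xi[valid]]
    return out

def r_transform(silhouette, angles=np.arange(0, 180)):
    """R(theta) = sum over rho of Radon(rho, theta)^2, normalised to [0, 1]."""
    feats = []
    for theta in angles:
        # Column sums of the rotated image approximate the Radon projection.
        proj = rotate_nn(silhouette, theta).sum(axis=0)
        feats.append(np.sum(proj ** 2))
    feats = np.asarray(feats, dtype=float)
    return feats / feats.max()
```

Evaluating this per frame and stacking the normalized profiles over time gives the kind of temporal orientation feature the highlight refers to.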
The aim of this paper is to present a novel integrated framework for recognizing human actions using the spatial distribution of edge gradients (SDEG) of a human pose together with the detailed geometric orientation of the human silhouette in a video sequence. The combined descriptor yields a rich feature dictionary carrying both appearance and angular kinematics information; it captures local and global cues and provides a discriminative representation for action recognition. The SDEG is computed on a still image at several resolution levels of sub-images, and the still images of human poses are extracted from the input video sequence using a fuzzy trapezoidal membership function based on the normalized histogram distance between contiguous segment frames. The change of geometric orientation of the human silhouette over time is computed using the normalized R-transform. To validate the performance of the proposed approach, extensive experiments are conducted on five publicly available human action datasets, i.e. Weizmann, KTH, Ballet Movements, Multi-view i3dPost, and IXMAS. The recognition accuracy achieved on these datasets demonstrates that the proposed approach has strong discriminative power for recognizing a wide variety of actions. Moreover, it yields superior results when compared with similar state-of-the-art methods.
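The key-pose selection step described above can be sketched as follows: score the histogram change between each pair of contiguous frames with a trapezoidal membership function and keep the frame with the highest score. This is a minimal illustration only; the membership breakpoints `a, b, c, d`, the bin count, and the L1 histogram distance are hypothetical parameter choices, not values from the paper:

```python
import numpy as np

def norm_hist_distance(f1, f2, bins=64):
    """L1 distance between normalised grey-level histograms of two frames."""
    h1, _ = np.histogram(f1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(f2, bins=bins, range=(0, 256))
    h1 = h1 / max(h1.sum(), 1)
    h2 = h2 / max(h2.sum(), 1)
    return 0.5 * np.abs(h1 - h2).sum()

def trapezoidal_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def key_pose_index(frames, a=0.05, b=0.15, c=0.35, d=0.6):
    """Pick the frame whose change w.r.t. its predecessor scores highest.

    The breakpoints (a, b, c, d) are illustrative defaults.
    """
    dists = [norm_hist_distance(frames[i - 1], frames[i])
             for i in range(1, len(frames))]
    scores = [trapezoidal_membership(dv, a, b, c, d) for dv in dists]
    return int(np.argmax(scores)) + 1
```

The trapezoid rewards frame transitions whose histogram change is moderate (within `[b, c]`), while ignoring near-static pairs and penalizing abrupt changes such as shot cuts.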