Human action recognition based on fusion features extraction of adaptive background subtraction and optical flow model (Q1665565)
scientific article; zbMATH DE number 6926244
| Language | Label | Description | Also known as |
|---|---|---|---|
| default for all languages | No label defined | | |
| English | Human action recognition based on fusion features extraction of adaptive background subtraction and optical flow model | scientific article; zbMATH DE number 6926244 | |
Statements
Human action recognition based on fusion features extraction of adaptive background subtraction and optical flow model (English)
27 August 2018
Summary: A novel method based on a hybrid feature is proposed for human action recognition in video image sequences, consisting of two stages: feature extraction and action recognition. First, we use an adaptive background subtraction algorithm to extract a global silhouette feature and an optical flow model to extract a local optical flow feature, and then concatenate the global silhouette feature vector and the local optical flow feature vector into a hybrid feature vector. Second, to improve recognition accuracy, we use an optimized Multiple Instance Learning algorithm to recognize human actions, in which an Iterative Querying Heuristic (IQH) optimization algorithm is used to train the Multiple Instance Learning model. We demonstrate that our hybrid-feature-based action representation can effectively classify novel actions on two different data sets. Experiments show that our results are comparable to, and in some cases significantly better than, those of two state-of-the-art approaches on these data sets, meeting the requirements of stability, reliability, high precision, and robustness to interference.
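The fusion step described in the summary (a global silhouette descriptor from background subtraction concatenated with a local optical-flow descriptor) can be sketched as below. This is a minimal illustration, not the authors' implementation: OpenCV's MOG2 subtractor and Farnebäck optical flow stand in for the paper's adaptive background subtraction and optical flow model, and the projection-profile and flow-histogram descriptors, the bin count, and the helper name `hybrid_features` are illustrative assumptions.

```python
# Hypothetical sketch of the hybrid (silhouette + optical flow) feature extraction.
# MOG2 and Farneback flow are stand-ins for the paper's adaptive background
# subtraction and optical flow model; descriptor choices are illustrative only.
import cv2
import numpy as np

def hybrid_features(frames, bins=64):
    """Return one fused feature vector per consecutive frame pair."""
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    features = []
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Global silhouette feature: foreground mask from background subtraction,
        # summarized as normalized row/column projection profiles.
        mask = bg.apply(frame)
        rows = mask.sum(axis=1).astype(np.float32)
        cols = mask.sum(axis=0).astype(np.float32)
        silhouette = np.concatenate([
            np.interp(np.linspace(0, len(rows) - 1, bins), np.arange(len(rows)), rows),
            np.interp(np.linspace(0, len(cols) - 1, bins), np.arange(len(cols)), cols),
        ])
        silhouette /= silhouette.sum() + 1e-8

        # Local optical flow feature: histogram of flow orientations weighted by magnitude.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        hof, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
        hof = hof.astype(np.float32)
        hof /= hof.sum() + 1e-8

        # Fusion: concatenate the global and local descriptors into one hybrid vector.
        features.append(np.concatenate([silhouette, hof]))
        prev_gray = gray
    return np.asarray(features)
```

In the approach described by the abstract, such per-frame hybrid vectors would then be grouped into bags (e.g. one bag per clip) and classified with the optimized Multiple Instance Learning model trained via the Iterative Querying Heuristic.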