Spatial Temporal Graph Deconvolutional Network for Skeleton-Based Human Action Recognition

Benefiting from the powerful representation ability of spatial-temporal Graph Convolutional Networks (ST-GCNs), skeleton-based human action recognition has achieved promising success. However, node interaction through message propagation does not always provide complementary information; it may instead introduce destructive noise that makes the learned representations indistinguishable. Inevitably, the graph representation also becomes over-smoothed, especially when multiple GCN layers are stacked. This paper proposes spatial-temporal graph deconvolutional networks (ST-GDNs), a novel and flexible graph deconvolution technique, to alleviate this issue. At its core, the method provides better message aggregation by removing the embedding redundancy of the input graphs at the node-wise, frame-wise or element-wise level at different network layers. Extensive experiments on three of the most challenging current benchmarks verify that ST-GDN consistently improves performance and substantially reduces model size on these datasets.
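The over-smoothing problem the abstract refers to can be illustrated with a generic toy example (not code from the paper): repeatedly propagating node features through a normalized adjacency matrix, as stacked GCN layers do, drives all node embeddings toward a common value, making them indistinguishable. The graph, features and layer count below are arbitrary choices for illustration.

```python
import numpy as np

# Small undirected 4-node path graph with self-loops added,
# random-walk normalized: P = D^{-1} (A + I).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                       # add self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-normalize

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))                 # random node features

def node_spread(X):
    """Mean distance of node embeddings from their centroid."""
    return np.linalg.norm(X - X.mean(axis=0), axis=1).mean()

before = node_spread(X)
for _ in range(30):                         # stack 30 propagation steps
    X = P @ X                               # plain message aggregation
after = node_spread(X)

# The spread collapses: node embeddings become nearly identical,
# which is the indistinguishability the abstract describes.
print(f"spread before: {before:.4f}, after 30 layers: {after:.6f}")
```

ST-GDN's graph deconvolution is motivated as a countermeasure to exactly this collapse, by removing redundancy from the aggregated embeddings rather than propagating it further.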

Peng Wei, Shi Jingang, Zhao Guoying

A1 Journal article – refereed

W. Peng, J. Shi and G. Zhao, "Spatial Temporal Graph Deconvolutional Network for Skeleton-Based Human Action Recognition," in IEEE Signal Processing Letters, vol. 28, pp. 244-248, 2021, doi: 10.1109/LSP.2021.3049691

https://doi.org/10.1109/LSP.2021.3049691
http://urn.fi/urn:nbn:fi-fe2021042611784