Using Tiny-YOLO (one-class) to detect each person in the frame, AlphaPose to extract the skeleton pose, and an ST-GCN model to predict an action from every 30 frames of each person's track. Seven actions are currently supported: Standing, Walking, Sitting, Lying Down, Stand up, Sit down, Fall Down.
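A minimal sketch of that detect-pose-classify pipeline, for orientation only. The class names (detector, pose_estimator, tracker, stgcn) and their methods are hypothetical placeholders, not the repository's actual API; the 30-frame window and the seven action labels come from the description above.

```python
from collections import defaultdict, deque

WINDOW = 30  # the ST-GCN classifies a sliding 30-frame window per track
ACTIONS = ["Standing", "Walking", "Sitting", "Lying Down",
           "Stand up", "Sit down", "Fall Down"]

def run_pipeline(frames, detector, pose_estimator, tracker, stgcn):
    """Yield (track_id, action) whenever a track has 30 buffered skeletons."""
    buffers = defaultdict(lambda: deque(maxlen=WINDOW))
    for frame in frames:
        boxes = detector.detect(frame)          # Tiny-YOLO one-class person boxes
        tracks = tracker.update(boxes)          # associate boxes with track IDs
        for track_id, box in tracks:
            skeleton = pose_estimator.estimate(frame, box)  # AlphaPose keypoints
            buffers[track_id].append(skeleton)
            if len(buffers[track_id]) == WINDOW:
                # assumed: predict() returns per-class scores (numpy-like array)
                scores = stgcn.predict(list(buffers[track_id]))
                yield track_id, ACTIONS[int(scores.argmax())]
```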
Third, the ST-GCN is used for abnormal-behavior recognition. The image quality of the dataset before and after CycleGAN enhancement is compared, the convergence curves of LTWOA under four test functions are compared, and the mean average precision (mAP) of the LTWOA-Tiny-YOLOv3 model is ...
Experimental results show that, under the two evaluation protocols of the public NTU-RGB+D dataset, the best model achieves joint-stream Top-1 accuracies of 88.60% and 95.11%, bone-stream Top-1 accuracies of 90.58% and 96.12%, and fused Top-1 accuracies of 91.66% and 97.12%, all clear improvements over the baseline network (ST-GCN); on a self-built rehabilitation dataset the recognition rate exceeds 97% throughout. The fused algorithm, for the different ... in rehabilitation scenarios ...
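The "fused" accuracies above suggest two-stream late fusion, where joint-stream and bone-stream class scores are combined before taking the argmax. A minimal sketch, assuming a simple weighted softmax average; the snippet does not state the paper's actual fusion rule or weights, so alpha = 0.5 is purely illustrative.

```python
import numpy as np

def fuse_two_stream(joint_logits: np.ndarray, bone_logits: np.ndarray,
                    alpha: float = 0.5) -> np.ndarray:
    """Combine joint- and bone-stream logits into fused class probabilities.

    alpha weights the joint stream; 0.5 is an assumed equal weighting.
    """
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    return alpha * softmax(joint_logits) + (1 - alpha) * softmax(bone_logits)

# Fused Top-1 prediction = argmax over the combined scores:
# pred = fuse_two_stream(joint_logits, bone_logits).argmax(-1)
```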
A Human Motion Detection Algorithm Based on Improved YOLOv5s and ST-GCN. Aiming at the problem that existing human motion detection algorithms have low recognition accuracy against complex backgrounds, this paper proposes a human act... Y. Ma, X. Wu, H. Lian, ... - Chinese Control & Decision Conference...