Feature Extraction with Video Summarization of Dynamic Gestures for Peruvian Sign Language Recognition
Issue Date
2020-09-01
Keywords
Dynamic Gestures
Feature Extraction
Peruvian Sign Language
Sign Language Recognition
Video Summarization
Journal
Proceedings of the 2020 IEEE 27th International Conference on Electronics, Electrical Engineering and Computing, INTERCON 2020
DOI
10.1109/INTERCON50315.2020.9220243
Additional Links
https://ieeexplore.ieee.org/abstract/document/9220243
Abstract
In Peruvian Sign Language (PSL), recognition of static gestures has been addressed in earlier work. However, holding a conversation in sign language also requires dynamic gestures. We propose a method to extract a feature vector for dynamic gestures of PSL. We collect a dataset of 288 video sequences of words expressed through dynamic gestures, and we define a workflow that processes hand keypoints, obtaining a feature vector for each video sequence with the support of a video summarization technique. We test the method with 9 neural networks, achieving average accuracies between 80% and 90% under 10-fold cross-validation.
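The abstract describes a pipeline of per-frame hand-keypoint processing, video summarization into a fixed-length feature vector, and neural-network classification under 10-fold cross-validation. The following is a minimal sketch of that kind of pipeline, not the authors' exact method: it assumes hand keypoints are already extracted per frame (e.g., 42 values per frame), approximates the summarization step with k-means key-frame selection, and uses a generic MLP classifier; all function names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Sketch only: an assumed pipeline, not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier


def summarize_sequence(frames: np.ndarray, n_keyframes: int = 8) -> np.ndarray:
    """Pick n_keyframes representative frames from a (T, D) keypoint sequence
    and concatenate them into one fixed-length vector of size n_keyframes * D."""
    kmeans = KMeans(n_clusters=n_keyframes, n_init=10, random_state=0).fit(frames)
    chosen = []
    for c in range(n_keyframes):
        # Keep the frame closest to each cluster centroid, in temporal order.
        idx = np.where(kmeans.labels_ == c)[0]
        centroid = kmeans.cluster_centers_[c]
        chosen.append(idx[np.argmin(np.linalg.norm(frames[idx] - centroid, axis=1))])
    return frames[np.sort(chosen)].ravel()


def evaluate(sequences: list[np.ndarray], labels: np.ndarray) -> float:
    """10-fold cross-validation of an MLP on the summarized feature vectors."""
    X = np.stack([summarize_sequence(seq) for seq in sequences])
    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(clf, X, labels, cv=cv).mean()


if __name__ == "__main__":
    # Synthetic stand-in for the 288-sequence dataset: variable-length
    # sequences with 42 keypoint values per frame and 10 word classes.
    rng = np.random.default_rng(0)
    seqs = [rng.normal(size=(rng.integers(30, 60), 42)) for _ in range(288)]
    y = rng.integers(0, 10, size=288)
    print(f"Mean 10-fold accuracy: {evaluate(seqs, y):.2f}")
```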
Type
info:eu-repo/semantics/article
Rights
info:eu-repo/semantics/embargoedAccess
Language
eng
Description
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher.