Feature Extraction with Video Summarization of Dynamic Gestures for Peruvian Sign Language Recognition
Publication date
2020-09-01
Keywords
Dynamic Gestures
Feature Extraction
Peruvian Sign Language
Sign Language Recognition
Video Summarization
Metadata
Journal
Proceedings of the 2020 IEEE 27th International Conference on Electronics, Electrical Engineering and Computing, INTERCON 2020
DOI
10.1109/INTERCON50315.2020.9220243
Additional links
https://ieeexplore.ieee.org/abstract/document/9220243
Abstract
In Peruvian Sign Language (PSL), recognition of static gestures has been proposed earlier. However, holding a conversation in sign language also requires dynamic gestures. We propose a method to extract a feature vector for dynamic gestures of PSL. We collect a dataset with 288 video sequences of words involving dynamic gestures, and we define a workflow that processes the keypoints of the hands, obtaining a feature vector for each video sequence with the support of a video summarization technique. We evaluate the method with 9 neural networks, achieving average accuracies ranging from 80% to 90% under 10-fold cross-validation.
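The abstract outlines a pipeline of per-frame hand keypoints, video summarization into a fixed-length feature vector, and neural-network classification with 10-fold cross-validation. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes keypoints are already extracted per frame, and the keyframe-selection rule, network size, and toy data are placeholder assumptions.

```python
# Illustrative sketch (not the paper's code): given per-frame hand keypoints,
# keep a fixed number of representative frames per video, concatenate them
# into one feature vector, and evaluate a small neural network with
# 10-fold cross-validation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

N_KEYFRAMES = 8        # assumed number of summary frames per video
N_FEATURES = 21 * 2    # assumed: 21 (x, y) hand landmarks per frame

def summarize_video(keypoints):
    """Select N_KEYFRAMES frames from a (frames, features) array.

    Simple stand-in for the summarization step: the frames with the largest
    change from the previous frame are kept, in temporal order.
    """
    motion = np.linalg.norm(np.diff(keypoints, axis=0), axis=1)
    idx = np.sort(np.argsort(motion)[-(N_KEYFRAMES - 1):] + 1)
    idx = np.concatenate(([0], idx))        # always include the first frame
    return keypoints[idx].ravel()           # fixed-length feature vector

# Toy stand-in data: 288 videos of varying length, 10 word classes.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(rng.integers(30, 60), N_FEATURES)) for _ in range(288)]
labels = rng.integers(0, 10, size=288)

X = np.stack([summarize_video(v) for v in videos])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, labels, cv=cv)
print(f"10-fold CV accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```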
Type
info:eu-repo/semantics/article
Rights
info:eu-repo/semantics/embargoedAccess
Language
eng
Description
The full text of this work is not available in the UPC Academic Repository (Repositorio Académico UPC) due to restrictions imposed by the publisher.