Sign2Spoken

Sign Language → Spoken Language Translation

Image Recognition:

[1] Chuanjun Li, B. Prabhakaran. A Similarity Measure for Motion Stream Segmentation and Recognition. Proceedings of the 6th International Workshop on Multimedia Data Mining: Mining Integrated Media and Complex Data, 2005, 89-94.

Comments: Used a CyberGlove to recognize specific motions.
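
The general idea can be pictured roughly as below: slide fixed-length sign templates over a multi-channel glove stream and report low-distance windows. This is only an illustrative sketch; the Euclidean distance, fixed window length, and greedy selection are assumptions, not the paper's actual similarity measure.

 import numpy as np
 def segment_stream(stream, templates, threshold):
     """stream: (T, d) array of glove readings; templates: dict name -> (w, d) array."""
     hits = []
     for name, tmpl in templates.items():
         w = len(tmpl)
         for start in range(len(stream) - w + 1):
             # average per-frame Euclidean distance between this window and the template
             dist = np.linalg.norm(stream[start:start + w] - tmpl) / w
             if dist < threshold:
                 hits.append((dist, start, start + w, name))
     hits.sort()  # best (lowest-distance) matches first
     used = np.zeros(len(stream), dtype=bool)
     segments = []
     for dist, s, e, name in hits:
         if not used[s:e].any():  # keep only non-overlapping matches, greedily
             segments.append((s, e, name))
             used[s:e] = True
     return sorted(segments)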

[2] Sylvie C.W. Ong and Surendra Ranganath. Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, pp. 873-891, June 2005.

Comments: A very good and comprehensive survey of automatic sign language analysis addressing the issues and current state-of-the-art.

[3] Helene Brashear, Valerie Henderson, Kwang-Hyun Park, Harley Hamilton, Seungyon Lee, Thad Starner. American sign language recognition in game development for deaf children. Proceedings of ASSETS 2006: The Sixth International ACM SIGACCESS Conference on Computers and Accessibility, Portland, OR, October 23-25, 79-86.

[4] Thad Starner, Joshua Weaver, Alex Pentland. Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1371-1375, 1998. (MIT)

Comments: Fingerspelling and several signs in context. No spatial references and no facial movements (manual signs only). And of course, the infamous baseball cap. How do they detect signs above the head?

[5] Holger Fillbrandt, Suat Akyol, Karl-Friedrich Kraiss: Extraction of 3D Hand Shape and Posture from Image Sequences for Sign Language Recognition. AMFG 2003: 181-186 (Aachen University Germany)

Comments: Requires the user to wear gloves; appears to focus mostly on hand shape.

[6] James Kramer, Larry Leifer. The Talking Glove: An Expressive and Receptive "Verbal" Communication Aid for the Deaf, Deaf-Blind, and Nonvocal. Conference on Computer Technology, Special Education, and Rehabilitation 1987: 12-16 (Stanford)

Comments: A complicated arm-and-hand device uses accelerometers to recognize fingerspelled words and a few signs and converts them to synthesized speech.

[7] Kouichi Murakami, Hitomi Taguchi. Gesture recognition using recurrent neural networks. SIGCHI 1991: 237-242 (Fujitsu Laboratories, Kawasaki)

Comments: Recognition within a VR environment. More fingerspelling and a few signs.

[8] Peter Vamplew. Recognition of Sign Language Using Neural Networks. PhD Thesis, Department of Computer Science, University of Tasmania 1996

Comments: Recognition with VR gloves.

[9] Omar Al-Jarrah, Alaa Halawani. Recognition of gestures in Arabic sign language using neuro-fuzzy systems. Artificial Intelligence, vol. 133, pp. 117-138, 2001. (Jordan University of Science and Technology, Jordan)

Comments: Recognition with gloves. Recognizes fewer than 50 signs in Arabic Sign Language, isolated rather than continuous. Interesting use of a feature vector to train a neuro-fuzzy system.
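
The fuzzy-inference half of that idea can be pictured roughly as follows: a feature vector of finger-bend angles passes through Gaussian membership functions, and each sign's rule fires with the product of its memberships. The paper learns the memberships and rules from data (a neuro-fuzzy system); everything below, including the two toy rules, is hand-set and purely illustrative.

 import numpy as np
 def gauss(x, mean, sigma):
     # Gaussian membership degree of x in a fuzzy set centred at `mean`
     return np.exp(-0.5 * ((x - mean) / sigma) ** 2)
 def firing_strength(bend_angles, rule):
     # rule: one (mean, sigma) pair per finger; strength = product of memberships
     return np.prod([gauss(a, m, s) for a, (m, s) in zip(bend_angles, rule)])
 # One toy rule per sign: expected bend angle (degrees) and tolerance for each of five fingers.
 RULES = {
     "sign_A": [(80, 15)] * 4 + [(10, 15)],  # four fingers curled, thumb straight
     "sign_B": [(5, 15)] * 4 + [(70, 15)],   # four fingers straight, thumb bent
 }
 def classify(bend_angles):
     scores = {name: firing_strength(bend_angles, rule) for name, rule in RULES.items()}
     return max(scores, key=scores.get)
 print(classify([78, 85, 82, 75, 12]))  # expected: "sign_A"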

[10] Jose Hernandez-Rebollar, Robert Lindeman, Nicholas Kyriakopoulos. A Multi-class Pattern Recognition System for Practical Finger Spelling Translation. IEEE Multimodal Interfaces 2002 (George Washington University)

[11] Pashaloudi N. Vassilia, Margaritis G. Konstantinos. Towards an Assistive Tool for Greek Sign Language Communication. IEEE International Conference on Advanced Learning Technologies 2003 (ICALT'03) 125 (University of Macedonia)

Comments: Recognition using HMMs; 33-sign vocabulary, 86% recognition rate.
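
As a rough sketch of that setup (one HMM per sign, choose the model with the highest likelihood), assuming discrete observation symbols and already-trained parameters rather than the paper's actual features and training procedure:

 import numpy as np
 def log_forward(obs, log_pi, log_A, log_B):
     # log-likelihood of a discrete observation sequence under one HMM
     # log_pi: (N,) initial probs, log_A: (N, N) transitions, log_B: (N, M) emissions (all in log space)
     alpha = log_pi + log_B[:, obs[0]]
     for o in obs[1:]:
         alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
     return np.logaddexp.reduce(alpha)
 def recognize(obs, models):
     # models: dict sign_name -> (log_pi, log_A, log_B); returns the best-scoring sign
     scores = {name: log_forward(obs, *params) for name, params in models.items()}
     return max(scores, key=scores.get)

Training the per-sign models (e.g., with Baum-Welch) and turning the video into observation symbols are the parts this sketch leaves out.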

[12] Annelies Braffort. ARGo: An Architecture for Sign Language Recognition and Interpretation. Proceedings of Gesture Workshop on Progress in Gestural Interaction 1996 (Université d'Orsay)

Bruno Bossard, Annelies Braffort, Michèle Jardino. Some Issues in Sign Language Processing. Gesture Workshop 2003: 90-100

[13] Christian Vogler and Dimitris Metaxas. Adapting Hidden Markov Models for ASL recognition by using three-dimensional computer vision methods. IEEE International Conference on Systems, Man and Cybernetics 1997, pp. 156-161 (University of Pennsylvania)

Christian Vogler, Dimitris N. Metaxas. Handshapes and Movements: Multiple-Channel American Sign Language Recognition. Gesture Workshop 2003: 247-258 (University of Pennsylvania)

[14] Christian Vogler and Siome Goldenstein. Analysis of Facial Expressions in American Sign Language. Proceedings of the 3rd Intl. Conf. on Universal Access in Human-Computer Interaction (UAHCI) 2005. (Gallaudet University)

[15] Helene Brashear, Thad Starner, Paul Lukowicz, Holger Junker. Using Multiple Sensors for Mobile Sign Language Recognition. IEEE International Symposium on Wearable Computers 2003 (Georgia Tech and Wearable Computing Lab, Zurich)

[16] Quan Yuan, Wen Gao, Hongxun Yao, Chunli Wang. Recognition of Strong and Weak Connection Models in Continuous Sign Language. International Conference on Pattern Recognition 2002 (Harbin Institute of Technology, China)