EdTechforDHH

[1] F. Dowaliby and H. Lang. Adjunct aids in instructional prose: a multimedia study with deaf college students. Journal of Deaf Studies and Deaf Education, 4(4):270-282. http://jdsde.oxfordjournals.org/cgi/reprint/4/4/270

Comments: Tested the effects on learning of (1) text only, (2) text plus content movies, (3) text plus sign movies, (4) text plus adjunct questions, and (5) all of these together (the full condition). No factor mattered more than active student participation through the adjunct questions.

[2] J. Schull. An extensible, scalable browser-based architecture for synchronous and asynchronous communication and collaboration systems for deaf and hearing individuals. In Assets ’06: Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility, pages 285-286, New York, NY, USA, 2006. ACM Press. http://portal.acm.org/citation.cfm?id=1168987.1169057

[3] R. Kheir and T. Way. Inclusion of deaf students in computer science classes using real-time speech transcription. SIGCSE Bull., 39(3):261–265, 2007.

[4] Donald H. Beil. Tablet PC – The New New Thing – Demonstration, and Implications in Deaf Education. Instructional Technology and Education of the Deaf Symposium, NTID, June 23-27, 2003.

[5] Western Pennsylvania School for the Deaf boosts visual learning with SMART Board interactive whiteboards.

[6] Lisa B. Elliot, Michael S. Stinson, Barbara G. McKee, Victoria S. Everhart, and Pamela J. Francis. College Students' Perceptions of the C-Print Speech-to-Text Transcription System. Journal of Deaf Studies and Deaf Education, 6(4):285-298, 2001.

      • Signing Avatars for Educational Purposes:

[7] [8] [9] Judy Vesel. Signing Science! Learning & Leading with Technology (ISTE: International Society for Technology in Education), 32(8):30-35, May 2005.

[10] Ron Cole, Dominic W. Massaro, Jacques de Villiers, Brian Rundle, Khaldoun Shobaki, Johan Wouters, Michael Cohen, Jonas Beskow, Patrick Stone, Pamela Connors, Alice Tarachow, Daniel Solcher. New tools for interactive speech and language training: Using animated conversational agents in the classrooms of profoundly deaf children.

      • Less relevant stuff

[11] Kathleen F. McCoy and Lisa N. Masterman (Michaud). A tutor for teaching English as a second language for deaf users of American Sign Language. In Proceedings of Natural Language Processing for Communication Aids, an ACL/EACL97 Workshop, pages 160-164, Madrid, Spain, July 1997.