Includes bibliographical references (p. 403-468) and index.
Presents a complete overview of all aspects of audiovisual speech, including perception, production, brain processing, and technology.
Cover; Audiovisual Speech Processing; Title; Copyright; Dedication; Contents; Figures; Tables; Contributors; Preface; Acknowledgments; Introduction; 1: Three puzzles of multimodal speech perception; 1.1 Introduction; 1.2 Organization; 1.2.1 Assessing audiovisual speech transmission; 1.2.2 The autonomy of audiovisual coherence; 1.2.3 A unimodal parallel; 1.2.4 The puzzle; 1.3 Event perception and speech perception; 1.3.1 Temporal coincidence and phonetic perception; 1.3.2 Whose face, whose voice?; 1.3.3 Whose vowels?; 1.3.4 The puzzle; 1.4 Experience; 1.4.1 Perceptual tuning to a talker
1.4.2 Sensory substitution; 1.4.3 Natural quality; 1.4.4 The puzzle; 1.5 A conclusion; 1.6 Acknowledgments; 2: Visual speech perception; 2.1 Introduction; 2.1.1 Visual speech perception and visual speech signals; 2.1.2 Phonetic impoverishment in relationship to words; Visemes; Homophenous words; 2.1.3 Spoken word recognition; 2.2 Evaluation of visemes and word homopheny; 2.2.1 Experimental method; 2.2.2 Results; 2.2.3 Summary; 2.3 Phonetic distinctiveness of English words; 2.3.1 Predictability of visual spoken word recognition; 2.3.2 Summary; 2.4 Research strategies; 2.4.1 Summary
2.5 General conclusions; 2.6 Acknowledgments; 3: Dynamic information for face perception; 3.1 Introduction; 3.2 Motion information for expression perception; 3.3 Motion information for visual speech perception; 3.4 Dynamic information for familiar face recognition; 3.5 Dynamic information for unfamiliar face learning; 3.6 Practical considerations; 3.7 Theoretical interpretations; 3.8 Future research and conclusions; 4: Investigating auditory-visual speech perception development; 4.1 Speech perception is auditory-visual; 4.2 Auditory-visual speech perception
4.3 Methods for investigating development; 4.4 The ontogenetic development method; 4.4.1 Auditory-visual speech perception in infants; 4.4.2 The development of auditory-visual speech perception; 4.5 The cross-language development method; 4.6 Combined methods; 4.7 Conclusions and an application: automatic speech recognition; 4.8 Acknowledgments; 5: Brain bases for seeing speech: fMRI studies of speechreading; 5.1 Introduction; 5.2 Route maps and guidelines; 5.2.1 Brain regions; 5.2.2 Networks and connections; 5.2.3 Processing streams; 5.2.4 Implications for speechreading and audiovisual speech
5.3 Silent speechreading and auditory cortex; 5.3.1 How specific is the activation?; 5.3.2 Controversy: which parts of auditory cortex are activated by silent seen speech in hearing people?; 5.3.3 Still and moving speech: dual routes for silent speechreading?; 5.3.4 The role of the superior temporal sulcus; 5.3.5 STSp and audiovisual speech: a binding function?; 5.4 Audiovisual integration: timing; 5.5 Speechreading: other cortical regions; 5.6 Speechreading in people born deaf; 5.7 Conclusions, directions; 5.8 Acknowledgments; 5.9 Appendix: Glossary of acronyms and terms
5.9.1 Functional imaging techniques (and see Cabeza and Kingstone 2006 for further details)