Vadim Kimmelman. Private photo

Vadim Kimmelman (University of Bergen) will give a talk entitled ‘Analyzing nonmanual markers in sign languages with computer vision’

Abstract

Sign languages use body and head movements, as well as facial expressions, to convey some lexical and much grammatical information. While a great deal of research has been devoted to nonmanual markers in different sign languages, a detailed analysis of their formal side has been very difficult and time-consuming, or has required expensive equipment – until now. Recent breakthroughs in computer vision allow reliable identification and tracking of the body, hands, head, and facial features in video recordings (see, e.g., the OpenPose software: https://github.com/CMU-Perceptual-Computing-Lab/openpose). OpenPose is already actively used in the field of sign language recognition and automatic translation. However, being able to track the location of a body part in a video does not mean that the output can be used directly for linguistic analysis. I will present some pilot attempts at analyzing nonmanual markers of question marking in Kazakh-Russian Sign Language, conducted with colleagues in Kazakhstan and Russia. I will show that the use of computer vision in sign language research is very promising, but also that it requires substantial preparatory research before it can be used by everyone.
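To illustrate the gap between raw tracking output and a linguistically usable measure, here is a minimal Python sketch. It assumes OpenPose's per-frame output convention of flat [x, y, confidence] keypoint lists, and it assumes the face points follow the common 68-point facial-landmark layout (eyebrows at indices 17–26, eye contours at 36–47); both the indices and the brow-raise measure itself are illustrative assumptions, not the method described in the talk.

```python
# Assumed indices into an OpenPose-style face keypoint list, following the
# common 68-point landmark layout. These are illustrative assumptions.
BROW_IDX = range(17, 27)   # left + right eyebrow points
EYE_IDX = range(36, 48)    # left + right eye contour points

def keypoints_xyc(flat):
    """Split a flat [x0, y0, c0, x1, y1, c1, ...] list into (x, y, conf) triples."""
    return [(flat[i], flat[i + 1], flat[i + 2]) for i in range(0, len(flat), 3)]

def brow_raise(face_flat, min_conf=0.3):
    """A toy brow-raise measure for one frame: mean vertical eyebrow-to-eye
    distance, normalized by eye width so it is comparable across signers
    and camera distances. Larger values suggest raised eyebrows (a candidate
    nonmanual marker of questions). Returns None if detection confidence
    is too low to trust the frame.
    """
    pts = keypoints_xyc(face_flat)
    brows = [pts[i] for i in BROW_IDX if pts[i][2] >= min_conf]
    eyes = [pts[i] for i in EYE_IDX if pts[i][2] >= min_conf]
    if not brows or not eyes:
        return None
    brow_y = sum(p[1] for p in brows) / len(brows)
    eye_y = sum(p[1] for p in eyes) / len(eyes)
    xs = [p[0] for p in eyes]
    scale = (max(xs) - min(xs)) or 1.0  # eye span as a rough face scale
    return (eye_y - brow_y) / scale     # image y grows downward

# Toy frame: 70 face points, eyebrows at y=100, eyes at y=120 (image coords).
flat = [0.0, 0.0, 0.0] * 70
for i in BROW_IDX:
    flat[3 * i:3 * i + 3] = [130.0, 100.0, 1.0]
for j, i in enumerate(EYE_IDX):
    flat[3 * i:3 * i + 3] = [100.0 + 5.0 * j, 120.0, 1.0]
print(round(brow_raise(flat), 3))  # prints 0.364
```

Even this toy version shows why preparatory work is needed: one must decide which keypoints to aggregate, how to normalize across signers, and how to handle low-confidence frames before any measure can feed a linguistic analysis.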

Zoom

This digital seminar will be held via Zoom. For access to the meeting, please contact Hatice Zora or Dmitry Nikolaev in advance:
hatice@ling.su.se
dmitry.nikolaev@ling.su.se

Photo: Shutterstock