1. Multimodal communication

Key ILSP researchers

  • Dr. Harris Papageorgiou
  • Dr. Spyros Raptis

Past activities

Social interactions are multimodal; their analysis is accurate and representative of the message conveyed only if several behavioural cues are explored jointly. This holds for most behaviours, since unimodal nonverbal behavioural patterns are quite ambiguous, and human perception of these patterns is multimodal and unified rather than unimodal and segregated. Therefore, auditory and visual signals captured by electronic devices are not enough to capture and understand behaviour; these signals need to be coupled with human perception. It is the human data accompanying these multimodal signals that provide information on the perceptual saliency of a signal in a given context, the effective perceptual integration of signals in time, the signal discrepancies tolerated by the perceptual system, etc. ILSP proposes to contribute to both the cognitive and the computational aspects of this task.

The research team has a strong background and long experience in multimodality research and is well established regionally. However, it needs to expand its focus and closely follow recent developments and the current state of the art in fields of particular interest. The focus lies in the following areas:

  • the multimodal description, analysis and recognition of a speaker’s attitudes and emotions, including their intensity and dynamics, during various forms of talk-in-interaction (spontaneous conversations, interviews, etc.), and their connection with the speech co-expressed with them;
  • high-quality speech synthesis, with regard to emotional and expressive speech synthesis, multimodal speech synthesis with audio-visual output, and voice transformation. To address these issues effectively, effort is invested in shifting from today’s predominant methods to new paradigms (including statistical/parametric synthesis) that provide the versatility and manipulability required for the above tasks;
  • cognitive experiments on multimodal perceptual binding and multimodal temporal binding in social interaction: the optimal integration of the unimodal signals of a multimodal behaviour, the function and contribution of each unimodal signal to successful integration, and the use of perceptual and temporal binding principles for generating and predicting behavioural patterns.
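A common computational counterpart to the integration of unimodal signals described above is decision-level (late) fusion, where per-modality classifiers are combined with reliability weights. The following is a minimal, hypothetical sketch; the emotion labels, modality names, weights, and scores are illustrative assumptions, not outputs of any ILSP system.

```python
def fuse_late(unimodal_scores, weights):
    """Weighted decision-level fusion of per-modality class posteriors.

    unimodal_scores: dict mapping modality -> {label: probability}
    weights: dict mapping modality -> reliability weight
    Returns a fused, renormalised {label: probability} distribution.
    """
    labels = next(iter(unimodal_scores.values())).keys()
    fused = {
        label: sum(weights[m] * scores[label]
                   for m, scores in unimodal_scores.items())
        for label in labels
    }
    # Renormalise so the fused scores again form a distribution.
    total = sum(fused.values())
    return {label: s / total for label, s in fused.items()}

# Illustrative per-modality posteriors for one utterance (made-up numbers).
audio = {"neutral": 0.2, "angry": 0.7, "happy": 0.1}
video = {"neutral": 0.5, "angry": 0.3, "happy": 0.2}
fused = fuse_late({"audio": audio, "video": video},
                  {"audio": 0.6, "video": 0.4})
```

The reliability weights are where perceptual findings could enter such a model, e.g. weighting a modality by how salient or trustworthy human observers find it in a given context.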

Related news

The Institute for Language and Speech Processing and the "Athena" Research Centre organise an Open Event in Xanthi

"25 Years of Research and Innovation in Language, Culture and Content Technologies" is the title of the Open Event organised by the Institute for Language and Speech Processing (ILSP), one of the institutes of the "Athena" Research Centre, in Xanthi on Monday 25 May 2015 (18.00–21.00) at the Elisso hotel. The academic/scientific, business, educational and wider public of Xanthi [...]

Posted in Activities, Info Days, Language learning and learning disabilities, Multimedia processing, Multimodal communication, Open Days, Text mining

A Visit to UPEM, the Université Paris-Est Marne-la-Vallée

Dr. Angeliki Fotopoulou, Director of Research at ILSP, visited the Université Paris-Est Marne-la-Vallée and extended collaboration in the areas of frozen expressions with light verbs, the documentation of their properties, and the degree of fixity of verbal expressions of emotion. In collaboration with Prof. P. Kyriakopoulou they continued the elaboration of a Sentiments Grammar in a Unitex [...]

Posted in Activities, Multimodal communication, Scientific Presentations, Visits

Info Day: Language and Content Processing Technologies @ ILSP / “Athena” RIC

The "Athena" Research Centre is pleased to take part in the Athens Science Festival 2015, from 17 to 22 March, at Technopolis City of Athens. The Athens Science Festival aims to make science more accessible to the general public and to inspire and motivate people of all ages to discover science [...]

Posted in Activities, Info Days, Language learning and learning disabilities, Multimedia processing, Multimodal communication, Priority Research Axes, Project news, Text mining

Short-term incoming visit from researchers of the Université Paris-Est Marne-la-Vallée LIGM

Matthieu Constant gave a talk on December 12 on the integration of multiword expression recognition into statistical syntactic parsing, focusing on the description of various integration strategies (attached presentation). He worked with Dr Aggeliki Fotopoulou on the lexical and semantic classification of multiword expressions, focusing on the detection of the continuum between almost free expressions [...]

Posted in Activities, Multimodal communication, Visits

Workshop “Understanding and Modeling Multiparty, Multimodal Interactions (UM3I)” in the context of the 16th ACM International Conference on Multimodal Interaction (ICMI 2014)

The LangTERRA Workshop on “Understanding and Modeling Multiparty, Multimodal Interactions (UM3I)” took place at Boğaziçi University in Istanbul, Turkey, on November 16, 2014, in the context of the 16th ACM International Conference on Multimodal Interaction (ICMI 2014). The ICMI Conference is one of the most significant venues related to multimodal communication and activities encompassed in [...]

Posted in Multimodal communication, Workshops