|Publication type:||Conference paper|
|Type of review:||Peer review (publication)|
|Title:||Evaluating audiovisual source separation in the context of video conferencing|
|Authors:||Inan, Berkay; Tukuljac, Helena Peic; Pena, Rodrigo C. G.|
|et al.:||No|
|Proceedings:||Proceedings Interspeech 2019|
|Conference details:||Interspeech 2019, Graz, Austria, 15-19 September 2019|
|Publisher / Ed. Institution:||International Speech Communication Association (ISCA)|
|Subjects:||Speech enhancement; Source separation; Multi-modal; Audiovisual|
|Subject (DDC):||621.3: Electrical engineering and electronics|
|Abstract:||Source separation from mono-channel audio is a challenging problem, particularly for speech separation, where source contributions overlap in both time and frequency. This task is of high interest for applications such as video conferencing. Recent progress in machine learning has shown that incorporating visual cues from the video can improve source separation performance. Starting from a recently designed deep neural network, we assess its ability and robustness in separating the visible speakers’ speech from other interfering speech or signals. We test it on different configurations of video recordings in which the speaker’s face may not be fully visible. We also assess the network’s performance with respect to different sets of visual features extracted from the speakers’ faces.|
|Fulltext version:||Published version|
|License (according to publishing contract):||License according to publishing contract|
|Department:||School of Engineering|
|Organisational Unit:||Institute of Data Analysis and Process Design (IDP)|
|Appears in Collections:||Publikationen School of Engineering|