Publication type: Conference paper
Type of review: Peer review (publication)
Title: Evaluating audiovisual source separation in the context of video conferencing
Authors: Inan, Berkay
Cernak, Milos
Grabner, Helmut
Tukuljac, Helena Peic
Pena, Rodrigo C. G.
Ricaud, Benjamin
et al.: No
DOI: 10.21437/Interspeech.2019-2671
Proceedings: Proceedings Interspeech 2019
Page(s): 4579
Pages to: 4583
Conference details: Interspeech 2019, Graz, Austria, 15-19 September 2019
Issue Date: 2019
Publisher / Ed. Institution: International Speech Communication Association (ISCA)
Language: English
Subjects: Speech enhancement; Source separation; Multi-modal; Audiovisual
Subject (DDC): 
Abstract: Source separation involving mono-channel audio is a challenging problem, in particular for speech separation, where source contributions overlap in both time and frequency. This task is of high interest for applications such as video conferencing. Recent progress in machine learning has shown that combining visual cues from the video can increase source separation performance. Starting from a recently designed deep neural network, we assess its ability and robustness in separating the visible speakers' speech from other interfering speech or signals. We test it for different configurations of video recordings in which the speaker's face may not be fully visible. We also assess the performance of the network with respect to different sets of visual features extracted from the speakers' faces.
Fulltext version: Published version
License (according to publishing contract): Licence according to publishing contract
Departement: School of Engineering
Organisational Unit: Institute of Data Analysis and Process Design (IDP)
Appears in collections: Publikationen School of Engineering

Files in This Item:
There are no files associated with this item.
