The first lecture will be held on October 28 at 14:15.
The lecture has the following format:
For further information, please contact Prof. Dr. Emanuël Habets.
Speech is at the core of human communication and increasingly central to our interaction with technology. From voice assistants and teleconferencing to hearing aids, security applications, and immersive media, speech technologies must perform robustly in real-world acoustic environments. These environments are often far from ideal: noise, reverberation, and interfering sources can severely degrade the quality and intelligibility of speech signals. At the same time, advances in machine learning and signal processing have opened new opportunities for creating, modifying, and analyzing speech in powerful ways.
This lecture provides a comprehensive introduction to advanced speech processing, covering both classical and modern neural approaches. Topics include:
The lecture combines theoretical foundations, algorithmic insights, and practical demonstrations. Students will gain an understanding of both classical methods and cutting-edge neural approaches, and their application in real-world scenarios.
Target audience: This lecture is designed for graduate students and researchers interested in speech and audio technology. By the end of the lecture, participants will have a strong foundation to understand, design, and critically evaluate methods in advanced speech processing.
The lecture slides can be downloaded from StudOn.
Further audio-related courses offered by the AudioLabs can be found at: