My research focuses on audio and visual signal processing, robotic perception, and machine learning. I have developed microphone array techniques for sound enhancement, source localization, and blind source separation; audio-visual signal processing techniques for acoustic sensing from flying robots (mini-drones); and machine learning techniques for human activity and context recognition from wearable sensors (motion, GPS, sound, and image). This page shows demos of my research outcomes.
- Pseudo-determined BSS for ad-hoc microphone networks
This demo performs blind source separation on crowdsourced audio recordings of a cocktail-party environment (8 microphones, 4 speakers).
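As a rough illustration of the underlying idea (not the demo's actual algorithm), the sketch below separates 4 simulated sources from an 8-channel instantaneous mixture using whitening followed by symmetric FastICA. All signals, the mixing matrix, and the iteration count are made-up toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the crowdsourced recordings: 4 independent sources
# observed by 8 microphones through an unknown mixing matrix.
n_sources, n_mics, n_samples = 4, 8, 10000
t = np.arange(n_samples)
S = np.vstack([
    np.sign(np.sin(2 * np.pi * 0.011 * t)),        # square-like source
    np.sin(2 * np.pi * 0.003 * t),                 # sinusoid
    rng.laplace(size=n_samples),                   # sparse noise-like source
    np.sign(np.cos(2 * np.pi * 0.007 * t + 1.0)),  # another square-like source
])
A = rng.normal(size=(n_mics, n_sources))           # unknown mixing matrix
X = A @ S                                          # 8-channel observation

# Whitening: keep the 4 strongest principal components, unit variance.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n_samples)
idx = np.argsort(d)[::-1][:n_sources]
W_white = (E[:, idx] / np.sqrt(d[idx])).T          # (4, 8) whitening matrix
Z = W_white @ Xc

# Symmetric FastICA with the tanh nonlinearity.
W = rng.normal(size=(n_sources, n_sources))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = G @ Z.T / n_samples - np.diag((1 - G**2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)
    W = u @ vt                                     # symmetric decorrelation

S_hat = W @ Z                                      # recovered sources (up to
                                                   # permutation, sign, scale)
```

The recovered rows of `S_hat` match the true sources only up to permutation, sign, and scale, which is the usual BSS ambiguity.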
- BSS + beamforming
This demo extracts a target speaker from an extremely noisy cocktail-party environment (SNR < -10 dB) by combining beamforming and blind source separation.
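The beamforming half of this combination can be sketched with a plain delay-and-sum beamformer (the demo's own pipeline is more sophisticated). The array geometry, sample rate, source angle, and noise level below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed scenario: a target tone buried in loud uncorrelated noise
# (input SNR around -12 dB), captured by an 8-element linear array.
fs, c = 16000, 343.0            # sample rate (Hz), speed of sound (m/s)
n_mics, spacing = 8, 0.04       # 4 cm inter-microphone spacing
theta = np.deg2rad(60)          # assumed target direction of arrival
n = 16000
target = np.sin(2 * np.pi * 440 * np.arange(n) / fs)

# Far-field propagation delay to each microphone.
delays = np.arange(n_mics) * spacing * np.cos(theta) / c   # seconds

# Observations: fractionally delayed target plus strong independent noise.
freqs = np.fft.rfftfreq(n, 1 / fs)
T = np.fft.rfft(target)
X = np.empty((n_mics, n))
for m in range(n_mics):
    delayed = np.fft.irfft(T * np.exp(-2j * np.pi * freqs * delays[m]), n)
    X[m] = delayed + 3.0 * rng.normal(size=n)   # noise std 3 vs signal rms ~0.7

# Delay-and-sum: undo each microphone's delay, then average the channels.
Y = np.zeros(n)
for m in range(n_mics):
    Xm = np.fft.rfft(X[m])
    Y += np.fft.irfft(Xm * np.exp(2j * np.pi * freqs * delays[m]), n)
Y /= n_mics

def snr_db(est):
    """SNR of an estimate against the known clean target."""
    noise = est - target
    return 10 * np.log10((target**2).sum() / (noise**2).sum())

gain = snr_db(Y) - snr_db(X[0])   # array gain over a single microphone
```

With uncorrelated noise across microphones, averaging 8 aligned channels yields roughly 10·log10(8) ≈ 9 dB of array gain, which is why beamforming is a useful front end before BSS at such low input SNRs.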
- MBMC permutation alignment
This demo shows blind source separation results (4 microphones, 4 speakers) obtained with the MBMC permutation ambiguity alignment algorithm.
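For context on the problem this demo addresses: frequency-domain BSS separates sources independently in each frequency bin, so the source ordering differs from bin to bin and must be aligned. The sketch below fixes a simulated permutation problem with a simple greedy envelope-correlation pass; this is a generic baseline, not the MBMC algorithm itself, and all sizes and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate per-bin separated magnitude envelopes whose source ordering
# is randomly permuted in every frequency bin.
n_srcs, n_bins, n_frames = 4, 64, 200
base = np.abs(rng.normal(size=(n_srcs, n_frames))) * \
       (rng.random((n_srcs, n_frames)) < 0.3)    # sparse per-source activity
envelopes = np.empty((n_bins, n_srcs, n_frames))
for f in range(n_bins):
    p = rng.permutation(n_srcs)
    envelopes[f] = base[p] + 0.1 * np.abs(rng.normal(size=(n_srcs, n_frames)))

def align(env):
    """Greedily align each bin's rows to a running centroid by correlation."""
    aligned = env.copy()
    centroid = aligned[0].copy()                 # bin 0 fixes the reference order
    for f in range(1, env.shape[0]):
        a = env[f] - env[f].mean(1, keepdims=True)
        b = centroid - centroid.mean(1, keepdims=True)
        corr = (a @ b.T) / (np.linalg.norm(a, axis=1)[:, None] *
                            np.linalg.norm(b, axis=1)[None, :] + 1e-12)
        # Greedy assignment: take the best-correlated (row, slot) pairs first.
        perm = np.full(env.shape[1], -1)
        used_r, used_c = set(), set()
        for idx in np.argsort(corr, axis=None)[::-1]:
            r, s = divmod(idx, env.shape[1])
            if r not in used_r and s not in used_c:
                perm[s] = r
                used_r.add(r); used_c.add(s)
        aligned[f] = env[f][perm]
        centroid += aligned[f]                   # refine the reference envelopes
    return aligned

aligned = align(envelopes)
```

After alignment, row i holds the same source in every frequency bin, which is the precondition for reconstructing coherent wideband source signals.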