In this talk we present a brief introduction to acoustic sensor networks and to feature extraction schemes that aim to improve the privacy vs. utility trade-off for audio classification in such networks. Our privacy enhancement approach consists of neural-network-based feature extraction models that aim to minimize extraneous information in the feature set. To this end, we present adversarial, Siamese, and variational information feature extraction schemes in conjunction with neural-network-based classification (trust) and attacker (threat) models. We consider and compare schemes with and without explicit knowledge of the threat model. For the latter, we analyze and apply the variational information approach in a smart-home scenario. It is demonstrated that the proposed privacy-preserving feature representation generalizes well to variations in dataset size and scenario complexity while successfully countering speaker identification attacks.
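
The abstract does not specify implementation details. As a rough illustration of the adversarial variant only, the sketch below sets up a shared feature extractor, a utility head (acoustic event classification) and an attacker head (speaker identification), trained adversarially via a gradient-reversal layer. All network sizes, class counts, and the gradient-reversal formulation are illustrative assumptions, not the models used in the talk.

```python
# Hedged sketch of adversarial privacy-preserving feature extraction.
# Dimensions, class counts and architectures are hypothetical.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class FeatureExtractor(nn.Module):
    """Shared front end producing the (privacy-sensitive) feature set."""
    def __init__(self, n_in=64, n_feat=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                 nn.Linear(128, n_feat))
    def forward(self, x):
        return self.net(x)

class Head(nn.Module):
    """Generic classification head, reused for utility and attacker models."""
    def __init__(self, n_feat, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_feat, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))
    def forward(self, z):
        return self.net(z)

# Utility task: acoustic event classification; threat: speaker identification.
extractor = FeatureExtractor()
utility_head = Head(32, n_classes=10)   # 10 event classes (assumed)
attacker_head = Head(32, n_classes=20)  # 20 speakers (assumed)

params = (list(extractor.parameters()) + list(utility_head.parameters())
          + list(attacker_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, y_event, y_speaker, lam=0.5):
    """One adversarial step: preserve event information, suppress speaker information."""
    z = extractor(x)
    loss_util = ce(utility_head(z), y_event)
    # The attacker head is trained normally, but the reversed gradient
    # pushes the extractor to remove speaker-discriminative information.
    loss_att = ce(attacker_head(grad_reverse(z, lam)), y_speaker)
    loss = loss_util + loss_att
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss_util.item(), loss_att.item()

# Toy usage with random inputs (batch of 8, 64-dim features).
x = torch.randn(8, 64)
y_event = torch.randint(0, 10, (8,))
y_speaker = torch.randint(0, 20, (8,))
print(train_step(x, y_event, y_speaker))
```

The Siamese and variational information schemes mentioned in the abstract would replace the attacker-driven loss with a similarity-based or information-theoretic objective; the gradient-reversal setup above corresponds only to the case where the threat model is explicitly known.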