| Key | Action |
|---|---|
| K or Space | Play / Pause |
| M | Mute / Unmute |
| C | Select next subtitles |
| A | Select next audio track |
| V | Show slide in full page or toggle automatic source change |
| Left arrow | Seek 5s backward |
| Right arrow | Seek 5s forward |
| Shift + left arrow or J | Seek 10s backward |
| Shift + right arrow or L | Seek 10s forward |
| Control + left arrow | Seek 60s backward |
| Control + right arrow | Seek 60s forward |
| Shift + down arrow | Decrease volume |
| Shift + up arrow | Increase volume |
| Shift + comma | Decrease playback rate |
| Shift + dot or Shift + semicolon | Increase playback rate |
| End | Seek to end |
| Home | Seek to beginning |
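
Purely as illustration, not the player's actual source: a minimal TypeScript sketch of how a shortcut map like the one above could be wired to an HTMLMediaElement. The element id `player` and the volume and playback-rate step sizes are assumptions; the key bindings and seek distances come from the table. The player-specific C, A, and V actions are omitted because they depend on player internals.

```typescript
const video = document.getElementById("player") as HTMLVideoElement;

const VOLUME_STEP = 0.1;  // assumed step; the real player may differ
const RATE_STEP = 0.25;   // assumed step

document.addEventListener("keydown", (e: KeyboardEvent) => {
  switch (e.key) {
    case " ":
    case "k":            // K or space: play / pause
      if (video.paused) { void video.play(); } else { video.pause(); }
      break;
    case "m":            // M: mute / unmute
      video.muted = !video.muted;
      break;
    case "ArrowLeft":    // 5s back; 10s with shift, 60s with control
      video.currentTime -= e.ctrlKey ? 60 : e.shiftKey ? 10 : 5;
      break;
    case "ArrowRight":   // 5s forward; 10s with shift, 60s with control
      video.currentTime += e.ctrlKey ? 60 : e.shiftKey ? 10 : 5;
      break;
    case "j":            // J: seek 10s backward
      video.currentTime -= 10;
      break;
    case "l":            // L: seek 10s forward
      video.currentTime += 10;
      break;
    case "ArrowUp":      // shift + up arrow: increase volume
      if (!e.shiftKey) return;
      video.volume = Math.min(1, video.volume + VOLUME_STEP);
      break;
    case "ArrowDown":    // shift + down arrow: decrease volume
      if (!e.shiftKey) return;
      video.volume = Math.max(0, video.volume - VOLUME_STEP);
      break;
    case "<":            // shift + comma (US layout): slower
      video.playbackRate = Math.max(RATE_STEP, video.playbackRate - RATE_STEP);
      break;
    case ">":            // shift + dot (US layout): faster
    case ":":            // shift + semicolon (US layout)
      video.playbackRate += RATE_STEP;
      break;
    case "End":          // seek to end
      video.currentTime = video.duration;
      break;
    case "Home":         // seek to beginning
      video.currentTime = 0;
      break;
    default:
      return;            // unhandled key: let the browser process it
  }
  e.preventDefault();    // handled: stop page scroll on space/arrows
});
```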
In 2019, media scandals raised awareness of privacy and security violations in Conversational User Interfaces (CUIs) such as Alexa, Siri, and Google Assistant. Users report that they perceive CUIs as “creepy” and that they are concerned about their privacy. The General Data Protection Regulation (GDPR) gives users the right to control the processing of their data, for example by opting out or requesting deletion, and the right to obtain information about their data. Furthermore, the GDPR calls for seamless communication of user rights, which is currently poorly implemented in CUIs. This talk presents a data collection interface called Chatbot Language (CBL), which we use to investigate how privacy and security can be communicated in a dialogue between user and machine. We find that conversational privacy can positively affect user perceptions of privacy and security. Moreover, user choices suggest that users are interested in obtaining information about their privacy and security in dialogue form. We discuss the implications and limitations of this research.