Action | Key
---|---
Play / Pause | K or space
Mute / Unmute | M
Select next subtitles | C
Select next audio track | A
Show slide in full page or toggle automatic source change | V
Seek 5s backward | left arrow
Seek 5s forward | right arrow
Seek 10s backward | shift + left arrow or J
Seek 10s forward | shift + right arrow or L
Seek 60s backward | control + left arrow
Seek 60s forward | control + right arrow
Decrease volume | shift + down arrow
Increase volume | shift + up arrow
Decrease playback rate | shift + comma
Increase playback rate | shift + dot or shift + semicolon
Seek to end | end
Seek to beginning | home
In 2019, media scandals raised awareness of privacy and security violations in Conversational User Interfaces (CUIs) such as Alexa, Siri, and Google. Users report that they perceive CUIs as “creepy” and that they are concerned about their privacy. The General Data Protection Regulation (GDPR) gives users the right to control the processing of their data, for example by opting out or requesting deletion, and it gives them the right to obtain information about their data. Furthermore, the GDPR calls for seamless communication of user rights, which is currently poorly implemented in CUIs. This talk presents a data collection interface, called Chatbot Language (CBL), that we use to investigate how privacy and security can be communicated in a dialogue between user and machine. We find that conversational privacy can positively affect user perceptions of privacy and security. Moreover, user choices suggest that users are interested in obtaining information about their privacy and security in dialogue form. We discuss the implications and limitations of this research.
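To make the idea of conversational privacy concrete, here is a minimal sketch of how GDPR rights could be surfaced in dialogue form. The intent keywords, responses, and function names are illustrative assumptions, not the CBL implementation presented in the talk:

```python
# Hypothetical sketch of a conversational privacy handler: it maps user
# utterances to GDPR rights and answers in dialogue form. The trigger
# phrases and responses are illustrative assumptions, not CBL itself.

INTENTS = {
    # GDPR right -> (trigger phrases, dialogue response)
    "access": (
        ["what do you know", "show my data", "my data"],
        "Under Art. 15 GDPR you may request a copy of your data. "
        "Shall I prepare an export for you?",
    ),
    "erasure": (
        ["delete", "erase", "forget me"],
        "Under Art. 17 GDPR you may request deletion. "
        "Do you want me to delete your stored conversations?",
    ),
    "objection": (
        ["opt out", "stop processing", "object"],
        "Under Art. 21 GDPR you may object to processing. "
        "I can disable data collection for this account.",
    ),
}

def privacy_reply(utterance: str) -> str:
    """Return a dialogue answer for a privacy-related utterance."""
    text = utterance.lower()
    for patterns, answer in INTENTS.values():
        if any(p in text for p in patterns):
            return answer
    return ("I can explain your privacy rights, for example "
            "'what do you know about me?' or 'delete my data'.")

if __name__ == "__main__":
    print(privacy_reply("Please delete my data."))
```

Asking “Please delete my data.” then yields the Art. 17 response, keeping the legal information inside the conversation rather than in a separate policy page.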