| Key | Action |
|---|---|
| K or Space | Play / Pause |
| M | Mute / Unmute |
| C | Select next subtitle track |
| A | Select next audio track |
| V | Show slide in full page or toggle automatic source change |
| Left arrow | Seek 5s backward |
| Right arrow | Seek 5s forward |
| Shift + Left arrow or J | Seek 10s backward |
| Shift + Right arrow or L | Seek 10s forward |
| Control + Left arrow | Seek 60s backward |
| Control + Right arrow | Seek 60s forward |
| Shift + Down arrow | Decrease volume |
| Shift + Up arrow | Increase volume |
| Shift + Comma | Decrease playback rate |
| Shift + Dot or Shift + Semicolon | Increase playback rate |
| End | Seek to end |
| Home | Seek to beginning |
Research has shown that trust is an essential aspect of human-computer interaction, determining the degree to which a person is willing to use a system. Predicting the level of trust that a user has in a system's skills could be used to correct potential distrust by having the system take relevant measures, such as explaining its actions more thoroughly. In our research project, we explored the feasibility of automatically detecting the level of trust that a user has in a virtual assistant (VA) based on their speech. For this purpose, we designed a protocol for collecting speech data, consisting of an interactive session in which the subject is asked to answer a series of factual questions with the help of a virtual assistant that they are led to believe is either very reliable or unreliable. We collected a speech corpus in Argentine Spanish and found that the protocol effectively elicited the reported level of trust. Preliminary results using random forest classifiers showed that a subject's speech can be used to detect which type of VA they were using with an accuracy of up to 76%, compared to a random baseline of 50%.
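The classification setup described above can be illustrated with a minimal sketch: a binary random forest classifier trained on per-utterance speech features, evaluated against the 50% chance level. Note that the features and data here are synthetic stand-ins, not the study's corpus, and the feature dimensionality, sample size, and hyperparameters are arbitrary assumptions for illustration only.

```python
# Illustrative sketch (synthetic data): detecting which VA condition
# (reliable vs. unreliable) a speaker used, from speech features,
# with a random forest classifier as in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in acoustic/prosodic features (e.g. pitch, energy, speech-rate stats).
n_samples, n_features = 60, 20
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)   # 0 = reliable VA, 1 = unreliable VA
X[y == 1] += 0.8                         # inject a detectable group difference

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # chance level is ~0.5 for two classes
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Cross-validation is used here so the reported accuracy reflects held-out data, which is the standard way to compare against a random baseline in a balanced two-class problem.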