Recruiting participants

We are now recruiting participants in the South East of England for our first study – please take a look at the advert below and see whether you or someone you know could take part.

Do you have ‘smart’ technology in your home? Perhaps you weren’t the one who installed it, or you don’t consider yourself an advanced user?

We are looking for people who have one or more devices such as smart thermostats (e.g. Nest or Hive), voice user interfaces (e.g. Alexa or Google Home), smart plugs, motion sensors or automated security systems in their homes. We are working on the design of a new tool that is intended to make it easier to get smart home technologies to do what you want, so we particularly want to speak to people who don’t see themselves as technology experts.

As part of a University of Sussex research project, we would like to come and interview people in their homes. The interview would take around 1 hour, and would involve us asking questions about your current use of technology, and possible future requirements. We’d also like you to try out a prototype voice user interface and would ask for your feedback on the experience to help us improve our design.

Two University of Sussex researchers would visit your home to carry out the interview, and we would pay you £10 to thank you for giving up your time.

If you are interested in finding out more, please email conver-se@sussex.ac.uk

Posted in News, Project updates, Recruitment

VUX Workshop at CHI 2018

At the end of April, I attended CHI 2018 in Montreal, and took part in a workshop on Voice-based Conversational UX Studies and Design. I submitted a position paper for the workshop, based on our pilot work in the Sussex RDF-funded project, Programming in Situ.

The other position papers represented a very broad range of interests and application areas for voice-user interfaces (VUIs), but there were many areas of overlap. One of the most interesting discussions at the workshop, for me, was the extent to which it is reasonable to suggest that users can just ‘speak naturally’ rather than learning the syntax and vocabulary of utterances that can be easily understood by a particular VUI. This is something that many guides to designing for VUIs are quite determined about – there is often a clear suggestion that you should not try to change the way users speak, or teach them how to speak to your VUI (Google explicitly say: “Give users credit. People know how to talk. Don’t put words in their mouth.”) The idea is that users should think of it as a natural conversation rather than perceiving the exchange as inputs and outputs to a system.

Many of those at the workshop had expertise in conversation analysis, and were not particularly convinced that it is accurate (or even helpful) to view interactions with VUIs as genuinely conversational.

The team from Nottingham are studying the use of VUIs in everyday life. Martin Porcheron is particularly interested in multi-party interactions, and presented a paper at CHI on a study of Alexa use in real homes, using ethnomethodology and conversation analysis. This work highlights some fundamental challenges with VUIs, and Martin’s co-authors discussed some of the ways in which interaction is currently very limited. Stuart Reeves pointed out that categorisation by Alexa (and similar systems) is quite restrictive. For example, despite aspirations to provide a genuinely conversational experience, the interpretation of an utterance as a ‘request’ or a ‘question’ is determined early on and stuck to. Joel Fischer invited us to consider what is missed by the system, and pointed to the potential for VUIs to be more sensitive to hesitations, elongations and pauses.
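
To give a flavour of the early-commitment problem being described, here is a minimal sketch (in Python; the names and the toy classification logic are entirely hypothetical, not how Alexa actually works) of a handler that decides on one interpretation up front and never revisits it:

```python
# Toy illustration of 'early commitment': the utterance type is decided
# once, up front, and every later step is locked to that decision.
# Purely hypothetical; real systems are far more sophisticated, but the
# commitment happens at a similarly early stage.

QUESTION_WORDS = ("what", "when", "who", "why", "how", "is", "are", "does")

def classify(utterance: str) -> str:
    # Crude one-shot classification based on the first word.
    first = utterance.rstrip("?").split()[0].lower()
    return "question" if first in QUESTION_WORDS else "request"

def handle(utterance: str) -> str:
    kind = classify(utterance)              # decided here...
    if kind == "question":
        return "answering: " + utterance    # ...and never reconsidered,
    return "executing: " + utterance        # whatever later turns suggest

print(handle("Why did the light just turn off?"))   # -> answering: ...
print(handle("Turn the light back on"))             # -> executing: ...
```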

Alex Taylor, from City, University of London, further highlighted the extent to which language is rich with other things besides the written word. He is interested in how we shape our talk to be recognised, how we talk differently to VUIs, and how indexicality might work with VUIs.

Despite reservations about the extent to which such interfaces can be genuinely conversational, the workshop organisers were generally in agreement that understanding how conversation works is important for designing VUX.

This does not mean, however, that users who are adept conversationalists already know how to interact with VUIs. One of the most obvious challenges for VUIs is the lack of visibility. Users face huge difficulties in knowing what they can say that is likely to be understood.

Jofish Kaye, from Mozilla, pointed out the many different forms of conversation that exist, and suggested that it may be worth considering the design of a specialist voice programming language. He also made the point that if we are looking for expert users of VUIs, it might be wise to seek input from visually impaired users.

In the CONVER-SE project, we are investigating a variety of approaches for supporting end-user programming interactions with VUIs, including scaffolding, modelling, elicitation, visual prompts, and harnessing natural tendencies towards conversational alignment.
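
As a concrete illustration of the scaffolding and modelling ideas, here is a minimal sketch (in Python; the slot names and prompts are our own invention for illustration, not the project’s actual design) of a dialogue that walks a user through building a rule one part at a time, modelling an example phrasing at each step:

```python
# Minimal sketch of a scaffolded rule-creation dialogue. Illustrative
# only: the slots, prompts and flow are hypothetical.

RULE_SLOTS = ["trigger", "condition", "action"]

PROMPTS = {
    "trigger":   "What should start the rule? For example: 'when no motion is detected'.",
    "condition": "Should it only apply at certain times? Say 'always' if not.",
    "action":    "What should happen? For example: 'switch off the living room lamp'.",
}

def scaffolded_dialogue(get_utterance):
    """Walk the user through each slot, modelling an example phrasing
    in every prompt rather than expecting free-form input up front."""
    rule = {}
    for slot in RULE_SLOTS:
        rule[slot] = get_utterance(PROMPTS[slot])
    return rule

if __name__ == "__main__":
    rule = scaffolded_dialogue(lambda prompt: input(prompt + "\n> "))
    print("Draft rule:", rule)
```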

For these more challenging types of interactions, at least, it is clear to us that it will be necessary to teach users how they can be understood. By no means is this the same as insisting that users adopt unnatural ways of speaking – our first study is entirely focused on understanding natural expression in situ and using this to design VUI support for end-user programming activities. However, it would seem quite naïve to believe that this removes the need for the interface to support users in learning how they can interact with it. The data we looked at during the workshop showed that even for simpler applications, such as playing a game or searching for a recipe, users cannot simply ‘speak naturally’ and expect to be understood. Improving recognition can help, but we also need to develop better ways of revealing to users what kinds of utterances VUIs can understand and act on.

There was also a huge amount of interest in VUIs elsewhere at CHI, including many papers and a packed-out panel on Voice Assistants, UX Design and Research.

Posted in Conferences, Dissemination, Project updates

Design Fiction

We put together this storyboard to communicate a possible future interaction with a system informed by our research on the CONVER-SE project.

Storyboard: Panel 1: Why did the light just turn off? Panel 2: This light turned off because of this rule: ‘Switch off living room lamp when no motion detected for 20 minutes’. Would you like to change this rule? Panel 3: Yes, change time to 40 minutes for that rule.
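
The storyboard assumes that home behaviours are stored as human-readable trigger-action rules that the interface can recite and edit. Here is a minimal sketch of what such a rule might look like (in Python; the representation and names are ours, purely for illustration, not the CONVER-SE implementation):

```python
# Illustrative sketch of the trigger-action rule behind the storyboard,
# and the query/edit steps a VUI might support. Hypothetical names.

from dataclasses import dataclass

@dataclass
class Rule:
    action: str     # e.g. "switch off living room lamp"
    trigger: str    # e.g. "no motion detected"
    minutes: int    # how long the trigger condition must hold

    def explain(self) -> str:
        return (f"This happened because of this rule: "
                f"'{self.action.capitalize()} when {self.trigger} "
                f"for {self.minutes} minutes'.")

rule = Rule("switch off living room lamp", "no motion detected", 20)

print(rule.explain())   # Panel 2: the system explains the behaviour.

rule.minutes = 40       # Panel 3: 'change time to 40 minutes'.
print(rule.explain())
```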


Posted in Uncategorised

Advisory Board Meeting

We had our first Advisory Board meeting last week, kindly hosted by the V&A.

We’re lucky to have a panel of external experts with diverse skills and experiences, and it was great to all get together for the first time and exchange ideas about the opportunities and challenges for the project.

After I gave a short overview of the project plans and progress so far, we discussed some key issues and questions.

A short summary of the points addressed:

  • Power and control in the home – what happens when only one member of a household has the power to change or override behaviours? How can voice user interfaces offer a way to empower less technical users to query, debug and change the rules defining smart home behaviours?
  • Challenges of speech – how do we deal with the lack of visibility and support discoverability? What is the role for visual support (whilst ensuring visually impaired users are not excluded)?
  • The importance of context – how can we make better use of contextual information (proximity, gesture, identity) to disambiguate requests? Could other elements such as the emotional component of speech be useful?
  • Integration with AI – how could user queries, debugging and changes be supported in hybrid environments that contain both AI and rule-driven systems?
  • Privacy and ethics – what are the specific concerns relating to privacy and ethics in this project, and how are we tackling them? Does GDPR raise additional issues?
  • Wider context of the project – who are the potential beneficiaries of the research and how do we communicate this effectively?

We closed by discussing plans for keeping in touch, and agreed to give a progress report to the whole group at the end of the summer, with additional informal updates in the interim.


Advisory Board Membership

External Members

  • Seb Chakraborty, Chief Technology Officer, Hive (Centrica Connected Home)
  • Corinna Gardner, Senior Curator of Design and Digital at Victoria and Albert Museum
  • Eric Harris, Director of Research, Research Institute for Consumer Affairs (Rica)
  • Claire Rowland, UX Consultant, Author of Designing Connected Products: User Experience for the Consumer Internet of Things
  • Simone Stumpf, Senior Lecturer, Department of Computer Science, City, University of London

CONVER-SE Project Team

  • Kate Howland, Department of Informatics, University of Sussex (Principal Investigator)
  • Jim Jackson, Department of Informatics, University of Sussex (Research Fellow)


Posted in Advisory Board, Dissemination, News, Project updates

Project kick-off

We had a busy first month on the project, developing our plans for the first domestic studies, and working on the ethical review documents.

The study materials are now nearly finalised, and work has begun on the toolkit for prototyping conversational interfaces.

We have run pilots with colleagues in the HCT lab at Sussex, and once we have ethical approval we will start recruiting participants.

We are looking forward to presenting work on earlier pilots at the Voice-based Conversational UX Studies and Design Workshop at the CHI 2018 conference next month, in Montréal.

Here’s a sneak preview of our Wizard of Oz prototyping setup:

[Photos: Wizard of Oz prototyping setup]
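
For the curious, the core of a Wizard of Oz setup of this kind can be mocked up in a few lines: a hidden researcher types the ‘system’ replies, which are spoken aloud to the participant via text-to-speech. A minimal sketch (in Python, using the third-party pyttsx3 package; this is not our actual setup):

```python
# Minimal Wizard of Oz loop: the wizard types a reply, the participant
# hears it spoken as if from the system. Purely illustrative.
# Assumes the third-party pyttsx3 package (pip install pyttsx3).

import pyttsx3

def wizard_loop():
    engine = pyttsx3.init()
    print("Wizard console. Type a reply to speak it; empty line to quit.")
    while True:
        reply = input("wizard> ").strip()
        if not reply:
            break
        engine.say(reply)       # spoken aloud to the participant
        engine.runAndWait()

if __name__ == "__main__":
    wizard_loop()
```
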
Posted in News, Pilots, Project updates