Symposium slides now available

On 14th February we ran our end of project symposium at the Attenborough Centre for Creative Arts, University of Sussex.

We had a thought-provoking day, with talks on our work and related research, and an interactive activity in which we explored potential futures for households currently adopting smart home technology.

A more detailed write-up of the event is to follow, but for now – the speaker slides are up.

Posted in Uncategorised

Symposium schedule

Description

Register for a free ticket, which includes lunch and refreshments.
Arrive 9:45 for coffee and 10am start.

Research on smart home technology often only includes a limited set of participants in design and evaluation work. Typically, participants are confident with technology, primarily aged 18-30, disproportionately male, and do not have disabilities. It can be hard to recruit a wider and more representative set of participants to take part in design work, but it is essential if technologies are not to be skewed towards the needs of a limited group.

This symposium presents research that deliberately seeks to include a broader range of users in the design of smart home technologies, and includes talks on academic and consumer research.

Schedule:

9:45 – Coffee and pastries

10:15 – Introduction

10:30 – Simone Stumpf, City University of London – Developing a sensor-assisted toolset to improve quality of life for people with early stages of dementia and Parkinson’s

11:00 – Eric Harris, Research Institute for Disabled Consumers – Being smart about ability

11:30 – Coffee

11:45 – Kate Howland, University of Sussex – Designing for end-user programming for the home through voice

12:15 – Charlotte Robinson, University of Sussex – Designing technology to support mobility assistant dogs

12:45 – Lunch

13:45 – Interactive Workshop – Exploring the futures of smart households – Emeline Brulé, University of Sussex

15:00 – Coffee

15:15 – Panel: Technology and children’s voices in and beyond the home

  • Oussama Metatla, University of Bristol
  • Nicola Yuill, University of Sussex
  • Seray Ibrahim, University College London

16:15 – Closing discussion

16:30 – End

Talk abstracts:

Dr. Simone Stumpf: Developing a sensor-assisted toolset to improve quality of life for people with early stages of dementia and Parkinson’s

In this talk I will present our work on the Self-Care Advice, Monitoring, Planning, Intervention (SCAMPI) research project. In this project, we are developing a toolset for people living with early stages of dementia and Parkinson’s disease to monitor and improve their quality of life. The toolset includes several artificial intelligence (AI) components that reason over data (e.g. a computational model of activities and quality of life goals, and activity models using low-cost sensors placed in their own homes to help keep track of activities) as well as a user interface to set up and keep track of a quality of life plan. Our toolset was co-designed with people living with dementia and Parkinson’s, and their informal carers in order to produce effective healthcare technology.

Eric Harris: Being smart about ability

Connected consumer products are increasingly being used to create smart home environments which are configured to the occupants’ needs. What are the opportunities for using this technology to the benefit of people with disabilities? What do people with disabilities think about smart homes and what are the challenges for this consumer market?

This talk will present some first-person views about the technology and show use-case examples. It will also highlight where work still needs to be done to better support people with disabilities.

Dr. Kate Howland: Designing voice user interface support for end-user programming in smart homes

This talk will present design-based research on how voice user interfaces (VUIs) can support end-user programming (EUP) tasks in smart home contexts. VUIs, in the form of smart speakers, are one of the most popular smart home control mechanisms, but they provide little support for setting up, querying and changing automated behaviours through voice. We explored the potential for supporting such activities in a mixed-methods domestic design study with 15 participants who had little or no programming experience. We recruited participants who were not typical early adopters and did not self-identify as ‘tech-savvy’. Our participants were in middle to older adulthood and we had a good representation of women, and people with visual and mobility impairments (who are often identified as potential beneficiaries of smart home technology, but are rarely actively involved in smart home design studies). We used semi-structured interviews, Wizard of Oz prototyping and roleplaying to identify opportunities and challenges for using voice interaction to set up and change automated behaviours.

Dr. Charlotte Robinson: Designing technology to support mobility assistant dogs

Assistant dogs are a key intervention to support the autonomy of people with tetraplegia. Previous research on assistive technologies has investigated ways to, ultimately, replace their labour using technology, for instance through the design of smart home environments. However, both the disability studies literature and our interviews suggest there is an immediate need to support these relationships, both in terms of training and bonding. Through a case study of an accessible dog treat dispenser, we investigate a technological intervention responding to these needs, detailing an appropriate design methodology and contributing insights into user requirements and preferences.

Posted in Uncategorised

Symposium announcement

Announcing our end of project symposium – Beyond early adopters: designing smart homes with underrepresented user groups

10am-4:30pm, 14th February 2020, University of Sussex, Attenborough Centre

Register for a free ticket, which includes lunch and refreshments.
Arrive 9:30 for coffee and 10am start. Full schedule to be announced soon.

Research on smart home technology often only includes a limited set of participants in design and evaluation work. Typically, participants are confident with technology, primarily aged 18-30, disproportionately male, and do not have disabilities. It can be very hard to recruit a wider and more representative set of participants, but it is essential if technologies are not to be skewed towards the needs of a limited group.

This symposium presents research that deliberately seeks to include a broader range of users in the design of smart home technologies, and includes talks on academic and consumer research.

Speakers include:

Dr. Kate Howland: Designing Voice User Interface Support for End-User Programming in Smart Homes

This talk will present design-based research on how voice user interfaces (VUIs) can support end-user programming (EUP) tasks in smart home contexts. VUIs, in the form of smart speakers, are one of the most popular smart home control mechanisms, but they provide little support for setting up, querying and changing automated behaviours through voice. We explored the potential for supporting such activities in a mixed-methods domestic design study with 15 participants who had little or no programming experience. We recruited participants who were not typical early adopters and did not self-identify as ‘tech-savvy’. Our participants were in middle to older adulthood and we had a good representation of women, and people with visual and mobility impairments (who are often identified as potential beneficiaries of smart home technology, but are rarely actively recruited in smart home studies). We used semi-structured interviews, Wizard of Oz prototyping and roleplaying to identify opportunities and challenges for using voice interaction to set up and change automated behaviours.

Dr. Simone Stumpf: Developing a sensor-assisted toolset to improve quality of life for people with early stages of dementia and Parkinson’s

In this talk I will present our work on the Self-Care Advice, Monitoring, Planning, Intervention (SCAMPI) research project. In this project, we are developing a toolset for people living with early stages of dementia and Parkinson’s disease to monitor and improve their quality of life. The toolset includes several artificial intelligence (AI) components that reason over data (e.g. a computational model of activities and quality of life goals, and activity models using low-cost sensors placed in their own homes to help keep track of activities) as well as a user interface to set up and keep track of a quality of life plan. Our toolset was co-designed with people living with dementia and Parkinson’s, and their informal carers in order to produce effective healthcare technology.

Eric Harris: Being smart about ability

Connected consumer products are increasingly being used to create smart home environments which are configured to the occupants’ needs. What are the opportunities for using this technology to the benefit of people with disabilities? What do people with disabilities think about smart homes and what are the challenges for this consumer market?

This talk will present some first-person views about the technology and show use-case examples. It will also highlight where work still needs to be done to better support people with disabilities.

Dr. Charlotte Robinson: Designing Technology to Support Mobility Assistant Dogs

Assistant dogs are a key intervention to support the autonomy of people with tetraplegia. Previous research on assistive technologies has investigated ways to, ultimately, replace their labour using technology, for instance through the design of smart home environments. However, both the disability studies literature and our interviews suggest there is an immediate need to support these relationships, both in terms of training and bonding. Through a case study of an accessible dog treat dispenser, we investigate a technological intervention responding to these needs, detailing an appropriate design methodology and contributing insights into user requirements and preferences.

In addition to talks, the day will include a panel on Voice User Interfaces – ‘Can voice help in giving people a voice?’, and an interactive design activity.

Posted in Uncategorised

PPIG 2018

Last month I attended the 29th Annual Workshop of the Psychology of Programming Interest Group, to present a work-in-progress paper on the methods we are using on the CONVER-SE project.

PPIG 2018 was hosted at the Art Workers’ Guild in Bloomsbury, and we were lucky to have members of the guild taking an active role in the event by presenting talks, taking part in activities, and displaying artwork in response to some of the topics of the conference. My favourite artefact on display was this ‘Computational Thinking’ knitwear, customised by Rachael Matthews.

“I criticise the pernicious imperialism of computational thinking.”

There was a great range of talks, and a good mix of new work from PhD students and early career researchers to well-established world-leading researchers. Unfortunately the gender mix was not so good, and I was really surprised to be one of only three female academics presenting across the two and a half days. It’s not something organisers have a huge amount of control over, but perhaps the line-up of invited speakers and keynotes could have been considered more carefully in this regard.

Still, the PPIG community were welcoming and friendly as always, and I had some great discussions over the course of the event, and useful feedback on my talk.

My highlights from the academic work presented were Andy diSessa’s and Alan Blackwell’s talks. diSessa spoke about the role of programming in learning science and mathematics, and the design and use of the highly influential Boxer language. I was particularly interested in his discussion of the development of the ‘tick’ model as a conceptually simple but powerful approach for novice programmers that has many potential applications. He also argued that computational literacy could be a more helpful approach than computational thinking in addressing how computers might powerfully change learning in STEM.

Alan Blackwell, whose extensive research on end-user programming has been a great inspiration for the CONVER-SE project, spoke about programming language research as a craft practice. He highlighted how programming itself can be considered a reflective craft and gave examples of how attention investment can be supported through smooth transitions in programming by demonstration and programming by example approaches. His talk also prompted me to look back at Steve Tanimoto’s ‘liveness levels’ and consider more carefully how this applies in end-user programming in the home.

Posted in Conferences, Dissemination, News

Recruiting participants

We are recruiting participants in the South East of England for our first study – please take a look at the advert below and see whether you or someone you know could take part.

Do you have a ‘smart’ device, e.g. Nest, Hive, Philips Hue, Amazon Echo, or Google Home? Would you be willing to tell us about your experiences as part of a University of Sussex research study (and earn a £10 thank you)?

We are looking for people who have one or more ‘smart’ devices, including smart thermostats, lights or plugs, voice user interfaces, motion sensors or automated security systems in their homes. We are doing research to support the design of tools that will make it easier to get smart home technologies to do what you want, so we particularly want to speak to people who don’t see themselves as technology experts (if you’re the expert in your house, perhaps your partner or housemate would like to take part?).

We would come to interview you in your home. The interview would take around 1 hour and would involve us asking questions about your current use of technology, and possible future requirements. We’d also like you to try out a prototype voice user interface and would ask for your feedback on the experience to help us improve the design.

Two University of Sussex researchers would visit your home to carry out the interview, and we would pay you £10 to thank you for giving up your time.

If you are interested in finding out more, please email conver-se@sussex.ac.uk.

Thank you.

Posted in News, Project updates, Recruitment

VUX Workshop at CHI 2018

At the end of April, I attended CHI 2018 in Montreal, and took part in a workshop on Voice-UX Studies and Design. I submitted a position paper for the workshop, based on our pilot work in the Sussex RDF-funded project, Programming in Situ.

The other position papers represented a very broad range of interests and application areas for voice-user interfaces (VUIs), but there were many areas of overlap. One of the most interesting discussions at the workshop, for me, was the extent to which it is reasonable to suggest that users can just ‘speak naturally’ rather than learning the syntax and vocabulary of utterances that can be easily understood by a particular VUI. This is something that many guides to designing for VUIs are quite determined about – there is often a clear suggestion that you should not try to change the way users speak, or teach them how to speak to your VUI (Google explicitly say: “Give users credit. People know how to talk. Don’t put words in their mouth.”) The idea is that users should think of it as a natural conversation rather than perceiving the exchange as inputs and outputs to a system.

Many of those at the workshop had expertise in conversation analysis, and were not particularly convinced that it is accurate (or even helpful) to view interactions with VUIs as genuinely conversational.

The team from Nottingham are studying use of VUIs in everyday life. Martin Porcheron is particularly interested in multi-party interactions, and presented a paper at CHI on a study of Alexa use in real homes, using ethnomethodology and conversation analysis. This work highlights some fundamental challenges with VUIs, and Martin’s co-authors discussed some of the ways in which interaction is currently very limited. Stuart Reeves pointed out that categorisation by Alexa (and similar) is quite restrictive. For example, despite having aspirations to provide a genuinely conversational experience, the interpretation of an utterance as a ‘request’ or ‘question’ is determined early on and stuck to. Joel Fischer invited us to consider what is missed by the system, and pointed to the potential for VUIs to be more sensitive to hesitations, elongations and pauses.

Alex Taylor, from City, University of London, further highlighted the extent to which language is rich with many things besides the written word. He is interested in how we shape our talk to be recognised, how we talk differently to VUIs, and how indexicality might work with VUIs.

Despite reservations about the extent to which such interfaces can be genuinely conversational, the workshop organisers were generally in agreement that understanding how conversation works is important for designing VUX.

This does not mean, however, that users who are adept conversationalists already know how to interact with VUIs. One of the most obvious challenges for VUIs is the lack of visibility. Users face huge difficulties in knowing what they can say that is likely to be understood.

Jofish Kaye, from Mozilla, pointed out the many different forms of conversation that exist, and suggested that it may be important to consider the need for the design of a specialist voice programming language. He also made the point that if we are looking for expert users of VUIs, it might be wise to seek input from visually impaired users.

In the CONVER-SE project, we are investigating a variety of approaches for supporting end-user programming interactions with VUIs, including scaffolding, modelling, elicitation, visual prompts, and harnessing natural tendencies towards conversational alignment.

For these more challenging types of interactions, at least, it is clear to us that it will be necessary to teach users how they can be understood. This is by no means the same as insisting that users adopt unnatural ways of speaking – our first study is entirely focused on understanding natural expression in situ and using this to design VUI support for end-user programming activities. However, it would seem quite naïve to believe this removes the need for the interface to support users in learning how they can interact with it. The data we looked at during the workshop showed that even for simpler applications, such as playing a game or searching for a recipe, users cannot simply ‘speak naturally’ and expect to be understood. Improving recognition can help, but we also need to develop better ways of revealing to users what kinds of utterances VUIs can understand and act on.

There was also a huge amount of interest in VUIs elsewhere at CHI, including many papers and a packed-out panel on Voice Assistants, UX Design and Research.

Posted in Conferences, Dissemination, Project updates

Design Fiction

We put together this storyboard to communicate a possible future interaction with a system informed by our research on the CONVER-SE project.

Storyboard: Panel 1: Why did the light just turn off? Panel 2: This light turned off because of this rule: ‘Switch off living room lamp when no motion detected for 20 minutes’. Would you like to change this rule? Panel 3: Yes, change time to 40 minutes for that rule.
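One way to picture the kind of rule being queried and edited in the storyboard is as a simple trigger-action record. The sketch below is purely illustrative – the `Rule` class, its field names, and the `explain` method are invented for this post, not taken from any system we have built:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Rule:
    """A hypothetical trigger-action rule like the one in the storyboard."""
    action: str           # what the rule does
    condition: str        # the sensed trigger condition
    timeout_minutes: int  # how long the condition must hold before acting

    def explain(self) -> str:
        # The kind of answer a VUI could give to "why did the light just turn off?"
        return (f"This happened because of the rule: '{self.action} when "
                f"{self.condition} for {self.timeout_minutes} minutes'.")

# Panel 2: the rule behind the behaviour the user is asking about.
lamp_rule = Rule(action="Switch off living room lamp",
                 condition="no motion detected",
                 timeout_minutes=20)

# Panel 3: "Yes, change time to 40 minutes for that rule."
updated_rule = replace(lamp_rule, timeout_minutes=40)

print(lamp_rule.explain())
print(updated_rule.timeout_minutes)
```

The point of the structure is that a single editable parameter (here the timeout) is what the voice exchange needs to identify and change, rather than requiring the user to restate the whole rule.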

Create your own storyboard.

Posted in Uncategorised

Advisory Board Meeting

We had our first Advisory Board meeting last week, kindly hosted by the V&A.

We’re lucky to have a panel of external experts with diverse skills and experiences, and it was great to all get together for the first time and exchange ideas about the opportunities and challenges for the project.

After I gave a short overview of the project plans and progress so far, we discussed some key issues and questions.

A short summary of the points addressed:

  • Power and control in the home – what happens when only one member of a household has the power to change or override behaviours? How can voice user interfaces offer a way to empower less technical users to query, debug and change the rules defining smart home behaviours?
  • Challenges of speech – how do we deal with the lack of visibility and support discoverability? What is the role for visual support (whilst ensuring visually impaired users are not excluded)?
  • The importance of context – how can we make better use of contextual information (proximity, gesture, identity) to disambiguate requests? Could other elements such as the emotional component of speech be useful?
  • Integration with AI – how could user queries, debugging and changes be supported in hybrid environments that contain both AI and rule-driven systems?
  • Privacy and ethics – what are the specific concerns relating to privacy and ethics in this project, and how are we tackling them? Does GDPR raise additional issues?
  • Wider context of the project – who are the potential beneficiaries of the research and how do we communicate this effectively?

We closed by discussing plans for keeping in touch, and agreed to give a progress report to the whole group at the end of the summer, with additional informal updates in the interim.


Advisory Board Membership

External Members

  • Seb Chakraborty, Chief Technology Officer, Hive (Centrica Connected Home)
  • Corinna Gardner, Senior Curator of Design and Digital at Victoria and Albert Museum
  • Eric Harris, Director of Research, Research Institute for Consumer Affairs (Rica)
  • Claire Rowland, UX Consultant, Author of Designing Connected Products: User Experience for the Consumer Internet of Things
  • Simone Stumpf, Senior Lecturer, Department of Computer Science, City, University of London

CONVER-SE Project Team

  • Kate Howland, Department of Informatics, University of Sussex (Principal Investigator)
  • Jim Jackson, Department of Informatics, University of Sussex (Research Fellow)

 

Posted in Advisory Board, Dissemination, News, Project updates

Project kick-off

We had a busy first month on the project, developing our plans for the first domestic studies, and working on the ethical review documents.

The study materials are now nearly finalised, and work has begun on the toolkit for prototyping conversational interfaces.

We have run pilots with colleagues in the HCT lab at Sussex, and once we have ethical approval we will start recruitment for participants.

We are looking forward to presenting work on earlier pilots at the Voice-based Conversational UX Studies and Design Workshop at the CHI 2018 conference next month, in Montréal.

Here’s a sneak preview of our Wizard of Oz prototyping setup:
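For readers unfamiliar with the technique: in a Wizard of Oz study, a hidden researcher (the ‘wizard’) stands in for the system’s speech understanding, so dialogue designs can be tested before any recogniser is built. The sketch below is purely illustrative – the canned replies and function names are invented for this post, not taken from our actual setup:

```python
# The wizard listens to the participant and picks the "system" reply from a
# set of canned responses, keyed by a quick keypress on the wizard's console.
CANNED_REPLIES = {
    "1": "Okay, the living room lamp is now off.",
    "2": "That rule has been updated.",
    "fallback": "Sorry, I didn't catch that. Could you rephrase?",
}

def wizard_reply(keypress: str) -> str:
    """Map the wizard's keypress to a canned reply, with a safe fallback."""
    return CANNED_REPLIES.get(keypress, CANNED_REPLIES["fallback"])

def run_turns(keypresses):
    # In a real setup each reply would be voiced by a text-to-speech engine,
    # so the participant only ever hears "the system", never the wizard.
    return [("system", wizard_reply(k)) for k in keypresses]

transcript = run_turns(["1", "x", "2"])
for speaker, utterance in transcript:
    print(f"{speaker}: {utterance}")
```

The fallback reply matters in practice: a mistyped key should still produce a plausible system turn, so the illusion of talking to a working interface is never broken mid-session.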

Posted in News, Pilots, Project updates