Digital health voice assistant for dementia caregivers
It is estimated that 1.92 million people in the UK will have dementia by 2040. The socio-economic impact of the disease will continue to put a burden on society, especially on individuals acting as informal caregivers (ICGs).
– How might we reduce ICGs' pain points in their daily care duties?
– How might we tackle their sense of isolation and depression?
– How might we help them get prepared for unexpected situations?
Together with the engineering team at visyon360.com, I developed a VUI prototype for the Google Assistant that uses a custom relevance-model scraper to provide essential palliative-care information to ICGs, such as recommended foods and drugs to treat symptoms.
According to a study by The Economist Intelligence Unit, dementia can cost a patient an average of £40.7k per year in the UK.
It is very common for family members to act as informal caregivers (ICGs), which may take a financial, physical and emotional toll on them as the patient's condition evolves. For example, it is estimated that ICGs in the UK are on average £18.2K worse off as a result of lost income, productivity and opportunities.
In severe circumstances caregivers have to work with the patient almost round the clock. This often leads to physical and mental fatigue, leaving carers with little or no time for themselves. Social isolation and depression soon follow, putting a strain on relationships with family and friends, which in turn may affect the quality of care given to the patient.
There are a number of supporting services out there for dementia patients but very little is provided to caregivers themselves. Preliminary market research suggested that supporting tools are few and far between. Hence, the topic offered an interesting user-centric challenge.
After carrying out desk research focused on reports, academic papers, and testimonials – including support groups on social media channels – I formulated the basis for a qualitative interview script with caregivers.
I conducted 6 interviews over the phone with informal caregivers to obtain a clear picture of their situation, taking into consideration the varying stages of the condition and the different levels of care they require. In addition, I interviewed 2 senior management professionals responsible for running care homes, in order to get a complementary, professional perspective on the caregiving process.
In addition to desk research and interviews I also carried out a quantitative survey with users in the UK, Germany and Spain, to gain more insight into questions related to the use of technology by carers.
At the end of the survey, I encouraged respondents to describe their daily activity in the form of a summarised diary, which provided valuable insight into users' pain points and the obstacles they face when carrying out their daily activities.
Consultation with users did not stop there, though. Throughout the development process I held one-off talks with different caregivers in the three regions surveyed, which allowed me to complete the picture.
Space Saturation and Affinity Mapping
The data resulting from the Empathy phase was organised into an information wall. We iterated on it twice, filtering the information further each time until we could organise a more cohesive affinity map. In the first iteration I brought in other UX designers to help map the denser data set, which included my findings from the primary immersion, the user interviews and the online survey.
The second round received the contribution of a creative technology strategist, who helped me consolidate the affinity map into workable categories: 'demographics'; 'problems/issues/challenges/pain-points'; 'needs'; and 'habits and behaviour'.
Defining the problem space
From the affinity map we identified three recurrent critical areas that we felt should be the focus of our attention henceforth – preparedness, support, and isolation.
Because these three themes appeared consistently in every interview, I decided that the problem space should revolve around them.
I need to find relevant information that helps me be prepared for sudden changes in my patient's condition and know how to respond to them
I need to get help and support on a regular basis from friends and family. It would be impossible to do this on my own.
I need emotional support and to share my experience with others who might be in the same situation and can understand
Informal caregivers need a way to get continuous support while caring for a loved one with dementia because the demands of the role leave no time for anything else, leading to isolation, as well as physical and mental fatigue.
I believe that by developing a multi-modal service based on voice interactions for informal caregivers I will reduce friction, help them complete tasks more quickly, and alleviate their sense of isolation.
Goal-oriented user persona
Dementia patients are cared for in a myriad of circumstances. The data suggests that the most common arrangement is adult offspring caring for their elderly parents and/or spouses looking after their partners.
The group and gender split can be very heterogeneous. However, the majority of cases I came across during the research stage were of female carers (either spouses or children) looking after an elderly relative. Spouses were looking after their husbands, while daughters were looking after either parent, with no evident majority between mothers and fathers.
Therefore, for the purpose of building a workable primary persona, I selected a middle-aged female carer looking after her husband, as this seemed to be the slightly more prevalent group amongst the surveyed sample.
User journey storyboards
In addition to the information provided by the primary user persona, I revisited the interviews and user diaries in order to piece together a more complete picture of a caregiver's day. More importantly, I identified the key moments of truth when they faced barriers or experienced friction while going about their chores.
Overall, this process naturally led us from the problem-space definition to the start of the ideation phase, in which the UX collaborators speculated on technological solutions to the problems presented before moving into a proper ideation session.
When teaming up with other UX designers I facilitated and contributed to the ideation session. My technique of choice was the 'Crazy 8's' followed by 'Build, Save, Kill' and, finally, 'dot-voting' to select the best solution.
An absolute majority picked the Voice User Interface (VUI) solution, complemented by a Graphical User Interface (GUI), which together would provide key functions to assist informal caregivers in some of their regular tasks.
Voice user interface dialogue sample
The next step in the process comprised the development of the voice agent starting with a happy-path dialogue flow, which, at this point, I envisaged as being built from scratch. In theory, that would allow a high degree of control over its key characteristics: tonality; language/vocabulary; voice-talent; and high-level fulfilment thresholds.
It also accounted for the three use cases, or functions, considered at the beginning of the ideation process: 'on-boarding', 'finding information', and 'connecting with friends/seeking support'.
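The happy-path flow across those three use cases can be sketched as a simple turn structure. This is an illustrative sketch only: the intent names and utterances below are assumptions for demonstration, not the shipped dialogue copy.

```python
# Hypothetical happy-path dialogue turns for the three use cases.
# Intent names and wording are illustrative, not the production script.
HAPPY_PATH = {
    "onboarding": [
        ("agent", "Hi, I'm Nora. I can help you care for your loved one. What's your name?"),
        ("user", "<caregiver's name>"),
        ("agent", "Thanks. You can ask me for care information, or to reach out to friends."),
    ],
    "finding_information": [
        ("user", "What foods are recommended at this stage of dementia?"),
        ("agent", "<answer retrieved via the custom relevance model>"),
    ],
    "connect_support": [
        ("user", "I'd like to talk to someone."),
        ("agent", "Shall I call your support contact, or post to your carers' group?"),
    ],
}

def next_prompt(use_case: str, turn: int) -> str:
    """Return the utterance expected at a given turn of a use case."""
    speaker, text = HAPPY_PATH[use_case][turn]
    return f"{speaker}: {text}"
```

Structuring the flow per use case like this makes it easy to walk each happy path in testing before any platform-specific intent configuration exists.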
User flow (graphical user interface)
Voice was considered the primary means of interaction, with a graphical interface working as a complement in situations where sound might not be convenient or effective. Interactions with this system take the form of multiple micro-moments, most, but not all, of which are voice based. I therefore concluded that we needed a multi-modal system catering for a myriad of situations, some more conducive to voice interactions and others better delivered through a screen, and that we had to design for both while being mindful of surface-switching capabilities.
As an example, the on-boarding happy path below is based on a smartphone as the first point of interaction with the system. On many occasions, however, the interaction will start with voice and continue on screen.
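The surface-switching logic described above can be expressed as a simple rule. The capability flags and rules here are assumptions made for this sketch, not the system's actual decision logic.

```python
# Illustrative surface-selection rule for a multi-modal micro-moment.
# Flags and priorities are assumptions for this sketch.
def pick_surface(has_screen: bool, audio_ok: bool, content_is_visual: bool) -> str:
    """Choose the output surface: voice by default, falling back to
    screen when audio is inconvenient or the content (e.g. a list of
    recommended foods) reads better visually."""
    if content_is_visual and has_screen:
        return "screen"
    if audio_ok:
        return "voice"
    return "screen" if has_screen else "voice"
```

Even a rule this small makes the design constraint explicit: every response has to be authored for both surfaces, because the device context, not the designer, decides where it lands.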
Guerilla testing with GUI Lo-Fi prototypes
Next, I translated the core functions we envisaged for the agent onto the smartphone surface in order to test those assumptions quickly. I used lo-fi paper prototypes of key screens along the happy path to collect feedback from non-specific users.
My primary goal was to uncover issues related to usability fundamentals as well as mobile heuristics. My chosen method was guerilla testing, for its speed in gathering feedback and insights on the spot.
Although the volunteers didn't fit the persona profile exactly, they nevertheless pointed me to a couple of initial design problems. I ran these guerilla sessions with 6 different volunteers, following a script and audio-recording them for later consultation. I also took observational notes on the verbal cues and emotional reactions elicited by the experience, which provided invaluable feedback.
GUI wireframes (first iteration)
The feedback gathered during the lo-fi prototype testing sessions was fully incorporated into the GUI wireframe designs. The set below shows a happy path in which first-time users interact with the system, choosing to register fully and provide further personal details in order to get a tailored experience from the voice-agent surface.
Voice user interface prototype
Because NoraCare is a multi-modal system with a particular emphasis on voice interactions, I focused my efforts on first building a working prototype on an existing platform with an already large user base.
The rationale was that instead of attempting to build a system from scratch, which would require a substantial upfront investment, we would favour an established platform. Together with the developers, we therefore decided to adopt Dialogflow by Google as a more efficient way to get a prototype up and running quickly and to start preliminary tests on Google Home and Google Home Mini devices.
Nevertheless, the use cases we were trying to cover required more than what Dialogflow provides by default. In order to fulfil specific intents around dementia topics, we had to devise our own 'Custom Relevance Model' webhook, which acts as a filter on the rather open Google CSE (Custom Search Engine) fulfilment.
The solution therefore included a form of assisted learning, combining overriding tools with manual content ingestion in order to nudge the agent towards more accurate fulfilment.
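A minimal sketch of how such a filter-plus-override webhook might work is below. The result shape, vocabulary list, override entries and threshold are all assumptions for illustration; the production relevance model is not described in detail here.

```python
# Sketch of a 'custom relevance model' filter over open search results.
# Vocabulary, overrides and threshold are illustrative assumptions.
DEMENTIA_VOCAB = {"dementia", "alzheimer", "carer", "caregiver",
                  "palliative", "symptom", "medication", "nutrition"}

# Manual overrides: curated answers that pre-empt open search results.
OVERRIDES = {
    "recommended foods": "Curated nutrition guidance ingested by the team.",
}

def relevance_score(snippet: str) -> float:
    """Fraction of the domain vocabulary present in a result snippet."""
    words = set(snippet.lower().split())
    return len(words & DEMENTIA_VOCAB) / len(DEMENTIA_VOCAB)

def fulfil(query: str, cse_results: list, threshold: float = 0.1) -> str:
    """Pick a response: a manual override first, then the best-scoring
    search result above the threshold, else a fallback prompt."""
    for key, answer in OVERRIDES.items():
        if key in query.lower():
            return answer
    ranked = sorted(cse_results,
                    key=lambda r: relevance_score(r["snippet"]), reverse=True)
    if ranked and relevance_score(ranked[0]["snippet"]) >= threshold:
        return ranked[0]["snippet"]
    return "I couldn't find reliable guidance on that. Could you rephrase?"
```

The two levers described in the text map directly onto this shape: the override table is the manually ingested content, and the vocabulary scoring is what nudges the otherwise open search fulfilment towards dementia-relevant answers.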
Further user testing
The voice-agent prototype will enable us to carry out further user research with informal caregivers and collect deeper insights into their experience of the role. We hope to identify needs not uncovered by the first round of research. In doing so, we will look at partnering with dementia-oriented organisations that can provide the data needed to train the agent further and deliver the best support to caregivers.
Concept Vilmar Pellisson
UX Design Vilmar Pellisson
Full stack development Carlos Calvo
Research Vilmar Pellisson
Branding Vilmar Pellisson
Promo video Vilmar Pellisson, Karin Haussmann