Firstly, I would like to thank the organisers for inviting me to speak today. This is the first time I have attended the HCI conference, despite having wanted to since my undergraduate years. It is excellent that now, at doctoral level and as a Chartered IT Professional, I get to tell you a little about my research questions and approach, and how they fit into this conference.
How many of you have ever had to say, ‘Sorry, but I was only teasing’? Probably everyone has at some point in their lives, whether in their younger years as a child, or as an adult playing games with a child.
Emotions and play go hand in hand, and there is often a fine line between play that leads to joyous emotions and play that leads to sadness and the feeling of being upset. But how do we know the difference?
HCI literature abounds these days with definitions of emotion, but the working definition I have in my doctoral thesis is, ‘an expression of a feeling through either internally or externally orientated bodily means that motivates a further response from oneself or others’.
What I mean by this is that the purpose of an emotion is to lead to further action, either from the person experiencing it or from the person to whom they express it. So there are tell-tale signs of whether or not someone is enjoying a playful activity: the emotional expressions on their face, the tone of their voice, and the content of their speech. All too often, however, we choose to ignore them as we enjoy the escapism that comes from play.
For some this is a choice, but not all. People with emotion recognition difficulties (that is, ERDs) have problems in this area. A simplified definition of ERD I am using for my doctorate is ‘an impairment in one’s ability to recognise and respond to the emotions of others’, but equally it can include impairment in one’s ability to recognise one’s own emotions.
People with ERDs such as autism have difficulty taking part in games and are easily upset when they do not understand imaginative play. They also have difficulty knowing when to stop themselves when they engage in non-imaginative play, because they cannot read the emotional states of others. Whether or not they ignore the affective states of others is not a choice for them, as they have little control in this area. Some have said that this is such a problem that we should try to teach people with ERDs these skills. But I say no. Why should someone with an ERD have to become average, just because others want them to be ‘normal’? They shouldn’t have to use up their usually excellent systemising skills in order to do things they weren’t born to do.
So I propose a different solution: A system I call Vois – The Versatile Ontological Imitation System.
Vois would place the processing of facial, speech and dialogue affect information from others onto a server, so that the person with the ERD can respond easily within a social situation without overloading their cognitive functioning. Facial and speech affect recognition algorithms are already in the public domain, and my doctorate would develop the final piece of the jigsaw: the dialogue affect recognition algorithm. In addition, I will develop a conversational agent that uses this information to recommend behaviours to people with ERDs, both during participation in a conversation and when they are reflecting on a conversation they had in the past.
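To make that architecture concrete, here is a minimal sketch of the server-side step. Everything in it is my illustrative assumption, not the actual Vois design: the channel names, the confidence-weighted fusion rule, and the behaviour suggestions are all hypothetical stand-ins for the three recognition algorithms and the conversational agent.

```python
# Hypothetical sketch of the Vois pipeline. The fusion rule and all
# names here are illustrative assumptions, not the real design.
from collections import Counter
from dataclasses import dataclass

@dataclass
class AffectReading:
    channel: str      # "face", "speech" or "dialogue"
    emotion: str      # e.g. "joy", "sadness", "anger"
    confidence: float # 0.0 to 1.0, from the recognition algorithm

def fuse_affect(readings):
    """Server-side fusion: weight each channel's emotion label by its
    confidence and return the strongest overall estimate."""
    scores = Counter()
    for r in readings:
        scores[r.emotion] += r.confidence
    emotion, _ = scores.most_common(1)[0]
    return emotion

def recommend_behaviour(emotion):
    """Map the fused emotion to a suggested response, so the user does
    not have to work it out themselves mid-conversation."""
    suggestions = {
        "joy": "They seem to be enjoying this - it is fine to continue.",
        "sadness": "They may be upset - consider pausing to ask if they are OK.",
        "anger": "They sound annoyed - it may be best to stop the game.",
    }
    return suggestions.get(emotion, "No clear signal - keep observing.")

readings = [
    AffectReading("face", "sadness", 0.7),
    AffectReading("speech", "sadness", 0.6),
    AffectReading("dialogue", "joy", 0.4),
]
print(recommend_behaviour(fuse_affect(readings)))
```

The point of the sketch is simply that the heavy inference happens away from the user, and what comes back is a single plain-language recommendation.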
To structure my methodology, I have decided to use the approach for designing persuasive e-learning systems that I presented at the BCS-sponsored ITA05 conference.
The user experience analysis (UXA) stage, while quantitative, will provide useful information to form detailed descriptions of the techno-cultures of people with emotion recognition difficulties who would use Vois. This could be considered ethnography, which literally means describing people through writing. It is distinct from ‘traditional’ systems analysis in that it focuses on the participants and the aspects of their culture that affect the way they interact with the system, rather than simply assessing the system’s data, structure and processing.
The technical development study would transform the results of the UXA into a workable product, as well as defining the functions of Vois. Using a function reduction methodology, it would determine the development approach by acting as a quality function deployment. Specifically, it would use card sorting based on Q-methodology to identify the mediating artefacts set out in the methodology I proposed at ITA05.
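As a toy illustration of the card-sorting idea (the cards, rankings and participants below are entirely hypothetical, not my study materials): in a Q-methodology sort, each participant ranks the same set of statement cards on a forced scale, and participants whose rankings correlate highly can be said to share a viewpoint on which functions matter.

```python
# Toy Q-methodology illustration - all cards and rankings are invented.
def pearson(x, y):
    """Pearson correlation between two equal-length rankings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Four hypothetical statement cards about what Vois should do.
cards = ["alerts me to tone of voice", "summarises the conversation",
         "suggests what to say next", "replays past conversations"]

# Forced rankings per participant: +2 (most agree) to -2 (most disagree).
sort_a = [2, -1, 1, -2]
sort_b = [2, -2, 1, -1]   # similar viewpoint to participant A
sort_c = [-2, 1, -1, 2]   # opposing viewpoint

print(pearson(sort_a, sort_b))  # high positive: shared viewpoint
print(pearson(sort_a, sort_c))  # negative: opposing viewpoint
```

Clusters of highly correlated sorts would then point to the mediating artefacts the development should prioritise.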
The prototyping and clinical testing study would involve developing Vois into a workable system and testing its safety before trialling it with potential customers. It would develop a pre-production prototype and log actual use of it in user trials, to test how persuasive it is with people with ERDs.
The Market and Competitor Analyses stages would put the new application into context and provide the basis for developing a business plan in order to commercially exploit the application. They would use data from the UXA to perform a cluster analysis and develop a SWOT profile comparing Vois to other systems.
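For a sense of what that cluster analysis step involves, here is a tiny sketch using a basic k-means procedure on made-up one-dimensional survey scores; the scores and the choice of k-means are my illustrative assumptions, not the planned analysis.

```python
# Illustrative only: simple 1-D k-means on hypothetical UXA scores.
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster numeric points into k groups and return the final
    cluster centres, sorted for readability."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre
            i = min(range(k), key=lambda c: abs(p - centres[c]))
            groups[i].append(p)
        # move each centre to the mean of its group
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return sorted(centres)

# Hypothetical "ease of social interaction" scores (0-10) from the UXA.
scores = [1.0, 1.5, 2.0, 7.5, 8.0, 8.5]
print(kmeans(scores, 2))
```

On real UXA data the clusters would become candidate market segments, each of which could then be profiled against competing systems in the SWOT analysis.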
I hope you will agree that, even though Vois would act as an aid for people with emotion recognition difficulties, it is based on the assumption that it should not be for disabled people with ERDs to change, but for society to adapt to support them.
Vois would enable this, especially if it were funded by government, by taking the cognitive load off people with ERDs and putting it onto a computer server. However, even with support systems like Vois in place, those dealing with people with emotion recognition difficulties should still consider their approach to them, and not act inappropriately in ways that create greater anxiety.