In this thesis, we focus on developing behaviours for socially interactive agents (SIAs). The context in which the agent is used is a self-regulated learning system for children. We focus on personalising learning objectives and interaction content within an intelligent tutoring system (ITS). We envision a system where children can train diabetes self-management knowledge and skills independent of space and time, in collaboration with the health care professional, legal caretakers, and an SIA. To facilitate long-term interaction with such a system, relevant learning content and appropriate ‘intelligent’ social behaviour of the SIA are necessary. The envisioned system was developed within the Horizon 2020 PAL project and evaluated in an iterative design process. The main contributions of the research described in this thesis are: insights into the behaviour design for a NAO robot and its virtual avatar, and the formalisation of learning objectives facilitating personalised learning content.
Most studies on SIA behaviour focus on the design of emotional expressions or implement roles (e.g., peer or tutor) that were not validated for perception. We argue that a strategic pedagogical interaction style (i.e., a style purposefully selected based on knowledge about the user, task and context, as teachers do in traditional classroom settings) is necessary, but not yet sufficiently studied, to design meaningful interactions that outlast the initial novelty and fun. Further, we argue that learning content must be relevant to the child’s needs and developmental stage. These two challenges are the subjects of study in the two parts of this thesis.
The main research question addressed in part I is: How to design SIA behaviours that express different pedagogical styles, and what is their effect on learning outcomes? We answer this question in the four included chapters.
In a systematic review, we focus on non-verbal expression through parameter-based manipulations of the bodily shape and motion of humanoid robots and virtual agents, and on how these manipulations are perceived by humans. We present a comprehensive review of peer-reviewed articles and analyse and summarise the available work. Research in this field is multidisciplinary and shows a large variety in concept definitions, behavioural manipulations and evaluation methodologies. We developed the TAXMOD taxonomy as a starting point for a shared understanding and interpretation of research objectives and outcomes, and for formulating a roadmap. We applied TAXMOD to position and compare research, and to explicate progress in this area. We found structural support that some social signals can be displayed through behaviour manipulation in the form of posture or motion modulation, or through designed key expressions (fixed behaviours with a specific target expression). Key findings include: 1) the expression of personality traits using virtual or robot bodies is limited to the trait extraversion; 2) the expression of social dimensions such as warmth, competence and dominance is possible, but only when using the whole body, and more research is needed to disentangle individual effects on friendliness, competence and dominance; 3) the expression of emotion is restricted to generic positive versus negative signals; and 4) context appears important for the correct interpretation of the expressive behaviour.
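To illustrate how a taxonomy like TAXMOD can be used to position and compare studies, the sketch below encodes a single hypothetical study as a record along a few dimensions. The dimension names (embodiment, manipulation, target expression, evaluation) are paraphrased from the summary above and are assumptions, not the published TAXMOD dimensions.

```python
# Purely illustrative: positioning one study along assumed TAXMOD-style
# dimensions so that studies can be filtered and compared. The dimension
# names below are assumptions, not the published taxonomy.
from dataclasses import dataclass

@dataclass(frozen=True)
class StudyEntry:
    embodiment: str          # e.g. "humanoid robot" or "virtual agent"
    manipulation: str        # "posture modulation", "motion modulation", "key expression"
    target_expression: str   # e.g. "extraversion", "warmth", "dominance", "valence"
    evaluation: str          # e.g. "adult online rating", "child perception study"

example = StudyEntry(
    embodiment="humanoid robot",
    manipulation="posture modulation",
    target_expression="dominance",
    evaluation="child perception study",
)
print(example)
```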
In a first perception study, we evaluate an educational robot displaying non-verbal behaviours that express high or low warmth and competence, with children at primary schools and at a camp. We show that style expression by a humanoid robot is possible. Bodily posture, hand gestures and paralinguistic cues were manipulated to evoke the expression of a specific level of warmth and competence. The manipulation of the competence dimension in our model was successful. Warmth manipulations were perceived as intended only in combination with high competence. Moreover, context influenced children’s perceptions: at school, the robot was perceived as warmer and more competent than at the camp.
In a second perception study, we evaluate an educational robot displaying non-verbal behaviours expressing high or low dominance. We modulate bodily posture and movement, specifically by manipulating body expansiveness. We show the validity of body-expansiveness modulation for dominance expression in both postures and gestures, and show that with a limited set of parameters we can moderate dominance expression. Specific postures and gestures have a natural tendency to be perceived as more or less dominant. Further, the manipulation effect is consistent across a variety of behaviours, except for a sitting pose. This study provides evidence that body expansiveness is an important factor in dominance expression and that this effect is independent of specific behaviours and viewing angle.
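As a minimal sketch of what such a parameter-based manipulation could look like, the code below scales a neutral humanoid pose towards a contracted (low-dominance) or expanded (high-dominance) variant with a single expansiveness parameter. The joint names, angles and offsets are hypothetical illustrations, not the actual NAO parameters used in the study.

```python
# Minimal sketch (not the study implementation): modulating a humanoid posture
# with one body-expansiveness parameter in [0, 1], where 0 is contracted
# (low dominance) and 1 is expanded (high dominance). Joint names and angles
# are hypothetical, not NAO-specific values.

NEUTRAL_POSE = {            # joint angles in degrees for a neutral stance
    "shoulder_roll": 10.0,
    "elbow_flex": 40.0,
    "head_pitch": 0.0,
    "torso_lean": 0.0,
}

EXPANSIVENESS_OFFSETS = {   # offsets applied at full expansion; reversed when contracted
    "shoulder_roll": 25.0,  # arms further away from the torso
    "elbow_flex": -20.0,    # straighter arms
    "head_pitch": -10.0,    # chin up
    "torso_lean": -5.0,     # upright, slightly backward lean
}

def expansive_pose(expansiveness: float) -> dict:
    """Interpolate the neutral pose towards an expanded or contracted variant."""
    e = min(max(expansiveness, 0.0), 1.0)
    scale = 2.0 * e - 1.0   # map [0, 1] to [-1, 1] around the neutral pose
    return {
        joint: NEUTRAL_POSE[joint] + scale * EXPANSIVENESS_OFFSETS[joint]
        for joint in NEUTRAL_POSE
    }

print(expansive_pose(0.1))  # contracted, low-dominance posture
print(expansive_pose(0.9))  # expanded, high-dominance posture
```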
We study the effect of stylised behaviours on children’s learning approach and learning gain by having a NAO robot guide children through an inquiry-based science learning task in which children roll rollers down a slope to discover laws of movement, friction and gravity. Robot style is implemented as variations in verbal strategy and non-verbal style expression, resulting in an expert or a facilitator interaction style. We found no effect of robot interaction style on children’s learning approach or gain. Based on verbal behaviour variations alone, children perceived the explaining robot (either the expert style, or the explaining verbal strategy with neutral non-verbal behaviour) as more competent than the robot giving evidence descriptions (either the facilitator style, or the evidence-description verbal strategy with neutral non-verbal behaviour). These perception differences did not impact the learning approach or gain in the present study. We did not find perception differences based on the variation of non-verbal behaviour. We did find that a robot giving children feedback during rolling trials caused them to play longer and to conduct more informative experiments compared to no feedback. However, this difference in learning approach did not impact learning gain.
The main research question addressed in part II is: How to personalise learning content based on personal learning objectives?
First, we look into how learning goals are formulated in pedagogy and in ontologies for education: effective learning goals must be attuned to an appropriate and desired difficulty level. One way to structure this is Bloom’s taxonomy. A learning goal must also have attributes that describe the goal and its relations to other goals. We then model educational objectives (i.e., achievements, learning goals and accompanying tasks) in an ontology. The upper ontology structures the classes and relations and defines domain-independent constructs (i.e., level and topic). The domain model specifies diabetes self-management training objectives for young children based on current checklists and expert input. The resulting knowledge base was considered relevant to the diabetes domain and to cover it to a considerable extent. From this we conclude that our upper model adequately supports the formalisation of the implicit knowledge of health care professionals about diabetes self-management training. A field study with children with type 1 diabetes in the Netherlands and Italy showed that an SIA-ITS offering tasks based on our model supports the basic needs for autonomy, competence, and relatedness of children with diabetes. For the formalisation of domain-specific learning goals, achievements, tasks and materials in the knowledge structure we recommend the following design guidelines: work in a multidisciplinary team (include domain and pedagogic experts next to knowledge engineers to define an inventory of important learning goals and learning activities); formulate achievements from logical learning units (e.g., daily challenges) that require a subset of the knowledge and skills encapsulated in the goals, to improve relevance; formulate achievements and goals from the perspective of the child, to facilitate ownership and increase experienced relevance; and define user characteristics relevant to goal and/or task selection. For the integration of the knowledge structure in a multi-modal intelligent tutoring system we recommend the following design guidelines: provide instruction and explanation to the child on how achievements, goals and tasks are selected and can be attained (i.e., that progress on a goal is gained through task completion, and what benefits this earns); embed the objectives in the ITS application to make them easily accessible to the (child) user and integrate them with other system functionality, such as feedback on progress provided by an SIA; and offer sufficient learning content, such as games and quizzes, to maintain interest and engagement.
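The sketch below illustrates the kind of structure described above, with achievements composed of learning goals, goals trained by tasks, and level and topic as domain-independent attributes. It uses plain Python classes rather than an OWL ontology, and all class, attribute and instance names are illustrative assumptions, not the published PAL model.

```python
# Illustrative sketch of an upper model with achievements, learning goals and
# tasks; not the published PAL ontology. Names and values are assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):             # domain-independent difficulty construct,
    REMEMBER = 1               # loosely following Bloom's taxonomy
    UNDERSTAND = 2
    APPLY = 3

@dataclass
class Task:
    name: str
    description: str

@dataclass
class LearningGoal:
    name: str
    topic: str                 # domain-independent construct, e.g. "nutrition"
    level: Level
    tasks: list = field(default_factory=list)       # tasks that train this goal

@dataclass
class Achievement:
    name: str                  # formulated from the child's perspective
    goals: list = field(default_factory=list)       # goals required for the achievement

# Example domain instances for diabetes self-management training:
quiz = Task("carb-quiz", "Quiz on carbohydrate counting")
goal = LearningGoal("Count the carbs in a meal", topic="nutrition",
                    level=Level.APPLY, tasks=[quiz])
achievement = Achievement("I can plan my own lunch", goals=[goal])
```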
We developed an authoring tool for collaborative personal goal setting and monitoring, with a tree-based interface adapted from game design, that implements the ontology of diabetes self-management education, and we co-evaluated this interface with health care professionals. We propose the following design guidelines for an authoring tool: provide clear visual feedback on goal structure, active state and progress; consistently use a different representation (e.g., shape) for different concepts of the model (e.g., goal and achievement); cover the full domain and different skill levels with the finite set of goals; and support the assessment of current abilities next to goal setting, progress monitoring and goal-attainment registration.
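As a minimal sketch of how the progress and active-state feedback in such a tree-based interface could be derived, the code below aggregates task completion over a tree of goal nodes. The node structure and field names are assumptions for illustration, not the actual authoring tool implementation.

```python
# Minimal sketch (assumptions, not the authoring tool implementation): deriving
# progress and active state for visual feedback over a tree of goal nodes.
from dataclasses import dataclass, field

@dataclass
class GoalNode:
    name: str
    completed_tasks: int = 0
    total_tasks: int = 0
    children: list = field(default_factory=list)    # sub-goals of this node

    def _counts(self):
        """Sum completed and total tasks over this node and all sub-goals."""
        done, total = self.completed_tasks, self.total_tasks
        for child in self.children:
            child_done, child_total = child._counts()
            done, total = done + child_done, total + child_total
        return done, total

    def progress(self) -> float:
        done, total = self._counts()
        return done / total if total else 0.0

    def is_active(self) -> bool:
        """A node is shown as active while it is started but not yet completed."""
        return 0.0 < self.progress() < 1.0

tree = GoalNode("Diabetes self-management", children=[
    GoalNode("Know my insulin", completed_tasks=2, total_tasks=3),
    GoalNode("Count carbs", completed_tasks=0, total_tasks=4),
])
print(f"{tree.progress():.0%} complete, active={tree.is_active()}")  # 29% complete, active=True
```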
We developed an mHealth dashboard as an interface for personal goal and task selection and monitoring, and co-evaluated this interface with children with diabetes. The interface implements our ontology of diabetes self-management education. The following design elements were understandable for all children: colouring indicating status, and navigation between layers of information. Children experienced difficulties interpreting the meaning conveyed by iconic presentations, understanding the layered information, and navigating. Based on the reported usability issues, we present guidelines for the design of a dashboard for children: provide descriptive labels next to visual elements, because children have limited experience with apps and therefore with interpreting icons; connect related elements by placing them in close proximity and in boxes with appropriate labels; ease navigation between layers when hiding detailed information to avoid cognitive overload; and avoid cluttering elements such as navigation bars.
The work in this thesis shows that robots can express different pedagogical styles perceivable by young children. Dominance expression mainly depends on body expansiveness, whereas warmth and competence expression rely on a complex set of behaviour modulations. However, the current style variations are too subtle to impact learning approach and gain. With respect to content personalisation, we show that a structure for, and selection of, learning objectives provides both a personalised learning path and personalised content.
Overall, we conclude that to impact learning approach and gain, SIA behaviour must not only be modulated, it must also be noticed by the learner. Learning objectives and content should be formalised within a structure, and a user-friendly interface is needed to select objectives and tasks with accompanying content and to monitor progress. The success of an SIA-ITS depends on the amount of available content and on the social interaction capabilities of the SIA.