Emotional agents for collaborative e-learning

In this paper, we discuss the social intelligence that renders the affective behaviors of intelligent agents and its application to a collaborative learning system. We argue that socially appropriate affective behaviors would add a new dimension to collaborative learning systems. We present a system that recognizes the six universal facial expressions (happiness, sadness, anger, fear, surprise, and disgust) using an agent-based approach. We then describe how emotions can be visualized efficiently and effectively in collaborative virtual environments (CVEs) through an animated virtual head (an Emotional Embodied Conversational Agent) designed to express, and react to, the universal facial expressions. The objective of the paper is to present the emotional framework EMASPEL (Emotional Multi-Agents System for Peer-to-Peer E-Learning), which is based on a multi-agent architecture.

Received: September 17th, 2007

Revised: September 30th, 2007

