In this paper, we discuss the social intelligence that renders the affective behaviors of intelligent agents and its application to a collaborative learning system. We argue that socially appropriate affective behaviors add a new dimension to collaborative learning systems. We present a system that recognizes the six universal facial expressions (happiness, sadness, anger, fear, surprise, and disgust) using an agent-based approach. We then describe how emotions can be visualized efficiently and effectively in Collaborative Virtual Environments (CVEs) with an animated virtual head (an Emotional Embodied Conversational Agent) designed to express, and act in response to, the universal facial expressions. The objective of the paper is to present the emotional framework EMASPEL (Emotional Multi-Agents System for Peer-to-peer E-Learning), which is based on a multi-agent architecture.
Received: September 17th, 2007
Revised: September 30th, 2007