Formal representation of emotions in computer systems

From the perspective of computer systems, emotions are difficult to work with. Knowledge about emotion is often uncertain, either because data are unavailable or incomplete, or because measurements of emotion are themselves error-prone and unreliable. Moreover, computational models of emotion are still being developed and have not yet reached a level of maturity and widespread use that would allow efficient interpretation and processing of emotion data, or decision making based on the goals that have been set.

Today, there are several computer languages that were designed specifically for annotating the different emotional states present (i.e. that can be perceived) in multimedia files. These languages have different levels of expressivity and formality, and are used for various purposes. The most important and frequently used are: Synchronized Multimedia Integration Language (SMIL), Speech Synthesis Markup Language (SSML), Extensible MultiModal Annotation Markup Language (EMMA), Emotion Annotation and Representation Language (EARL) and Virtual Human Markup Language (VHML). All these meta-formats for describing emotions are stored in formatted text files that annotate data about the emotions found in other files; the annotated files themselves may be of any format. None of the existing meta-formats is based on logic.

The most recent emotion language to be developed (in fact, its development is still ongoing) is the Emotion Markup Language, or EmotionML for short. EmotionML is being developed under the umbrella of the W3C, bringing together partners from academia and industry such as DFKI, Deutsche Telekom, the Fraunhofer Institute and Nuance Communications. Because EmotionML is based on XML, it is easy to build, parse and maintain, it is not tied to a specific platform, and it can also be read by human experts. It is designed as a “general purpose annotation language” and has the largest vocabulary of all existing emotion languages.
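To illustrate this XML basis, the following sketch shows a minimal EmotionML document and the few lines of Python needed to parse it. The namespace and vocabulary URIs follow the W3C EmotionML drafts; treat the document itself as an illustration, not a normative example.

```python
import xml.etree.ElementTree as ET

# A minimal EmotionML document: one emotion annotated with a category
# from the "big six" vocabulary.  Namespace and category-set URIs are
# taken from the W3C EmotionML drafts and shown here for illustration.
EMOTIONML = """\
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <emotion>
    <category name="happiness"/>
  </emotion>
</emotionml>"""

# Map a prefix to the EmotionML namespace so we can query with XPath.
NS = {"em": "http://www.w3.org/2009/10/emotionml"}

root = ET.fromstring(EMOTIONML)
categories = [c.get("name") for c in root.findall(".//em:category", NS)]
print(categories)  # prints ['happiness']
```

Parsing requires nothing beyond a standard XML library, which is precisely what makes EmotionML easy to build and maintain across platforms.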

EmotionML was developed as a plug-in language which may be included in a variety of applications in three main areas:

  1. Manual annotation of data;
  2. Automatic recognition of emotion-related states from user behavior; and
  3. Generation of emotion-related system behavior.

In practical terms, EmotionML is very well suited to annotating affect in multimedia content. Once the content is annotated, it can be stored in an XML database and retrieved according to various query parameters supported by EmotionML syntax and semantics. For example, it is possible to represent emotion categories (i.e. discrete emotions such as anger, disgust, fear, happiness, sadness, and surprise), values of different emotion dimensions, appraisals and action tendencies, expert confidence in annotations, emotion expression modalities (e.g. voice), start and end times of specific emotions in video or sound files, the time course of emotions, etc.
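A sketch of how such annotations might look and be queried follows, using Python's standard xml.etree module. The element and attribute names (category, confidence, expressed-through, start, end) follow the EmotionML drafts, while the concrete clip, times and values are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical EmotionML annotation of a single clip: two emotions, each
# with a category, an annotator confidence, an expression modality and
# start/end times (here read simply as milliseconds into the clip).
DOC = """\
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <emotion expressed-through="voice" start="1000" end="3500">
    <category name="anger" confidence="0.8"/>
  </emotion>
  <emotion expressed-through="face" start="4000" end="6000">
    <category name="surprise" confidence="0.6"/>
  </emotion>
</emotionml>"""

NS = {"em": "http://www.w3.org/2009/10/emotionml"}

def annotations(xml_text):
    """Flatten <emotion> elements into query-friendly dictionaries."""
    records = []
    for emo in ET.fromstring(xml_text).findall("em:emotion", NS):
        cat = emo.find("em:category", NS)
        records.append({
            "category": cat.get("name"),
            "confidence": float(cat.get("confidence", "1.0")),
            "modality": emo.get("expressed-through"),
            "start": int(emo.get("start")),
            "end": int(emo.get("end")),
        })
    return records

# Example query: which emotions were expressed through the voice?
vocal = [r["category"] for r in annotations(DOC) if r["modality"] == "voice"]
print(vocal)  # prints ['anger']
```

The same flattened records could just as easily be filtered by confidence, by time window, or by any other annotated parameter.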

As could be expected, EmotionML does not define how to annotate emotion as such, i.e. how the content of EmotionML files is to be obtained. That content may come from subjects’ questionnaires, expert interviews, automated physiology-based emotion estimation, or any other method suitable for identifying the specific values defined by EmotionML semantics. However, none of these approaches is without drawbacks. The choice of which to use will depend on the circumstances of each case, balancing trade-offs between accuracy, time and implementation cost.

However, although EmotionML may excel in terms of its rich semantics and its simple, effective syntax, it is still a purely XML-based language without the capabilities for higher-level knowledge representation and automatic reasoning found in, for example, RDF, RDFS and OWL. EmotionML is indeed very good for information storage and interchange, but it does not define mechanisms for using the stored information, patterns for what to do with it, or methods and best practices for implementing custom tools that read, process and write EmotionML statements. All these higher-level processes and complex tasks are left to individual researchers, their particular requirements and their implementation capabilities.
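Even a basic processing step such as discarding low-confidence labels is therefore a custom tool. A minimal sketch in Python, assuming the EmotionML annotations have already been parsed into simple records with a category name and a confidence value (both record layout and values are hypothetical):

```python
# EmotionML itself says nothing about what to do with annotations once
# they are stored, so even "keep only trustworthy labels" must be
# implemented by the researcher.  A minimal filtering rule:

def reliable(records, threshold=0.5):
    """Keep annotations whose annotator/recognizer confidence meets the threshold."""
    return [r for r in records if r["confidence"] >= threshold]

# Hypothetical parsed annotations (category plus confidence).
records = [
    {"category": "anger",    "confidence": 0.8},
    {"category": "sadness",  "confidence": 0.3},
    {"category": "surprise", "confidence": 0.6},
]

print([r["category"] for r in reliable(records)])  # prints ['anger', 'surprise']
```

Anything more ambitious, such as inferring new facts from stored annotations, would require mapping EmotionML data into a logic-based formalism like RDF or OWL, which the language itself does not provide.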