Affective multimedia databases – state of the art (3)

With regard to implementation, affective multimedia databases have very simple structures, usually consisting of a file repository and a manifest file that describes the repository's content. The manifest is a plain-text, Microsoft Excel or comma-separated values (CSV) file listing attributes such as a unique identifier, a semantic descriptor and the eliciting affect for each multimedia document in the repository. By combining unique folder paths with unique file names, each stimulus obtains a Uniform Resource Identifier (URI) through which it can be retrieved from the database. In the case of IAPS the unique identifier is the name of the stimulus file, e.g. 5200.jpg, 5201.jpg etc. Contemporary affective multimedia databases are therefore neither relational databases nor XML databases: they are merely document repositories accompanied by a description document in a provisional format that is sometimes only human-readable.
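
To make this concrete, the sketch below parses a hypothetical CSV manifest of this kind and resolves each stimulus identifier to a file URI. The column names (identifier, descriptor, affect) and the repository layout are illustrative assumptions, not the actual IAPS format.

```python
import csv
from pathlib import Path

REPOSITORY = Path("stimuli")             # hypothetical repository root
MANIFEST = REPOSITORY / "manifest.csv"   # hypothetical manifest file

def load_manifest(path: Path) -> dict[str, dict]:
    """Map each stimulus identifier to its metadata and file URI."""
    stimuli = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # The stimulus URI is just the repository path plus the file
            # name, e.g. identifier "5200" resolves to stimuli/5200.jpg.
            row["uri"] = (REPOSITORY / f"{row['identifier']}.jpg").resolve().as_uri()
            stimuli[row["identifier"]] = row
    return stimuli

stimuli = load_manifest(MANIFEST)
print(stimuli["5200"]["descriptor"], stimuli["5200"]["uri"])
```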

Some affective multimedia databases, such as GAPED, the BioID Face Database, NimStim and KDEF, do not even have a description file. In these databases the semantic and affective content of stimuli must be deduced implicitly from the stimuli file paths. This is the simplest possible database structure, and it is only minimally sufficient for conveying the content and meaning of stimuli to a subject. With such databases the user is expected to inspect all stimuli and select them manually. As long as a database does not contain many stimuli per level of its semantic taxonomy, manual document retrieval remains manageable. For example, GAPED, one of the most recently developed affective multimedia databases, contains 730 negative, neutral and positive pictures, with up to 159 different stimuli in a single named folder.
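
The sketch below shows how such implicit labelling might be recovered programmatically by treating each parent folder name as a category label. The folder layout and image formats are assumptions modelled on this GAPED-style convention, not the exact structure of any particular database.

```python
from pathlib import Path

DATABASE_ROOT = Path("GAPED")                 # hypothetical local copy
IMAGE_EXTENSIONS = {".bmp", ".jpg", ".png"}   # assumed stimulus formats

def label_stimuli(root: Path) -> list[tuple[str, str]]:
    """Pair each picture with the category implied by its parent folder.

    Without a description file, the folder name is the only
    machine-readable hint about a stimulus's semantic and affective
    content.
    """
    return [(picture.parent.name, picture.name)
            for picture in sorted(root.rglob("*"))
            if picture.suffix.lower() in IMAGE_EXTENSIONS]

for category, filename in label_stimuli(DATABASE_ROOT):
    print(f"{category}: {filename}")
```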

In terms of knowledge management, affective multimedia databases contain only data about the stimuli themselves and do not describe the semantics induced by the stimuli. If a stimulus video shows a dog barking, the database does not specify the concepts "animal", "dog", "to bark" or "dog barking". Affective multimedia databases contain no knowledge taxonomies or any other information about the concepts present in stimuli; they state only the single most dominant semantic label per stimulus. They provide no definitions of these semantic labels, nor do they link them to knowledge bases such as DBpedia. It is therefore entirely up to the database expert who integrates an affective multimedia database into a larger system to find the most appropriate knowledge base and reasoning service for inferring the meaning of the multimedia semantic descriptors.
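
As a minimal sketch of what such an integration step could look like, the code below queries the public DBpedia SPARQL endpoint for the abstract of a concept matching a stimulus descriptor. The descriptor "Dog" and the matching strategy (an exact rdfs:label lookup) are illustrative assumptions; a production system would need far more robust entity linking.

```python
import json
import urllib.parse
import urllib.request

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"

def dbpedia_abstract(label: str) -> str | None:
    """Fetch the English abstract of the DBpedia resource whose
    rdfs:label exactly matches a stimulus's semantic descriptor."""
    query = """
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?abstract WHERE {
            ?resource rdfs:label "%s"@en ;
                      dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "en")
        } LIMIT 1
    """ % label
    url = DBPEDIA_ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"})
    with urllib.request.urlopen(url) as response:
        results = json.load(response)["results"]["bindings"]
    return results[0]["abstract"]["value"] if results else None

# e.g. a descriptor taken from a stimulus manifest
print(dbpedia_abstract("Dog"))
```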
