EASAIER Use Case Scenarios
Possible scenarios where the system could be used.
EASAIER Demos
Experience some of the features of the system.
EASAIER Public Deliverables
D3.2 Prototype on speech and music retrieval systems with vocal query interface
D3.3 Prototype on cross media retrieval system
D4.1 Prototype segmentation, separation and speaker/instrument identification system
D4.2 Prototype Transcription system
D5.1 Prototype of Looping and Marking modules
D5.2 Time stretching modules with synchronized multimedia prototype
The FP6 project EASAIER developed a state-of-the-art system for archiving audio and related materials, together with a web-access client and an enriched-access stand-alone client. Many digital sound archives still suffer from serious access problems. Materials are often held in different formats, with related media in separate collections, and with non-standard, specialist, incomplete or even erroneous metadata. As a result, the end user is unable to discover the full value of the archived material.
EASAIER addresses these issues with an innovative remote access system which extends beyond standard content management and retrieval systems. The EASAIER system has been designed with sound archives, libraries, museums, broadcast archives, and music schools in mind. However, the tools may be used by anyone interested in accessing archived material, amateur or professional, regardless of the material involved. Furthermore, the system enriches the access experience, enabling the user to experiment with the materials in exciting new ways. Its features include enhanced cross-media retrieval functionality, multimedia synchronisation, audio and video processing, and analysis and visualisation tools, all combined within a single user-configurable interface.
The EASAIER prototype will be deployed in sound archives in various EU countries at both national and international level. We expect that users of the EASAIER system will benefit from our vision of an intuitive and accessible retrieval engine, giving them the opportunity to search music, speech and related materials (images / videos) not only using metadata, but also based on content similarity. Retrieved material and its features may be listened to, visualised and annotated from within an interface which is configurable by the user. The user can retrieve content based on a variety of advanced features, mark retrieved audio segments of interest and loop through the marked areas, slow down the performance without changing its pitch, and separate out individual instruments or speakers.
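As a rough illustration of pitch-preserving slow-down, the sketch below stretches a recording to half speed with an off-the-shelf phase-vocoder routine from the open-source librosa library. It is only an indicative example, not the EASAIER time-stretching module, and the file names are placeholders.

```python
# Illustrative sketch: slow a recording to half speed without changing its pitch.
# This uses librosa's generic phase-vocoder time stretch, not the EASAIER module;
# "recording.wav" and "recording_half_speed.wav" are placeholder file names.
import librosa
import soundfile as sf

y, sr = librosa.load("recording.wav", sr=None)      # keep the original sample rate
y_slow = librosa.effects.time_stretch(y, rate=0.5)  # 0.5x speed, pitch unchanged
sf.write("recording_half_speed.wav", y_slow, sr)
```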
The EASAIER project, now in its maturity, has gone well beyond the state of the art in the targeted fields of multimedia processing, archiving and access. An innovative approach to knowledge representation has led to the development of the music ontologies and the audio features ontology, which are now widely used outside the consortium. In the field of automatic feature extraction, we have developed new, high-performance methods to identify and characterise sound objects (emotion detection, laughter detection, key extraction, tempo identification, and more). In the area of presentation of multimedia material, novel sound source separation, equalisation and noise reduction algorithms have been introduced. A key innovation also allows the video stream to remain synchronised with the audio during real-time time and pitch scaling.
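As a small indication of the kind of automatic feature extraction listed above, the sketch below estimates the tempo of a recording with a standard beat-tracking routine from the open-source librosa library. It is a generic example, not one of the feature extractors developed within EASAIER; "recording.wav" is a placeholder file name.

```python
# Generic tempo-estimation sketch (not the EASAIER extractor).
import librosa

y, sr = librosa.load("recording.wav", sr=None)             # placeholder input file
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)   # global tempo (BPM) and beat frames
beat_times = librosa.frames_to_time(beat_frames, sr=sr)    # beat positions in seconds
print(f"Estimated tempo: {float(tempo):.1f} BPM over {len(beat_times)} detected beats")
```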