LIMOS coordinator: FAVREAU Jean-Marie
Project start: Oct. 1, 2021 - Project end: June 30, 2022
Project led by LIMOS


Development of a virtual immersion system for sound spatialization for visually impaired people.

Appropriating urban space, for visually impaired people, relies heavily on the non-visual senses, and in particular on hearing: listening directly to sound sources, consciously or unconsciously interpreting sound reverberations, or producing sounds for active echolocation are all part of daily practice. These sound environments can also make public space difficult to navigate, when the volume is too intense, when sources blend together, or when the environment triggers more or less clearly identified psychological barriers.

Several professions support visually impaired people in discovering and coming to terms with these challenges: locomotion instructors, psychologists, speech therapists, etc. Each of these professions explores, with its own tools, the possibilities that hearing offers for discovering the world. However, to our knowledge, there is no widely available tool for reproducing a sound environment in a controlled space, in the manner of a virtual reality simulator.
This project, carried out by the Centre de Rééducation pour Déficients Visuels (CRDV) and the Laboratoire d'Informatique, de Modélisation et d'Optimisation des Systèmes (LIMOS), aims to develop a sound spatialization system composed of sound diffusion equipment and a control station. The system should let each professional quickly adapt it to their specific needs, both by offering a set of pre-designed activities and environments and by allowing these configurations to be extended for more specific requirements.

Various technical solutions exist for spatializing sound for immersive listening. However, since our objective is to reproduce mobile sources while also letting the listener move around the scene, the number of suitable solutions shrinks considerably. Existing solutions often require active tracking of the listener so that the sound field can be computed in real time from their position. Yet visually impaired people have highly refined auditory localisation skills, which rely on micro-movements of the head, making it very difficult to reconstruct a convincing spatialised sound through digital processing alone.
The approach we propose is multi-point diffusion: spatialization is obtained through a network of speakers, so that each sound is emitted as close as possible to the location of the virtual object producing it. The technical and scientific challenges of this project concern the recording of isolated sound sources, the assembly of suitable diffusion equipment, the rendering of sounds in a multi-point virtual sound space, and the design of an interface giving precise yet ergonomic control of the system to professionals who are not audio specialists.
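One common way to realize such multi-point diffusion is distance-based amplitude panning: each speaker's gain grows with its proximity to the virtual source. The project text does not specify a gain law, so the following is only a minimal illustrative sketch; the function name, the inverse-distance rolloff, and the room layout are all hypothetical assumptions, not part of the project.

```python
import math

def speaker_gains(source, speakers, rolloff=2.0):
    """Distance-based amplitude panning (hypothetical sketch):
    speakers closer to the virtual source get higher gains."""
    # Inverse-distance weight per speaker (epsilon avoids division by zero)
    weights = [1.0 / max(math.dist(source, s), 1e-6) ** rolloff
               for s in speakers]
    # Normalize so total power (sum of squared gains) equals 1
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights]

# Four speakers at the corners of a 4 m x 4 m room
speakers = [(0, 0), (4, 0), (0, 4), (4, 4)]
# A virtual source near the lower-left corner is loudest on speaker (0, 0)
gains = speaker_gains((1, 1), speakers)
```

In a real installation the gain law would be tuned (and possibly replaced by a method such as VBAP) so that localisation cues remain stable as the listener moves, which is precisely one of the scientific questions the project addresses.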

Partner organisation:

Funder: Other
Specific funder: Clermont Auvergne Métropole