Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543339
Title: Analysis, modelling, and synthesis of everyday impact sounds
Author: Ahmad, Wasim
ISNI:       0000 0004 2713 3962
Awarding Body: University of Surrey
Current Institution: University of Surrey
Date of Award: 2011
Abstract:
The environment we live in contains diverse types of impact sounds, such as hitting, collision, bumping, breaking, bouncing and dripping. Pre-recorded versions of these sounds are extensively used in interactive and virtual-reality applications. However, in those environments audio rendering is often limited to the playback of pre-recorded samples, possibly with processed amplitude, pitch or filter envelopes. Due to their static nature, pre-recordings alone cannot satisfy the sound-rendering requirements of the large variety of situations found, for example, in current video games. Consequently, it is difficult to match the available recorded sound samples with the simulated interactive animation, which in many cases causes discrepancies between the generated visuals and their associated pre-recorded sounds. Another problem encountered in interactive environments is that the length of the sound needed for a particular simulated situation cannot be known in advance. For example, in a computer game the user might stay in the same environment for several minutes. If the recorded source sound is too long it can easily be truncated or faded out, but it cannot easily be extended. As a consequence, the sound designer is limited to playing back a small set of recordings repetitively, which can become tedious to the listener. To tackle these issues, two content-based analysis/synthesis (CBAS) algorithms are proposed in this thesis. Our objectives are twofold: to develop algorithms that optimally represent a large set of impact-sound data through analysis, and to generate realistic and expressive continuous sounds that can be used in interactive and multimedia applications. First, our work presents a new shift-invariant analysis scheme for transient impact sounds. The first algorithm, wavelet additive synthesis (WAS), models the sounds in the spectral domain.
The WAS algorithm applies a minimum-phase-reconstruction-based discrete wavelet transform (MiP-DWT) to decompose an impact sound into frequency bands, and each band is parameterised as a set of orthogonal basis functions and their weights. During the synthesis process, these weight vectors are selected and tuned according to the parameters of the target sound. The second algorithm, sound texture synthesis (STS), models the sounds in the temporal domain and uses sound textures (grains) to create finely controlled synthesised impact sounds. During the analysis stage, a set of pre-recorded impact sounds is decomposed into multi-level time-scale components, or grains. The extracted sound grains are then optimised using a dictionary learning algorithm.
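The analysis/synthesis split behind WAS can be illustrated with a minimal sketch. Note the assumptions: a plain Haar DWT is used as a simplified stand-in for the thesis's MiP-DWT, and the per-band gains stand in for the selection and tuning of weight vectors; all function names here are illustrative, not the author's.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of one Haar DWT level."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

def analyse(x, levels):
    """Decompose a signal into `levels` detail bands plus a coarse
    residual -- a toy version of the multi-band analysis stage."""
    bands = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        bands.append(detail)  # bands[0] is the finest scale
    return approx, bands

def synthesise(approx, bands, gains=None):
    """Reconstruct the signal, optionally re-weighting each band --
    a stand-in for tuning per-band weights to a target sound."""
    if gains is None:
        gains = [1.0] * len(bands)
    x = approx
    for detail, g in zip(reversed(bands), reversed(gains)):
        x = haar_idwt(x, g * detail)
    return x
```

With unit gains the reconstruction is exact (the Haar transform is orthonormal); changing a gain alters only the corresponding band, which is the property the synthesis stage exploits.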
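The grain-based STS stage can likewise be caricatured: slice a signal into fixed-length grains, then learn a small grain dictionary. The k-means-style update below is a deliberately simple stand-in for the dictionary learning optimisation in the thesis, and all names and parameters are assumptions for illustration only.

```python
import numpy as np

def extract_grains(x, grain_len, hop):
    """Slice a signal into fixed-length grains taken every `hop` samples."""
    x = np.asarray(x, dtype=float)
    starts = range(0, len(x) - grain_len + 1, hop)
    return np.array([x[s:s + grain_len] for s in starts])

def learn_dictionary(grains, n_atoms, n_iter=20):
    """Toy dictionary learning by alternating assignment and update
    (k-means style). Atoms are initialised from the first grains."""
    atoms = grains[:n_atoms].copy()
    labels = np.zeros(len(grains), dtype=int)
    for _ in range(n_iter):
        # Assign each grain to its nearest atom (squared error).
        dist = ((grains[:, None, :] - atoms[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)
        # Update each atom as the mean of its assigned grains.
        for k in range(n_atoms):
            if np.any(labels == k):
                atoms[k] = grains[labels == k].mean(axis=0)
    return atoms, labels
```

Resynthesis then amounts to replaying dictionary atoms in place of the original grains, so a short learned dictionary can drive an arbitrarily long output stream.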
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.543339
DOI: Not available