Poster Craze

10:40-11:00 on Tuesday, 3rd September

P350 Lecture Theatre, Parkside

Chair:
Real-Time Implementation of an Elasto-Plastic Friction Model using Finite-Difference Schemes
Silvin Willemsen, Stefan Bilbao and Stefania Serafin

The simulation of a bowed string is challenging due to the strongly non-linear relationship between the bow and the string. This relationship can be described through a model of friction. Several friction models have been proposed in the literature, ranging from simple velocity-dependent curves to more accurate formulations. Similarly, a highly accurate technique for simulating a stiff string is the use of finite-difference time-domain (FDTD) methods. As these models are generally computationally heavy, real-time implementation is challenging. This paper presents a real-time implementation combining a complex friction model, namely the elasto-plastic friction model, with a stiff string simulated using FDTD methods. We show that it is possible to keep the CPU usage of a single bowed string below 6 percent. For real-time control of the bowed string, the Sensel Morph is used.
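
As a rough illustration of the ingredients named above, the Python sketch below updates an explicit FDTD scheme for a stiff string with a simple velocity-dependent friction curve at the bow point. All parameter values are illustrative, and the nonlinearity is handled explicitly rather than with the iterative solver a rigorous elasto-plastic scheme requires; this is a stand-in, not the authors' implementation.

```python
import numpy as np

# Minimal explicit FDTD update for a stiff string (sketch).
# The paper couples this with an elasto-plastic friction model; here a
# simple velocity-dependent friction curve is used as a stand-in.
SR = 44100                       # sample rate [Hz]
k = 1.0 / SR                     # time step
L, c, kappa = 1.0, 200.0, 1.0    # length, wave speed, stiffness coefficient
# grid spacing from the standard stiff-string stability bound
h = np.sqrt((c*c*k*k + np.sqrt(c**4*k**4 + 16*kappa**2*k*k)) / 2)
N = int(L / h); h = L / N
lam2, mu2 = (c*k/h)**2, (kappa*k/h**2)**2

u  = np.zeros(N + 1)             # displacement at time n
up = np.zeros(N + 1)             # displacement at time n-1
bow_pos = int(0.3 * N)           # bow contact point
vb, Fb, a = 0.2, 0.5, 100.0      # bow velocity, bow force, friction steepness

def step(u, up):
    un = np.zeros_like(u)
    # interior update: wave term + stiffness term
    un[2:-2] = (2*u[2:-2] - up[2:-2]
                + lam2 * (u[3:-1] - 2*u[2:-2] + u[1:-3])
                - mu2  * (u[4:] - 4*u[3:-1] + 6*u[2:-2]
                          - 4*u[1:-3] + u[:-4]))
    # simple friction: relative velocity -> soft exponential friction curve
    # (explicit, non-iterated approximation of the nonlinearity)
    vrel = (u[bow_pos] - up[bow_pos]) / k - vb
    phi = np.sqrt(2*a) * vrel * np.exp(-a * vrel**2 + 0.5)
    un[bow_pos] -= k*k * Fb * phi
    return un, u

out = []
for n in range(SR):              # one second of audio
    u, up = step(u, up)
    out.append(u[int(0.8 * N)])  # read displacement near the bridge
```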

View Paper
Improving Monophonic Pitch Detection Using the ACF and Simple Heuristics
Carlos de Obaldía and Udo Zölzer

In this paper, a study on the performance of the short-time autocorrelation function (ACF) for determining correct pitch candidates in non-stationary sounds is presented. Input segments of a music or speech signal are analyzed by extracting the autocorrelation function, and a weighting function is applied to the candidates to assess their harmonic strength. Furthermore, a decision stage is devised which signals possible unrelated jumps in the fundamental frequency track. A technique for modifying the spectral content of the signal is presented to compensate for these jumps, along with a heuristic to return a steady fundamental frequency track for monophonic recordings. The system is evaluated on several databases and against other algorithms. Using the compensation algorithm increases the performance of the ACF-based approach, which then outperforms current detection algorithms.
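
A minimal sketch of ACF-based pitch candidate selection for a single frame, with the paper's weighting and jump-compensation heuristics reduced to a single bias toward shorter lags (weights and thresholds are illustrative assumptions, not the authors' values):

```python
import numpy as np

def acf_pitch(frame, sr, fmin=60.0, fmax=1000.0):
    frame = frame - np.mean(frame)
    # short-time autocorrelation, non-negative lags only
    acf = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    acf /= acf[0] + 1e-12                    # normalise by frame energy
    lo, hi = int(sr / fmax), int(sr / fmin)  # admissible lag range
    lags = np.arange(lo, min(hi, len(acf)))
    # slight bias toward shorter lags to discourage octave-down errors
    weighted = acf[lags] * (1.0 - 0.001 * (lags - lo))
    best = lags[np.argmax(weighted)]
    return sr / best                          # pitch estimate in Hz

sr = 16000
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 220.0 * t)         # 220 Hz test tone
print(round(acf_pitch(frame, sr), 1))         # close to 220 Hz (integer-lag quantisation)
```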

View Paper
A Real-Time Audio Effect Plug-In Inspired by the Processes of Traditional Indonesian Gamelan Music
Luke Craig and R. Mitchell Parry

This paper presents Gamelanizer, a novel real-time audio effect inspired by Javanese gamelan music theory. It is composed of anticipatory or “negative” delay and of time and pitch manipulations based on the phase vocoder. An open-source real-time C++ Virtual Studio Technology (VST) implementation of the effect, made with the JUCE framework, is available at github.com/lukemcraig/DAFx19-Gamelanizer, along with audio examples and Python implementations of vectorized and frame-by-frame approaches.
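
As a rough illustration of one subdivision level, assuming each level halves note durations and raises pitch by an octave, the sketch below uses naive 2x resampling, which produces both changes in a single step. The plug-in itself uses phase-vocoder time and pitch processing together with "negative" delay for real-time alignment; this is only a crude stand-in.

```python
import numpy as np

def subdivision_level(beat, levels=1):
    out = beat.astype(float)
    for _ in range(levels):
        # naive decimation (no anti-aliasing filter): 2x speed, +1 octave
        half = out[::2]
        # two copies of the half-length material fill the original beat
        out = np.concatenate([half, half])
    return out

sr = 44100
t = np.arange(sr) / sr
beat = np.sin(2 * np.pi * 440.0 * t)    # one 1-second "beat" at 440 Hz
level1 = subdivision_level(beat)        # 880 Hz, at double note density
```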

View Paper
Exploring audio immersion using user-generated recordings
Daniel Gomes, Joao Magalhaes and Sofia Cavaco

The abundance and ever-growing expansion of user-generated content defines a paradigm in multimedia consumption. While user immersion through audio has gained relevance in recent years due to the growing interest in virtual and augmented reality technologies, existing user-generated content visualization techniques still do not make use of immersion technologies. Here we propose a new technique for visualizing multimedia content that provides immersion through audio. While our technique focuses on audio immersion, we also propose to combine it with a video interface that aims at providing an enveloping visual experience to end users. The technique combines professional audio recordings with user-generated audio recordings of the same event. Immersion is achieved through the spatialization of the user-generated audio content with head-related transfer functions.
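
A minimal sketch of the spatialization step, assuming a measured head-related impulse response (HRIR) pair is available for the desired direction; the file names and loading format are hypothetical:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    # convolving a mono source with an HRIR pair yields a binaural signal
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)   # stereo output

# hypothetical measured pair, e.g. for 45 degrees azimuth
hrir_l = np.loadtxt('hrir_az45_left.txt')     # placeholder file names
hrir_r = np.loadtxt('hrir_az45_right.txt')
mono = np.random.randn(44100)                 # stand-in for a user recording
binaural = spatialize(mono, hrir_l, hrir_r)
```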

View Paper
Analysis and Correction of MAPS Dataset
Xuan Gong, Wei Xu, Juanting Liu and Wenqing Cheng

Automatic music transcription (AMT) is the process of converting an original music signal into a symbolic digital representation. The MIDI Aligned Piano Sounds (MAPS) dataset was established in 2010 and is the most widely used benchmark dataset for automatic piano music transcription. In this paper, error screening is carried out through an algorithmic strategy, and three data annotation problems are found in ENSTDkCl, a subset of MAPS usually used for algorithm evaluation: (1) 342 MIDI annotation deviation errors; (2) 803 unplayed note errors; (3) 1613 slow starting process errors. After algorithmic correction and manual confirmation, the corrected dataset is released. Finally, the better-performing Google model and our model are evaluated on the corrected dataset. The F-measures are 85.94% and 85.82%, respectively, both improved relative to the original dataset, which shows that the correction of the dataset is meaningful.
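
A hypothetical screening rule in the spirit of the paper's first error class (annotation deviations): compare each MIDI-annotated onset to the nearest onset detected in the audio, and flag annotations whose deviation exceeds a tolerance. Names and thresholds are illustrative, not the authors' criteria.

```python
import numpy as np

def flag_deviations(midi_onsets, audio_onsets, tol=0.05):
    flagged = []
    for t in midi_onsets:
        # nearest audio-detected onset to this annotation
        nearest = audio_onsets[np.argmin(np.abs(audio_onsets - t))]
        if abs(nearest - t) > tol:        # deviation beyond 50 ms
            flagged.append((t, nearest))
    return flagged

midi_onsets = np.array([0.50, 1.00, 1.52, 2.00])   # annotated times [s]
audio_onsets = np.array([0.51, 1.00, 1.43, 2.01])  # detected times [s]
print(flag_deviations(midi_onsets, audio_onsets))  # [(1.52, 1.43)]
```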

View Paper
Optimization of audio graphs by resampling
Pierre Donat-Bouillud, Jean-Louis Giavitto and Florent Jacquemard

Interactive music systems are dynamic real-time systems which combine control and signal processing based on an audio graph. They are often used on platforms where there are no reliable and precise real-time guarantees. Here, we present a method for optimizing audio graphs that finds a compromise between audio quality and execution-time gains by downsampling parts of the graph. We present models of quality and execution time, and we evaluate the models and our optimization algorithm experimentally.
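
A minimal sketch of the underlying trade-off, assuming a hypothetical `expensive_node` effect: part of the graph runs at half the sample rate between a downsampler and an upsampler, saving computation at some cost in quality. Deciding when such a substitution is worthwhile is exactly what the paper's quality and execution-time models address.

```python
import numpy as np
from scipy.signal import resample_poly

def expensive_node(x):
    return np.tanh(3.0 * x)              # stand-in for a costly effect

def run_downsampled(x, node, factor=2):
    y = resample_poly(x, 1, factor)      # downsample by `factor`
    y = node(y)                          # process at the lower rate
    return resample_poly(y, factor, 1)   # upsample back to the graph rate

sr = 44100
x = np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)
full_rate = expensive_node(x)                   # reference, full quality
half_rate = run_downsampled(x, expensive_node)  # cheaper, lossier version
```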

View Paper
Keytar: Melodic control of multisensory feedback from virtual strings
Federico Fontana, Andrea Passalenti and Stefania Serafin

A multisensory virtual environment has been designed, aiming at recreating a realistic interaction with a set of vibrating strings. Haptic, auditory and visual cues progressively instantiate the environment: force and tactile feedback are provided by a robotic arm that renders string reaction and string surface properties, and furthermore defines the physical touchpoint in the form of a virtual plectrum embodied by the arm's stylus. Auditory feedback is instantaneously synthesized as a result of the contacts of this plectrum against the strings, reproducing guitar sounds. A simple visual scenario contextualizes the plectrum in action along with the vibrating strings. Notes and chords are selected using a keyboard controller, in such a way that one hand is engaged in the creation of a melody while the other plucks the virtual strings. These components have been integrated within the Unity3D game development environment and run together on a PC. As reported by a group of users testing a monophonic Keytar prototype with no keyboard control, the most significant contribution to the realism of the strings comes from the haptic feedback, in particular from the textural nuances that the robotic arm synthesizes while reproducing the physical attributes of a metal surface. Their opinion hence argues for the importance of factors other than auditory feedback in the design of new musical interfaces.
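
For the auditory feedback stage, one classic and cheap way to produce a plucked-string tone on a contact event is Karplus-Strong synthesis; the abstract does not specify the synthesis algorithm used, so the sketch below is only a plausible stand-in.

```python
import numpy as np

def pluck(sr=44100, f0=196.0, dur=1.5, damping=0.996):
    N = int(sr / f0)                     # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, N)    # noise burst = pluck excitation
    out = np.empty(int(sr * dur))
    for n in range(len(out)):
        out[n] = buf[n % N]
        # averaging adjacent samples lowpasses and decays the loop
        buf[n % N] = damping * 0.5 * (buf[n % N] + buf[(n + 1) % N])
    return out

g_string = pluck(f0=196.0)               # G3, roughly a guitar open G
```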

View Paper
Statistical Sinusoidal Modeling for Expressive Sound Synthesis
Henrik von Coler

Statistical sinusoidal modeling is a method for transferring a sample library of instrument sounds into a database of sinusoidal parameters for use in real-time additive synthesis. Single sounds, capturing an instrument at combinations of pitch and intensity, are therefore segmented into attack, sustain and release. Partial amplitudes, frequencies and Bark-band energies are calculated for all sounds and segments. For the sustain part, all partial and noise parameters are transformed into probabilistic distributions. Interpolated inverse transform sampling is introduced for generating parameter trajectories during real-time synthesis, allowing the creation of sounds located at pitches and intensities between the actual support points of the sample library. Evaluation is performed by qualitative analysis of the system response to sweeps of the control parameters pitch and intensity. Results for a set of violin samples demonstrate the ability of the approach to model dynamic timbre changes, which is crucial for the perceived quality of expressive sound synthesis.
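
A minimal sketch of interpolated inverse transform sampling for a single partial's amplitude, using synthetic stand-in data: build an empirical CDF from analysed sustain frames, then map uniform random numbers through the linearly interpolated inverse CDF to generate a synthesis trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(0.3, 0.05, 500)     # stand-in for analysed amplitudes

xs = np.sort(observed)                    # empirical quantiles
cdf = (np.arange(len(xs)) + 0.5) / len(xs)

def sample(n):
    u = rng.uniform(0, 1, n)
    # linear interpolation of the inverse CDF ("interpolated" ITS)
    return np.interp(u, cdf, xs)

trajectory = sample(1024)                 # amplitude values for synthesis
```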

View Paper