Fundamental frequency (Fo) patterns are analysed in six recordings of the Romance from the First Act of Giuseppe Verdi's opera Aida. Two of the recordings were sung by the late Swedish tenor Jussi Björling and the remaining four by other international premier tenors. Fo tracking was carried out semi-automatically using the autocorrelation program of the Soundswell Signal Workstation™ software. Intonation characteristics were measured in relation to equally tempered tuning (ETT) based on the tuning of the orchestra. Great individual differences are found. The mean deviation from ETT varied between −15 cent and +30 cent. Only Björling tended to sharpen intonation increasingly the higher he sang in his passaggio region. Moreover, in the long sustained high note at the end of the Recitative he added a portamento, while the other singers increased Fo by about 40 cent over the same tone. Vibrato rate and extent were similar among the singers, but spectrum analysis of the vibrato waveform revealed various differences. The final descending octave interval exceeded the 2:1 frequency ratio in all singers except one. The results are discussed from the points of view of interval perception, performance practice and musical expression.
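The cent measurements reported above follow the standard logarithmic definition of the cent (1200 cents per octave, so a pure 2:1 octave is exactly 1200 cents). A minimal sketch of that conversion, independent of the Soundswell software used in the study:

```python
import math

def cents(f_low: float, f_high: float) -> float:
    """Size in cents of the interval between two frequencies.

    1200 cents = one octave; positive when f_high > f_low.
    """
    return 1200.0 * math.log2(f_high / f_low)

# A pure 2:1 octave (e.g. 220 Hz to 440 Hz) is exactly 1200 cents.
print(cents(220.0, 440.0))

# An octave sung slightly wide, as most of the tenors did,
# exceeds 1200 cents (frequencies here are illustrative only):
print(round(cents(219.0, 440.0), 1))
```

Deviation from equally tempered tuning can be expressed the same way, as `cents(reference, measured)` against the ETT reference pitch derived from the orchestra's tuning.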
This is an overview of the work on synthesizing singing that has been carried out at the Department of Speech, Music and Hearing, KTH, since 1977. The origin of the work, a hardware synthesis machine, is described, and some aspects of the control program, a modified version of a text-to-speech conversion system, are reviewed. Three applications are described in which the synthesis system has paved the way for investigations of specific aspects of the singing voice. One concerns the perceptual relevance of the center frequency of the singer's formant, another deals with the characteristics of an ugly voice, and a third concerns intonation. The article is accompanied by 18 sound examples, several of which have not been published before. Finally, limitations and advantages of singing synthesis are discussed.
The KTH rule system models performance principles used by musicians when performing a musical score, within the realm of Western classical, jazz and popular music. An overview is given of the major rules involving phrasing, micro-level timing, metrical patterns and grooves, articulation, tonal tension, intonation, ensemble timing, and performance noise. By using selections of rules and rule quantities, semantic descriptions such as emotional expressions can be modeled. A recent real-time implementation provides the means for controlling the expressive character of the music. The communicative purpose and meaning of the resulting performance variations are discussed as well as limitations and future improvements.