AMPS: Past Seminar Series


Select the year you would like to see.

2005 Seminar Series


What is Music? The Super-Stimulus Theory
Speaker: Philip Dorrell
Date: 4th Jan

Musical Experience as an Aid to Language Learning
Speaker: Katie Overy, School of Music, University of Edinburgh
Date: 4th Mar

Listener Expertise in Musical Expectation
Speaker: Freya Bailes, University of Canberra
Date: 13th May

Abstract:

This paper reports a study that began by asking how it is that listeners are able to recognise a melody and anticipate the correct continuation after hearing only a few notes. Dalla Bella, Peretz & Aronoff (2003) addressed this issue and found differences in melody recognition between those with and without musical training, which they explained in terms of cohort theory. The relevance of this linguistic theory for musical expectation will be discussed alongside findings from a recent 'Name that Tune' study. In this study a series of models of melody recognition were developed and evaluated. These models were based on information relating to melodic familiarity and distinctiveness. Familiarity measures were gathered from 32 listeners (16 musicians and 16 non-musicians) judging 120 melodies. Distinctiveness measures were derived from statistical analyses of over 15,000 Western themes and melodies. The models were evaluated by attempting to predict points-of-recognition in a 'Name that Tune' experiment.

Results from the experimental component of the study do not support the findings by Dalla Bella et al. with respect to the role of musical expertise in melody recognition. Rather, simple correlations exist between musicianship, familiarity with the melodic stimuli, and the early recognition of a melody. These results raise important questions concerning what it means to be musically expert. The association of greater confidence with task performance is explored. Discussion will also focus on defining musical expertise as familiarity with a particular musical genre, and on schematic versus veridical expectations. The paper will conclude with a number of theoretical predictions concerning the role of musical expertise in listening to a different style of music, namely contemporary computer music. Results from a series of experiments investigating the detection of structure in digital music will be presented as a basis for these predictions.

The Role of Sensitising Performance Experiences in Music Performance Anxiety
Speaker: Margaret Osborne, Dianna Kenny, Australian Centre for Applied
Date: 3rd Jun

Abstract:

Aversive performance incidents play a role in the development of some anxiety disorders (Beck, 1995; Beck, Emery & Greenberg, 1985; Barlow, 2002). The role of sensitising experiences in the development of music performance anxiety (MPA) in adolescent music students has not yet been explored. Two hundred and ninety-eight music students were asked to provide written descriptions of their worst performance (what happened and how they felt), specifying their age at the time, the audience members, and any events that occurred subsequent to the performance.

Descriptions were classified according to six domains: situational and behavioural factors; affective, cognitive and somatic symptoms of anxiety; and outcome. Accounts were scored in each domain and a total score was calculated. Scores were summed to provide a linear scale that was compared to self-reported MPA (measured using the Music Performance Anxiety Inventory for Adolescents; Osborne & Kenny, 2004) and standardised trait anxiety scores (measured using the State-Trait Anxiety Inventory; Spielberger, 1983). Results indicated that MPA was best predicted by trait anxiety and gender, and that the presence of negative cognitions in the worst-performance account improved the prediction of MPA over trait anxiety and gender alone. None of the other factors added to the prediction. Females reported more emotional distress than males and had significantly higher total scores. These findings confirm patterns found in adult performers and across other forms of performance anxiety in children (e.g. test anxiety). This study highlights cognitions as an important element to address in the treatment of MPA in young musicians.

Corpus-Driven Computer Music Generation Techniques
Speaker: Michael Chan, University of New South Wales
Date: 24th Jun

Abstract:

This paper introduces the Automated Composer of Style Sensitive Music (ACSSM) system, with the goal of automatically constructing musical work that imitates a given musical style, as well as conveying musical meaning. Various hierarchical structures are used to model music in order to adapt several existing algorithms and to provide a selection of analysis and generation methods based on techniques from Artificial Intelligence and Natural Language Processing.

The basis of ACSSM is a structural algorithm that imitates musical style; in this work we have largely attempted to re-create and implement the approach taken by Cope. We further extend the approach using concepts derived from the Generative Theory of Tonal Music, which attempts to provide a deeper model of musical style. ACSSM accepts a corpus of classical pieces by a particular composer, in an XML-based format, and produces new musical works that sound more musical, and closer to the style of the given composer, than those produced by most other current algorithmic methods. To assess the effectiveness of our technique, we present the results of a preliminary public assessment of a collection of works generated by a prototype implementation of ACSSM and by various other automated methods. Current limitations of our system are also presented.

One Two Three O'clock Four O'clock Croc! What Alligators Can Tell Humans about Making Sweet Music
Speaker: Neil Todd, University of Manchester
Date: 15th Jul

Abstract:

Over the last decade I have been promoting the theory that a primitive acoustic sense inherited from our swampy ancestors has been conserved in all vertebrates from fish to humans (Todd and Merker, 2004). This sense is mediated by the sacculus, which is conventionally considered to be part of the balance system in mammals but is a hearing organ in fish, and which can be activated by loud, low-frequency sounds and vibrations in humans. In anamniotes (i.e. fish and frogs) the primitive sense plays an important role during vocal courtship displays, via a central brain projection to areas involved in producing autonomic responses. According to the theory, the primitive acoustic sense has also retained a function in amniotes (reptiles, birds and mammals) for mediating autonomic responses during vocal courtship displays, and in the case of humans during dance and loud music (arguably a form of vocal courtship display).

In order to substantiate the theory I have been engaged in a programme of research investigating the acoustic properties of the vocal displays of various amniote species. Alligators are particularly interesting for this purpose as they are aquatic, highly vocal and morphologically have a large sacculus. In this talk I present results of a study of the vocal display of the American alligator (Alligator mississippiensis) carried out at the Australian Reptile Park. Simultaneous recordings in air and water were made of vocal bouts stimulated by playback through an adapted car-stereo system, the "croc blaster".

Analysis of about one hundred edited extracts indicates that under water the alligator calls are characterised by being very loud (up to 140 dB at 1 m) and dominated by low frequencies (16-32 Hz). These sorts of sounds are very effective at activating the sacculus, particularly when transmitted by bone conduction, as is the case in underwater hearing. It is likely therefore that the primitive acoustic sense is functional in alligators. In order to provide further support for this case I show recent results that the human sacculus is highly responsive to low-frequency vibrations and indeed can be activated by alligator calls. These results are relevant to humans as much human dance music is characterised by loud, low-frequency sounds. The gators, then, may have a few tricks to teach us humans about making music.

Todd, N. P. McAngus and Merker, B. (2004). Siamang gibbons exceed the saccular threshold. Journal of the Acoustical Society of America, 115(6), 3077-3080.

Looking at Singing: Does Real-time Visual Feedback Improve the Way We Learn to Sing?
Speaker: Pat Wilson, University of Sydney
Date: 29th Jul

Abstract:

This paper, authored by Pat Wilson, William Thorpe and Jean Callaghan, and presented by Pat Wilson, looks at singers, computers, singing teachers, cognitive load, visual feedback, acoustic analysis and motor skills learning.

To find out what happens to skills acquisition when a learner singer is shown a visual analogy of aspects of their voice on a computer screen in real time, the investigation assessed 56 participants (age range = 18-60 years) with skills ranging from confident trained singer to untrained non-singer. Two functionally different display screens offering visual feedback were used with two groups of participants; the third (control) group had a non-interactive screen display.

In a straightforward pre-test/intervention/post-test structure, demographic and acoustic data were obtained. Although the full results have yet to be analysed, early results indicate that:

* Offering knowledge of results (KR) during learning can worsen performance during the learning phase, but
* Subsequent performance may be improved.
* Singers with some training require less complex, contextualised feedback than novices, while
* Novice singers show greater improvement in their skills when given the more complex feedback.

This paper was accepted for presentation at APSCOM2 (2nd Conference of the Asia-Pacific Society for the Cognitive Science of Music) in Seoul, South Korea, 4th – 6th August 2005, and was generously supported by the 2005 AMPS Graduate Student International Conference Travel Assistance Scheme.

Mozart or Manson?: The Role of Preference in Music Listening for Pain Relief
Speaker: Laura Mitchell, Glasgow Caledonian University
Date: 2nd Sep

Abstract:

Research studies of 'audioanalgesia', the ability of music to affect pain perception, have significantly increased in number during the past two decades. The majority of these studies have used music selected by the experimenters for perceived relaxing qualities. Our own preferred type of music, however, may provide an emotionally engaging distraction more capable of reducing both the sensation of pain itself and the accompanying negative affective experience.

This talk will discuss a series of experimental studies which aimed to use rigorous, controlled methodology to fully investigate the effects of music listening on pain and to overcome methodological flaws and incomplete reporting in previous work. These studies provide evidence of the effectiveness of preferred music listening in distracting attention from pain and increasing feelings of control over the experience. A further survey of the music listening behaviour of chronic pain patients, and their perceptions of its usefulness, also suggests that the beneficial effects continue in longer-term pain.

Sinusoidal Frequency Estimation based on the Time Derivative of the STFT Phase Response
Speaker: David Gunawan, School of Electrical Engineering, UNSW
Date: 25th Nov

Abstract:

The estimation of sinusoidal parameters is a widely studied area and has been extensively employed in many audio applications. Sinusoidal modelling in particular has been widely used to represent the dominant harmonic components found in musical signals, and a major component of such modelling is the accurate estimation of sinusoidal parameters. This paper presents the Phase Derivative FFT (PDFFT), a computationally efficient algorithm for estimating the frequency of a sinusoid from the Short Time Fourier Transform (STFT). After obtaining an initial coarse estimate from the FFT of a given frame, the PDFFT refines the frequency estimate using only the time derivative of the phase response. The algorithm is derived and is shown to require only 4 multiplies per peak. Single frequencies in the presence of noise are resolved well, outperforming the commonly used Quadratically Interpolated FFT (QIFFT) method even with zero-padding. The algorithm is then used to separate two sinusoids in close frequency proximity that appear as a single peak in the magnitude spectrum.
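The abstract does not reproduce the paper's derivation or its 4-multiply formulation, but the underlying idea, refining a coarse peak-bin estimate using the rate of change of the STFT phase, can be sketched in its simplest discrete form: compare the phase of the peak bin across two frames offset by one sample. The function name and parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def phase_difference_freq(x, fs, n_fft=1024):
    """Refine a coarse FFT peak-bin frequency estimate using the change
    in STFT phase between two frames offset by one sample (a discrete
    approximation to the time derivative of the phase response)."""
    w = np.hanning(n_fft)
    X0 = np.fft.rfft(w * x[:n_fft])        # frame starting at sample 0
    X1 = np.fft.rfft(w * x[1:n_fft + 1])   # frame starting at sample 1
    k = np.argmax(np.abs(X0))              # coarse estimate: magnitude peak bin
    # Phase advance per sample at the peak bin; wrap its deviation from
    # the bin-centre advance (2*pi*k/n_fft) into (-pi, pi] before adding back.
    expected = 2 * np.pi * k / n_fft
    dphi = np.angle(X1[k]) - np.angle(X0[k])
    dphi = expected + np.angle(np.exp(1j * (dphi - expected)))
    return dphi * fs / (2 * np.pi)         # rad/sample -> Hz

# For a clean 440.3 Hz sinusoid sampled at 8 kHz, this recovers the
# frequency to well under one bin width (fs/n_fft = 7.8 Hz here).
fs = 8000.0
n = np.arange(1025)
x = np.sin(2 * np.pi * 440.3 * n / fs)
estimate = phase_difference_freq(x, fs)
```

Note that the paper's PDFFT operates directly on the derivative of the phase response to reach its 4-multiplies-per-peak cost; the sketch above uses the plain two-frame phase difference, which is the same principle without that optimisation.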