THE MUSIC COMPUTING LAB

MUSIC COMPUTING LAB MEETINGS



The Music Computing Lab has regular research meetings on Tuesdays at 3pm. The meeting format is flexible and includes research talks, seminars, demonstrations of prototypes, discussions of selected journal articles, and hands-on tutorials of tools and techniques.


Forthcoming Meetings

Regular Music Computing Research meetings continue every other Thursday at 3pm.

Previous Talks and Seminars

21 Feb 2013

Roundtable discussion of recent literature of interest
  • Influence of the tonality of Japanese Traditional music on Japanese approaches to Western music from classical to J-Pop.
  • Music as Narrative (Fred Everett Maus).
  • PSYSOUND3: Software for Acoustical and Psychoacoustical Analysis of Sound Recordings (Densil Cabrera, Sam Ferguson, and Emery Schubert) and related toolboxes.
  • David Huron, Sweet Anticipation.
  • Novice Collaboration in Solo and Accompaniment Improvisation (Hansen and Anderson).
  • Relation between Language Learning and Music Learning in Young Children.
  • Steven Mithen, The Singing Neanderthals.

6th May 2011

(Talks by Tom Collins and Vassilis Angelis as part of Music Postgraduate Research Day, 6th May 2011)

Discovering translational patterns in symbolic representations of music

Tom Collins
Typically, to become familiar with a piece, one studies/plays through the score and listens, gaining an appreciation of where and how material is reused. The literature on music information retrieval (MIR) contains several algorithmic approaches to this task, referred to as ‘intra-opus’ pattern discovery. Given a piece of music in a symbolic representation, the aim is to define and evaluate an algorithm that returns patterns occurring within the piece. Some potential applications for such an algorithm are: (1) a pattern discovery tool to aid music students; (2) comparing an algorithm’s discoveries with those of a music expert as a means of investigating human perception of music; (3) stylistic composition (the process of writing in the style of another composer or period) assisted by using the patterns/structure returned by a pattern discovery algorithm. The presentation will look at how my research has improved upon current pattern discovery algorithms.
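Collins's own algorithms are not reproduced here, but the core idea of 'intra-opus' translational pattern discovery can be illustrated with a minimal sketch: represent notes as (onset, pitch) points and look for translation vectors that map many points onto other points in the piece. The point set, the vector-grouping approach, and the size threshold below are illustrative assumptions, not the method evaluated in the talk.

```python
from collections import defaultdict

def translatable_patterns(points, min_size=2):
    """Group points by the translation vectors that map them onto
    other points in the piece (in the spirit of SIA-style discovery).
    Returns {vector: [points]} for vectors shared by >= min_size points."""
    by_vector = defaultdict(list)
    pts = sorted(points)
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            vec = (q[0] - p[0], q[1] - p[1])  # (time shift, pitch shift)
            by_vector[vec].append(p)
    return {v: ps for v, ps in by_vector.items() if len(ps) >= min_size}

# A motif (C-E-G at times 0, 1, 2) repeated two beats later, transposed up 2:
notes = [(0, 60), (1, 64), (2, 67), (2, 62), (3, 66), (4, 69)]
patterns = translatable_patterns(notes, min_size=3)
# The vector (2, 2) maps the whole first motif onto the second.
print(patterns[(2, 2)])
```

A pattern discovery tool of the kind mentioned in application (1) would then rank such vectors, e.g. by how many points they cover.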

A preliminary investigation of a computational model of rhythm perception using polyrhythms as stimuli

Vassilis Angelis
Different models have been developed to explain how humans perceive rhythm in music. Here we concentrate on a computational model that employs a neurobiological approach, according to which aspects of rhythm perception could be directly grounded in the dynamics of neural activity (Large et al., 2010). To date, testing of this model has been done mainly by stimulating it with metrical stimuli. The outputs of the model have been used to provide potential explanations of certain behaviours encountered in rhythm perception, such as the tendency of human tapping to precede sequence tones by a few tens of milliseconds (Large, 2008). In this paper we present a preliminary investigation of this model using polyrhythmic stimuli, the assumptions involved in carrying out this investigation, and the obtained results. To explore how well the computational model matches the range of human tapping behaviour in polyrhythms, we used as a benchmark an experiment by Handel & Oshinsky (1981) on human subjects and polyrhythms, in which subjects were asked to tap along with the polyrhythmic stimuli, implicitly leaving them the choice of tapping either one of the regular streams, the cross-rhythm, or any other way.
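As a side note on the stimuli: the onsets of an a-against-b polyrhythm can be laid out exactly with rational arithmetic. This is only an illustration of how such stimuli are constructed (in the spirit of the Handel & Oshinsky materials), not part of the model under investigation.

```python
from fractions import Fraction

def polyrhythm(a, b, cycle=1):
    """Onset times (as fractions of one cycle) for an a-against-b polyrhythm."""
    stream_a = [Fraction(i, a) * cycle for i in range(a)]
    stream_b = [Fraction(j, b) * cycle for j in range(b)]
    coincident = sorted(set(stream_a) & set(stream_b))
    return stream_a, stream_b, coincident

a3, b2, shared = polyrhythm(3, 2)
# In a 3:2 polyrhythm only the downbeat coincides:
print(shared)  # [Fraction(0, 1)]
```

A subject tapping "either one of the regular streams" corresponds to reproducing `stream_a` or `stream_b`; tapping the cross-rhythm corresponds to the merged, sorted union of the two.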

28th April 2011

Buzzing to Play: Lessons Learned From an In the Wild Study of Real-time Vibrotactile Feedback

Janet van der Linden (The Open University)
Rose Johnson (The Open University)
Jon Bird (The Open University)
Yvonne Rogers (The Open University)
Erwin Schoonderwaldt (Institute for Music Physiology and Musicians' Medicine)
Abstract
Vibrotactile feedback offers much potential for facilitating and accelerating how people learn sensory-motor skills that typically take hundreds of hours to learn, such as learning to play a musical instrument, skiing or swimming. However, there is little evidence of this benefit materializing outside of research lab settings. We describe the findings of an in-the-wild study that explored how to integrate vibrotactile feedback into a real-world teaching setting. The focus of the study was on exploring how children of different ages, learning to play the violin, can use real-time vibrotactile feedback. Many of the findings were unexpected, showing how students and their teachers appropriated the technology in creative ways. We present some ‘lessons learned’ that are also applicable to other training settings, emphasizing the need to understand how vibrotactile feedback can switch between being foregrounded and backgrounded depending on the demands of the task, the teacher’s role in making it work and when feedback is most relevant and useful. Finally, we discuss how vibrotactile feedback can provide a new language for talking about the skill being learned that may also play an instrumental role in enhancing learning.
(Hosted by HCI seminar series)

6th April 2011

SuperCollider Workshop.

Gerard Roma (visiting researcher from Universitat Pompeu Fabra, Barcelona) kindly ran a superb and well-received hands-on workshop and theoretical overview of SuperCollider.
http://supercollider.sourceforge.net/

30th March 2011

Kindly co-hosted by the Human-Centred Computing Seminar Series
Dan Stowell (Queen Mary University of London)

Developing and evaluating systems for cyber-beatboxing

Abstract
Most of us make expressive use of our voice timbre in everyday conversation; and beatboxers and other extended-technique vocal performers take timbre modulations to another level. Yet vocal timbre is an under-utilised dimension in musical interfaces, perhaps because of difficulties in analysing and mapping timbre. In this talk Dan will discuss his research on vocal timbre interfaces, considering different technical strategies to achieve effective real-time mappings useful for on-stage performance.
Evaluating such systems is crucial for understanding how they succeed and fail, and how they might be adopted into performers' practice, yet evaluation through standard task-focussed experiments is less useful for expressive musical systems. Dan will discuss the development of a qualitative approach used to explore how beatboxers understand a system after interacting with it.

29th March 2011

Gerard Roma, visiting student from the Universitat Pompeu Fabra in Barcelona, will give us an informal presentation about his work. He is a PhD student in their Music Technology Group, working on sound description. Feel free to bring others along who may be interested.

15th March 2011

Tom Collins will give a short talk about a model for stylistic composition and its evaluation. There are two related papers:
  • Pearce, M.T., and G.A. Wiggins, 'Evaluating cognitive models of musical composition', in eds. A. Cardoso and G.A. Wiggins, Proceedings of the Fourth International Joint Workshop on Computational Creativity, (Goldsmiths, University of London, 2007), 73-80.
  • Collins, David, 'A synthesis process model of creative thinking in music composition', in Psychology of Music 33(2) (2005), 193-216.

1st March 2011

Anna Xambó and Rose Johnson giving a presentation on the TEI conference (Tangible, Embedded and Embodied Interaction) they attended in Madeira, including an overview of their favourite papers and demos and the studios they attended on the first day.

6th December 2010

Tom Collins and Vassilis Angelis giving an informal presentation on Ed Large's theory of meter induction, and a general discussion of Pulse and Meter as Neural Resonance by Edward W. Large and Joel S. Snyder. If time permits, also a discussion of 'Love is in the air': Effects of songs with romantic lyrics on compliance with a courtship request by Nicolas Guéguen, Céline Jacob and Lubomir Lamy.

30th November 2010

We will meet at 1 pm in the Pervasive Lab, where Anna Xambó will give us an informal demo of an early prototype of her TOUCHtr4ck democratic collaborative tool for creating music.

23rd November 2010

Rose Johnson will be showing us around her lab to take a look at some of her prototypes.

16th November 2010

Adam Linson giving a presentation entitled: A Plea for Unusability.

12th October 2010

Group review of journals and conferences relevant to Music Computing.

7th September 2010

Meeting to discuss and share our experiences over the Summer presenting at various conferences including ISMIR, SMC, ICMPC, CHI.

13th July 2010

Meeting to discuss changes to the CRC Music Computing web page. This page, along with all the HCI pages, will be updated shortly so this is an opportunity for us to make sure the information here is up-to-date, and accurately reflects what we're doing.

6th July 2010

Tom Collins leading a reading group discussion on Parsing of melody, Frankland and Cohen, 2004.

29th June 2010

Stefan Kreitmayer giving a presentation on Processing:

Processing is an open source programming language and environment for people who want to create images, animations, and interactions. Initially developed to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context, Processing has also evolved into a tool for generating finished professional work. Today, tens of thousands of students, artists, designers, researchers, and hobbyists use Processing for learning, prototyping, and production.

22 June 2010

Vassilis Angelis leading a reading group discussion on:

Ed Large's "Resonating to Rhythm" (2008) essay. Part of Large's work concerns computational models that simulate rhythm perception. Here are some of his concluding thoughts about aspects that influence rhythm perception and how those can be implemented in computational models:

"It appears that melodic patterns can contribute to a listener’s sense of meter and that listeners also respond differentially to various combinations of melodic and temporal accents (Hannon et al., 2004; Jones & Pfordresher, 1997) especially if the relative salience of different accent types are well calibrated (Ellis & Jones, in press; Windsor, 1993).

"If we accept that melodic and other musical accents can affect meter, then the significant theoretical question arises of how such information couples into a resonant system. Is it sufficient to consider accents arising from different features (for example, intensity, duration, pitch, harmony, and timbre) as combining into a single scalar value that determines the strength of each stimulus event? Probably not. The flip side of this coin is the effect of pulse and meter on the perception of individual musical events. Recall Zuckerkandl’s (1956) view of meter as a series of waves, away from one downbeat and towards the next. As such, meter is an active force; each tone is imbued with a special rhythmic quality from its place in the cycle of the wave, from 'the direction of its kinetic impulse.' It is, perhaps, a start to show that attention is differently allocated in time; however, it seems clear that future work must consider these issues."

15th June 2010

Andrew Milne leading a reading group discussion on:

Toward a Universal Law of Generalization for Psychological Science, Roger N. Shepard, Science, New Series, Vol. 237, No. 4820 (Sep. 11, 1987), pp. 1317-1323.

11th May 2010

Katie Wilkie presents:

Analysis of Conceptual Metaphors to Evaluate Music Interaction Designs

Katie Wilkie, Simon Holland, Paul Mulholland
Centre for Research in Computing
The Open University

Abstract
In domains such as music, technical understanding of the parameters, processes and interactions involved in defining and analysing the structure of artifacts is often restricted to domain experts. Consequently, interaction designs to communicate and manipulate musical information are often difficult to use for those without such specialised knowledge. The present work explores how this problem might be addressed by drawing explicitly on domain-specific conceptual metaphors in the design of music interactions, for example creating and manipulating harmonic progressions.

Conceptual metaphors are used to map image schemas, structures which are rooted in embodied sensory-motor experiences of space, forces and interactions with other objects, onto potentially unrelated, abstract domains. These conceptual metaphors are commonly, though not exclusively, identified through linguistic expressions in discourse and texts. Building on existing theoretical work, we subscribe to the view that human understanding in music and other domains is grounded in conceptual metaphors based on image schemas. We hypothesise that if we can identify the conceptual metaphors used by music experts to structure their understanding of specific domain concepts such as pitch, melody and harmony, then we may be able to use these conceptual metaphors to evaluate existing music interaction designs in terms of how they afford or inhibit their expression. We further hypothesise that it may be possible to use the results of these evaluations to inform the design of music interactions such that they may better support musicians’ understanding of the domain concepts. In this way, it may be possible for users of such interaction designs to exploit the pre-existing embodied knowledge shared by all users, and to lessen the requirement for specialist domain knowledge, formal reasoning, and memorisation of technical terms.

Recently, the conceptual metaphor approach has been applied to areas including the analysis of music theory, the improvement of user interface design and, to a limited extent, music interaction designs. However, to the best of our knowledge, the present work is the first attempt to use the conceptual metaphors elicited from a dialogue between musicians as a means to evaluate existing music interaction designs focusing on the communication of harmonic, melodic and structural relationships.

27th April 2010

A jam session in the Music Research Studio—all instruments/devices and abilities welcome—all types of music or noise applicable!

16th March 2010

Vassilis Angelis presenting:

Digital Mirrors is an interactive installation designed for body-centered video performances. It extends the idea of using a mirror as a metaphor for reflective investigation, space alteration and fragmentation. The technical implementation of the installation employs a range of software (e.g. Isadora, Arduino IDE) and hardware (e.g. Wii controller, Arduino board) technologies, which will be the main focus of the presentation. A short reference to the theoretical context of Media Arts will be presented at the beginning.

2nd March 2010

Andrew Milne and Tom Collins giving An Introduction to MATLAB.

22nd February 2010

Andrew Milne giving presentation at Music Research Seminar:

Tonal music theory—a psychoacoustical explanation?

From the seventeenth century to the present day, tonal harmonic music has had a number of invariant properties: specific chord progressions (cadences) that induce a sense of closure; the asymmetrical privileging of certain progressions; the degree of fit between pairs of successively played tones or chords; the privileging of tertial harmony and the major and minor scales.

The most widely accepted explanation (e.g., Bharucha (1987), Krumhansl (1990), Lerdahl (2001)) has been that this is due to a process of enculturation: frequently occurring musical patterns are learned by listeners, some of whom become composers and replicate the same patterns, which go on to influence the next “generation” of composers, and so on. Some contemporary researchers (e.g., Parncutt, Milne (2009), Large (in press)) have argued that these are circular arguments, and have proposed various psychoacoustic, or neural, processes and constraints that shape tonal harmonic music into the form it has actually taken.

In this presentation, I discuss some of the broader music-theoretical implications of my recently developed psychoacoustic model of harmonic cadences (which has had encouraging experimental support (Milne, 2009)). The core of the model is two different psychoacoustically derived measures of pitch-based distance between chords (one modelling “fit”, the other “voice-leading distance”), and the interaction of these two distances to model the feelings of activity, expectation, and resolution induced by certain chord progressions (notably cadences). When a played pair of chords has a poorer fit than an un-played comparison pair that is also voice-leading-close, it is reasonable to assume the played pair is heard as an alteration of the comparison pair. This is similar to how a harmonically dissonant interval (e.g., the tritone B–F) is likely to be heard as an alteration of a voice-leading-close consonant interval (e.g., the perfect fourth B–E, or the major third C–E).

I explore the extent to which the model can predict the familiar tonal cadences described in music theory (including those containing tritone substitutions), and the asymmetries that are so characteristic of tonal harmony. I also compare and contrast the model with Riemann’s functional theory, and show how it may be able to shed light upon the privileged status of the major and minor scales (over the modes), and the dependence of tonality upon triadic harmony.

9th February 2010

Andrew Milne giving the presentation Microtonal Music Theory:

Microtonality is a huge and diverse area—I will be focussing on the use of microtonal well-formed scales that embed numerous major and minor triads. Such scales cannot be played in any conventional Western tuning (so they really are novel and different), but they also generalise many of the most important properties of the standard Western diatonic (major) scale (so they may provide a fertile resource for musical experimentation).

I'll also demonstrate a Thummer—a button-lattice MIDI controller that makes the playing of microtonal well-formed scales as straightforward as playing standard Western scales.

15th December 2009:

Katie Wilkie and Tom Collins giving the following presentations:

Katie Wilkie

Technical understanding of the processes involved in creating and analysing artifacts in abstract domains such as music is often restricted to domain experts with specialist knowledge. Consequently, those who do not have this specialist knowledge often find the user interfaces of software designed to convey information about the structure of these artifacts difficult to use. Our work explores how we can address this problem in music interaction designs by drawing on domain-specific conceptual metaphors.

Conceptual metaphors, often identified through linguistic expressions, are used to map prior sensory-motor experiences onto abstract domains. This process enables us to understand complex concepts such as pitch, tempo, rhythm and harmonic progression in terms of embodied experiences of space, force and interactions with other bodies in our environment.

We hypothesise that if we can identify the conceptual metaphors used by music experts to structure their understanding of musical concepts, then we may be able to systematically improve music interaction designs to better reflect these conceptual metaphors and lessen the requirement for specialist domain knowledge. Conceptual metaphor theory has been applied to a number of domains including music theory and, separately, user interface design. However, to the best of our knowledge this work is the first to combine these distinct bodies of research.

Tom Collins

A metric for evaluating the creativity of a music-generating system is presented, the objective being to generate mazurka-style music that inherits salient patterns from an original excerpt by Frédéric Chopin. The metric acts as a filter within our overall system, causing rejection of generated passages that do not inherit salient patterns, until a generated passage survives. Over fifty iterations, the mean number of generations required until survival was 12.7, with standard deviation 13.2. In the interests of clarity and replicability, the system is described with reference to specific excerpts of music. Four concepts—Markov modelling for generation, pattern discovery, pattern quantification, and statistical testing—are presented quite distinctly, so that the reader might adopt (or ignore) each concept as they wish.
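The paper's actual generation and filtering machinery is not reproduced here, but the generate-and-reject loop it describes can be sketched with a toy first-order Markov chain and a crude stand-in for the salient-pattern filter. All names, the training sequence and the required pattern below are illustrative assumptions.

```python
import random

def train_markov(seq):
    """Build a first-order Markov transition table from a training sequence."""
    table = {}
    for a, b in zip(seq, seq[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng):
    """Random walk through the transition table, starting from `start`."""
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(table[out[-1]]))
    return out

def generate_until(table, start, length, contains, rng, max_tries=1000):
    """Regenerate until a passage inherits the required pattern
    (a crude stand-in for the paper's salient-pattern filter)."""
    for attempt in range(1, max_tries + 1):
        passage = generate(table, start, length, rng)
        if any(passage[i:i + len(contains)] == contains
               for i in range(len(passage))):
            return passage, attempt
    raise RuntimeError("no surviving passage within max_tries")

rng = random.Random(0)
training = [60, 62, 64, 62, 60, 64, 62, 60]   # toy MIDI-pitch sequence
table = train_markov(training)
passage, tries = generate_until(table, 60, 8, [60, 62, 64], rng)
```

The number of attempts before survival (`tries`) plays the role of the generation counts reported in the abstract; here it is driven entirely by the toy data, so the figures 12.7 and 13.2 should not be expected.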

7–9th December 2009:

Entrainment Seminar in the Department of Music.

1st December 2009

Rose Johnson will be talking about her work with the motion capture study for violin players.

24th November 2009

Vassilis Angelis on the use of a computational system for real-time (live) interactive musical performance that extends musical creativity by adding elements to traditional instrumental performances. The system uses gestural, sensor, spatial and other technological modes to capture elements of performance, which are then mapped to create an additional performing layer. The motivation for this research is the desire to approach the creation of musical performances in a new way, to re-think traditional approaches to composition and performance, and to give a single performer a series of control parameters that create a new performing environment in which they cooperate with a computational system.

16–17th November 2009

Audience, Listening and Participation interdisciplinary workshop in the Department of Music.

10th November 2009

Andrew Milne presenting on the use of metrics and Gaussian smoothing (of discrete data) in the modelling of music perception.
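The smoothing step can be illustrated with a minimal, self-contained sketch (not Milne's actual model): discrete values are smeared into Gaussian bumps over a binned axis, after which a standard metric such as cosine similarity can compare profiles that would share no bins at all as raw deltas. The bin count and sigma below are arbitrary illustrative choices.

```python
import math

def gaussian_smooth(values, size, sigma=1.0):
    """Spread each discrete value into a Gaussian bump over `size` bins,
    so that nearby-but-unequal values still register as similar."""
    profile = [0.0] * size
    for v in values:
        for i in range(size):
            profile[i] += math.exp(-((i - v) ** 2) / (2 * sigma ** 2))
    return profile

def cosine(u, v):
    """Cosine similarity between two profiles (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Two pitch sets one bin apart: zero overlap as raw deltas,
# but clearly similar once smoothed.
p1 = gaussian_smooth([10, 20, 30], size=40, sigma=2.0)
p2 = gaussian_smooth([11, 21, 31], size=40, sigma=2.0)
print(round(cosine(p1, p2), 3))
```

The wider sigma is set, the more tolerant the metric becomes of small deviations; sigma is the knob a perception model would fit to behavioural data.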

3rd November 2009

Tom Collins leading a reading group on viewpoints.