Multimodal Listening as Technologically-Mediated Embodied Musicking

 

Lauren Hayes, Xin Luo, Kathryn R. Pulling, Assegid Kidane, Gabriella

Isaac, Dominic Bonelli, Rhiannon Nabours, Kiana Gerard.

 

 

Abstract

 

This paper discusses an experiential approach to understanding listening as a multimodal phenomenon that draws upon three disparate but related research projects: the creative music practice of the first author, quantitative evaluation of vibrotactile technology within speech and hearing science aimed at improving music perception for cochlear implant users, and a qualitative study involving the musical experiences of people who routinely wear a variety of assistive hearing technologies. It proposes that listening is an active, embodied process, taking place within the diverse sensorimotor, sociocultural, and aesthetic relationships between listeners and their worlds, and explores this assertion through the lens of creative practice research. These ideas are developed via observations of musical listening experiences that are technologically-mediated in ways which specifically incorporate and emphasize both the sonic and the tactile, but which do not aim to simply substitute one sensory modality for another. This project describes how creative practice research within the arts can offer ways of knowing and sense-making that go beyond approaches that comfortably fit into quantitative, qualitative, or conceptual research methods. Moreover, it is intentionally interdisciplinary in its methodologies with the goal of illuminating how research approaches that work in tension, often with incommensurate agendas and divergent understandings of a common topic, can be productively inventive.

 

 

Acknowledgements

 

It is typical to include any acknowledgments at the end of an academic paper. However, I have chosen to preface this article with this short preamble in order to situate the contents of this work and the contributions that have made it possible. These acknowledgements, in this sense, “are both an introduction to an intellectual product and a reconstruction of the external contributions which have gone towards its realization” (Ben-Ari 1987, 65). This project, which explores technologically-mediated listening through sound and touch, incorporates methodologies from musical creative practice research (CPR), as well as qualitative and quantitative methods involving interviews, surveys, and experiments in the field of speech and hearing science. The main collaborations involved stem from an encounter at a ‘research mixer’ at Arizona State University (ASU) between myself (Hayes) and hearing scientist Xin Luo. Luo’s expertise in speech and music perception is uniquely grounded in his extensive background working with Mandarin-speaking cochlear implant (CI) users, a context in which the complexities of tonal patterns demand careful discernment when designing perceptual studies related to speech (see, for example, Luo, Fu, Wu & Hsu 2009).

 

Kathryn Pulling was a research specialist within Luo’s Auditory Implant Lab at the College of Health Solutions, ASU, who helped facilitate and run many of the tests and interviews. Assegid Kidane is an engineer within the School of Arts, Media and Engineering, ASU. He designed, prototyped, and fabricated custom circuit boards for the latest version of a modular haptic wearable as part of this project. While a student at ASU, Gabriella Isaac maintained and developed digital signal processing solutions for translating audio into control data for haptics. She also conducted many of the interviews in the study. Dominic Bonelli maintained the haptic technology and also developed his own software strategies for exploring both speech and music through haptic sensation in partial fulfillment of his undergraduate honors project. Rhiannon Nabours worked on prototyping modular, haptic wearables as part of her undergraduate degree in Digital Culture. Kiana Gerard quantitatively evaluated the vibrotactile technology combined with CI for music perception while taking the Advanced Research Experience Seminar as an undergraduate student in Speech and Hearing Science. Although the first-person point of view will permeate much of what follows, this project would not have been possible without the contributions of my co-authors.

 

Introduction

 

Listening, construed as a perceptual human activity, can be mediated in a variety of ways. The auditory experience of space, for example, is conditioned not only by the physical materials, structures, and layout of a space that determine its physical acoustic properties, but also by what has been described as “perceptual acoustics, and cultural acoustics” (Blesser & Salter 2009, 230). Such configurations mark out distinct categories, whose boundaries can readily be obfuscated, between the scientific, the cultural, and the phenomenological experience of space through sound. Musical listening, similarly, is mediated through a variety of technologies, including the technologies of what Anahid Kassabian describes as “ubiquitous music” (Kassabian 2013, xii): music that is embedded within everyday living. Such technologies enable sound and music to permeate the spaces we occupy, and to follow us as we move through our environments, often surreptitiously. For Kassabian, this condition is not neutral or necessarily benign; her point is that these forms of listening, which can render listeners passive or even be non-consensual, have not been taken seriously enough in terms of how they modulate attention and affect. Music, here, is understood as activity rather than solely as a recorded object that is then mediated: recorded material is by no means a prerequisite for music. Nevertheless, these mediating technologies include those that play back recorded sonic material (wax cylinders, record players, portable cassette players, MP3 players, and mobile phones) as well as the apparatuses that enable analogue or digital signals to be made audible, ranging from acoustic horns, loudspeakers, headphones, and earbuds, to assistive listening devices (ALDs) such as inductive loop systems, and medical devices including hearing aids and, more recently, cochlear implants. Whether on the fetishization of and nostalgia for the medium in the case of analogue equipment (Stuhl 2014), or on the limitations of music perception for CI wearers (Limb & Roy 2014), much has been written regarding the multifarious ways in which technology mediates listening.

 

In 2005, Eric Clarke offered one of the first attempts to synthesize a new theory of music perception (or rather, of listening) that was explicated through an approach adapted from the ecological psychology of James J. Gibson (1979). For Clarke, listening is an active, embodied process in which meaning is elicited within the opportunities for action that sounds (and, by extension, music) afford (Clarke 2005). This relies neither on a semiotic analysis of music, nor solely on cultural factors. It is the highly structured perceptual capacities of the listener, coupled with their environment, that give rise to such possibilities. For Clarke, this mutuality means that “perception must be understood as a relationship between environmentally available information and the capacities, sensitivities, and interests of a perceiver” (Clarke 2005, 91). Clarke’s work derives heavily from Gibson’s notion of affordances, a concept developed to explain such relational potentials that exist between animals (or indeed humans) and their environments (Gibson 1979). Affordances may ‘furnish’ the perceiver in a positive or negative manner, depending on which action is taken, but regardless, they are always perceived relative to the animal’s capabilities. An acoustic piano affords playing but also affords burning; in the latter case, the effects may be harmful or actually life-sustaining, in extreme cold, for example.

 

The discourse surrounding the idea of musical affordances is complex and at times conflicting (see Menin & Schiavio 2012). Moreover, accounts that integrate ecological perspectives with approaches from enactive cognition may be more helpful in acknowledging the diachronic role of social interaction and group dynamics within musical activity (see, for example, Hayes & Loaiza 2022; Loaiza 2016). Nevertheless, Clarke’s proposal brings attention to the task of addressing and overcoming “the interrelated dichotomies of subjects/objects, passive/active listening and autonomy/heteronomy of musical experience” (Menin & Schiavio 2012, 205). It allows for listening to be construed within the operative activities of musicking (Small 1998), which encompass both musical listening and music making, along with a host of other typically under-acknowledged and often obscured care and maintenance-related activities such as the cleaning of music venues, for example (see Small 1998 for further discussion). Moreover, perception is direct (Gibson 1979), requiring no internal ‘computational’ cognitive processing, and is the result of moving within and through our environments: we orient our heads towards a sound, the proximity of sounds appears to change as we walk, and in turn, our movements are guided (perceptually) by what we hear. The idea of an ecological approach to listening also helps to reveal apparently ‘neutral’ modes of listening as culturally conditioned, such as the silent and stationary listening enforced through unspoken etiquette within the western classical music concert format (Small 1998).

 

Listening as ‘doing’ music, then, is active, embodied, multimodal, situated, relational, specific, social, and cultural. In what follows, I will argue that thinking through and, moreover, facilitating forms of technologically-mediated listening should also engage with these themes. This is crucial given the ubiquity of such mediating technologies, which are often so seamlessly embedded in our environments that what they actually do for the listener is rarely called into question. An innocuous example is the masking role performed by loudspeakers in restaurants. Marie Thompson discusses more sinister applications of both individual and collective control via a specific class of technologies that she describes as “sonic weapons… [that] seek to diminish their affective power: the capacity to, and ways in which a composite body can, act and be acted upon” (Thompson 2017, 74). I make the case for embodied and multi-sensory techno-musical approaches to listening (that is, understanding listening by the very doing of it) by discussing and drawing connections between three disparate but related research projects: firstly, my own performance practice involving live electronic musical improvisation and haptic technology; secondly, a collaboration with Luo’s Auditory Implant Lab, in which the same haptic hardware was expanded within a lab setting to evaluate its potential for improving music perception for CI users through a series of quantitative scientific experiments in which vibrotactile feedback was correlated with melodic contour; and thirdly, work conducted adjacent to the lab, in which I qualitatively examined whether and how the musical listening needs and desires of people who routinely wear a variety of assistive hearing technologies, including CIs, might manifest through audio-haptic technologies, using a series of interviews, rapid prototyping sessions, and follow-up surveys. During this latter phase, the same technology was again repurposed to co-create, explore, and generate mediated listening experiences with participants. Yet here, rather than seeking to measure ‘improvement’ of musical perception, I was (perhaps at odds with the original ethos of the collaboration) fundamentally interested in uncovering what musical listening might encompass for the participants through these more informal and socially-situated scenarios.

 

Considered as a whole, this project iterates, often non-linearly, through various timelines, formations, publics, and collaborations. The goal of this work is not to offer a rigid taxonomy of audio-haptic aesthetic listening experiences (see Hayes & Rajko 2017 for further discussion), nor is it to develop innovative and marketable medical apparatuses. Indeed, as Mack Hagood, writing about disability and tinnitus, emphasizes, media technologies can help us to imagine “more liberating forms of biomediation” (Hagood 2017, 327) if considered in ways other than from techno-solutionist perspectives. Advocating CPR methods to produce new (or bring attention to neglected) ways of knowing and understanding, and orienting instead towards “the development of fields and initiatives in which new kinds of autonomy are defended against a reduction of research to questions of accountability or innovation” (Barry, Born, and Weszkalnys 2008, 23), this work explores what it means to do multimodal listening that is technologically-mediated from a variety of perspectives. It asks: what are the ways in which listening can be understood as highly embodied, and therefore uniquely specific to our particular physiologies, aesthetics, behaviors, cultural milieux, ways of living, and so on? And from this, how can technology serve to facilitate imagined, desired, unknowable, or as-yet-unrealized experiences?

 

The project is deliberately pluralistic in its approach, juxtaposing ideas and methods that often sit uncomfortably together; the aim is not a synthesis of opposing positions, but an opening up of possibilities and understandings across domains of knowledge (a point developed below in relation to modes of interdisciplinarity). For example, while the concept of affordances was coined by Gibson, it was also claimed and repurposed within the fields of interaction design and human-computer interaction (HCI) through the work of, for example, design scholars Donald Norman (1988), who uses it to describe only those opportunities for action that are actually perceived by a user, and William Gaver (1991), who further delineates various highly specific categories of affordances. The role of the creative practitioner is not necessarily to untangle these differences, but rather to explore the possibilities that such variations can offer. While the collaboration with speech and hearing researchers was motivated by questions of improving the musical listening experiences of people who wear CIs, it became evident that this project sits within a wider domain: how understanding listening more broadly can be informed by applied techno-musical CPR which engages those whose hearing is habitually mediated, conditioned, and often clinically investigated in relation to technology. As Joseph Straus notes in his discussion of “disablist hearing” (Straus 2011, 160), a phenomenon that he defines as culturally and socially created and shaped, “the range of human hearing is wider than generally recognized; the boundary between normal and abnormal hearing is a construction, a fiction. We cannot begin to dismantle that wall until we can define better what lies on either side of it” (Straus 2011, 180).

 

Proliferating a Personal History of Audio-Haptic Listening

 

My own history of exploring auditory-tactile relationships through haptic technology, which began in 2009, emerged directly out of my phenomenological experiences of learning to design, build, and perform with hybrid instruments. These have included investigations working within acoustic, analogue, electronic, and digital sonic domains[1]. Rather than coming to haptics through the lure of what David Parisi has described as “fetishistic claims of novelty mobilized around haptic interfaces” (Parisi 2018, 34), a promise that is remade every few years, my work has sought out potentially overlooked possibilities in the technology rather than pursuing technical development for its own sake. I embarked on a trajectory that would sprawl across more than a decade of CPR, beginning with my very concrete and jarring experiences of performing live electronic music for the first time in 2007[2]. In this early live electronic performance work, it was the profound sensation of an altered multimodal listening (specifically, the lack of resonating instrumental feedback) that initiated my inquiry. Having spent over twenty years prior to that using the vibrational feedback felt through my own body and the body of an instrument (predominantly the piano, but also the guitar, and through singing), I had developed a nuanced sensitivity for listening involving touch, which I used continuously during performance. This, of course, is quotidian for all musicians who sing or play acoustic instruments. Yet I was only fully cognizant of this “habituated background of bodily experience” (Paterson 2007, 21) when, with electronic instruments, it was largely absent. Certainly, loudspeakers vibrate as they produce sound. However, they are not typically embedded within hybrid instruments[3], and even those which are distal to the performer only offer minimal and indirect tactile feedback.

 

The first version of a haptic wearable that I developed (Hayes 2011) used an Arduino[4], an open-source electronics platform, and eccentric rotating mass (ERM) vibration motors, the same as are deployed inside pagers and mobile phones to provide haptic alerts. The ERM motors were attached to a glove that I wore on my left hand. This was my earliest attempt to develop a more embodied[5] relationship to computer-based instruments, one in which I might experience the digital sound that I was creating and transforming during performance not only audibly through loudspeakers, but also via the haptic sensory modality, through vibration applied directly to my skin. While the origins of modern haptic research can be traced to functional military and communication implementations (Parisi 2018), prior related research in the fields of computer music and musical HCI has explored haptic feedback typologies for gestural performance devices (Rovan & Hayward 2000) and aesthetic approaches to tactile composition (Gunther & O’Modhrain 2003). My own work provided an account specifically from the musician’s perspective. My early explorations used simple and intuitive mappings between sound and sensation: for example, a measure of overall amplitude could be correlated with vibrational intensity (Hayes 2013). This felt direct, immediate, and private. And while there were limitless mapping possibilities to explore, this rudimentary starting point provided enhanced and meaningful relations between myself and the instrument. I could more readily feel-hear or, more simply put, listen to what I was doing, and what I was creating.
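
As a concrete illustration of this kind of amplitude-to-intensity mapping, a minimal sketch follows. It computes a smoothed RMS measure per audio block and scales it to an 8-bit drive level for an ERM motor; the block size, smoothing constant, and output range here are illustrative assumptions rather than the parameters of the system described above.

```python
# A minimal sketch of an amplitude-to-vibration mapping (illustrative
# parameters only, not those of the original system).
import numpy as np

BLOCK_SIZE = 512   # samples per analysis block (assumed)
SMOOTHING = 0.9    # one-pole smoothing to avoid jittery motor output

def amplitude_to_pwm(block: np.ndarray, prev_env: float) -> tuple[int, float]:
    """Map the RMS amplitude of one audio block to an 8-bit motor drive level."""
    rms = float(np.sqrt(np.mean(block ** 2)))    # overall amplitude measure
    env = SMOOTHING * prev_env + (1.0 - SMOOTHING) * rms
    duty = int(np.clip(env, 0.0, 1.0) * 255)     # ERM intensity, 0-255
    return duty, env

# Example: a short burst of noise produces a rising then steadying drive level.
env = 0.0
for block in np.random.uniform(-0.5, 0.5, (4, BLOCK_SIZE)):
    duty, env = amplitude_to_pwm(block, env)
```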

 

Figure 1. Members of McFall’s Chamber performing with an early prototype of the wrist-worn haptic wearables at Heriot Toun Studio, Scotland, 2013. These were modified by some of the performers and then worn around the ankle after they experienced interference with bowing sensitivity.

 

Norman’s (1988) definition of affordances (those that are readily perceived[6]) is useful here to consider how, through a reorientation of perspective, new opportunities for action (in this context, uses for the audio-haptic listening device) might appear. It was only through collaborative exploration with other musicians that ideas emerged about, for example, how to wear the device, given that physical feedback is felt in a variety of ways depending on who is playing. Wrist-worn haptics, for instance, were found to be overpowered by acoustic vibration in the case of percussion, and string players similarly found this positioning to interfere with bowing technique (see Figure 1). Here it is not solely the unique physiology of each performer, but also how this is coupled with their instrument, that determines what the technology affords them. Later, following Eric Gunther and Sile O’Modhrain’s (2003) research into tactile aesthetics within composition, I built on the embodied knowledge fostered through my own performance practice to develop audio-haptic installations where audiences could listen to a piece of music through both the auditory and haptic modalities[7]. This required experimenting with the ways in which the acoustic as well as the affective aspects of the music could be explicitly augmented, provoked, diminished, mimicked, or represented through physical sensation. One of the most recent developments in this work involved creating new aesthetic experiences by augmenting my own live electronic musical improvising with the real-time performance of low-frequency haptic sensation, felt as vibration through audience seating risers (Hayes & Rajko 2017). Each incremental development was reciprocally informed by the responses of participants and audiences, which were always richly varied and often highly specific.

 

A further instructive expansion of this personal history of audio-haptic listening from the CPR perspective was facilitating a series of workshops where people could learn about and experiment with the technology involved. The workshop can be a creative milieu that facilitates not only novel and experimental experiences for participants, but can also function as a space of ideation and learning for those who coordinate and implement it. I have hosted numerous and varied music-related workshops over the last fifteen years; while the communities that I have worked with range from professional musicians, to babies, to people with profound and complex needs (Hayes 2015), in each case the workshop has the potential to be “an interruption in time that disrupts and shatters prior ways of making sense of the world; it is a calling for new modes of experience and different forms of judgement” (Higgins 2008, 334). Workshopping three prototypes of a more sophisticated audio-haptic system (using Bela[8] microcomputers within a wireless network) was a useful way to observe responses, iterate variations, and explore new potential listening scenarios in community. A total of six workshops were conducted at a variety of semi-public venues[9]. The latter two workshops were in collaboration with dance, somatics, and technology scholar Jessica Rajko (see Hayes & Rajko 2017). Participants experimented with listening to recorded music as well as to their own playing or singing using both wearable haptic devices and larger tactile transducers attached to clothing, or placed on their bodies, the floor, beanbags, or other furniture. A variety of different models that would transduce sound into sensation (via real-time software analyses of musical material) were explored; a sketch of how such analysis data might travel across the network is given below.
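
By way of illustration, the following sketch shows one plausible way in which per-block analysis values might be broadcast to haptic units on such a wireless network. The UDP transport, port number, and JSON message format are assumptions made for the sake of a self-contained example; the actual system ran on Bela microcomputers and its protocol is not documented here.

```python
# A hypothetical sketch of distributing motor intensities over a local
# wireless network; transport and message format are assumptions.
import json
import socket

PORT = 9000  # assumed control port for the haptic units

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

def send_haptic_frame(intensities: list[float]) -> None:
    """Broadcast one frame of motor intensities (0.0-1.0) to all units."""
    payload = json.dumps({"motors": intensities}).encode()
    sock.sendto(payload, ("255.255.255.255", PORT))

send_haptic_frame([0.2, 0.8, 0.0])  # e.g. drive the middle motor hardest
```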

 

John Drever makes the case for what he terms “auraldiversity” in a variety of contexts, from acoustic regulations (2017) to sonic arts practice and discourse (2019). He calls for a more “auralinclusive approach” (Drever 2017, 6) towards, for example, people with hearing loss, CI and hearing aid wearers, along with people with dementia or post-traumatic stress disorder. This is couched in the desire for the wellbeing of a “heterogeneous and aging population” (Drever 2017, 6). He urges acousticians, engineers, and designers to move “beyond the ‘gold standard’ of equal loudness curves” (Drever 2017, 6) towards more inclusive design considerations, all of which is supplemented via his own experiences of “right-sided high frequency sensorineural hearing loss” (Drever 2019, 91)[10]. While all of this seems unarguably important, my experiences in the workshops indicated that much is possible in terms of technologically-mediated listening that is not necessarily grounded only in health and wellbeing, as seems to be Drever’s framing. CPR as a methodology that can foster and facilitate interdisciplinary research has the potential to “generate knowledge practices and forms, and may have effects, that cannot be understood merely as instrumental, or as responses to broader political demands or social and economic transformations” (Barry, Born & Weszkalnys 2008, 24). For example, during a solemn moment at one of the workshops, a woman sang an a cappella version of a worker’s song. It could be felt by the other participants either as intense bass frequencies through the tactile transducers or as a more intimate, fragile buzzing through the ERM motors on the hands. The affective moving and being moved (both emotionally and physically) by her singing was powerful and, for want of a better term, palpable.

 

Participants in the workshops comprised techno-fluent electronic musicians, musicians who self-identified as hard of hearing, students and academics, people working in related areas of research involving vibration and the body, festival or conference attendees, and people who attended simply out of curiosity. After the initial introduction and contextualization of the work had been given and participants had familiarized themselves with the technology, they began open-ended experimentation (guided where needed) as a group (see Figure 2). My role as creative practitioner and facilitator, rather than as designer, allowed for the emergence of “fertile ground for ‘future-producing’ moments that can be transformative, evoking possibilities of change immanent to any given territory, physical, mental and/or spiritual” (Higgins 2008, 333).

 

Figure 2. Participants experiencing the music of the group members through tactile transducers on beanbags and cushions, and through ERM motors positioned on the head, Moogfest, Durham, NC, USA.

 

 

Figure 3. Participants listening to music together via tactile transducers and materials at LOOP, Berlin, Germany.

 

 

Figure 4. Participants attaching haptic wearables to clothing and arms at the Yorkshire Sound Women Network in partnership with Electric Spring festival, Huddersfield, UK.

 

Some of the most common themes that emerged from these sessions included: facilitating communal listening via the shared experience of sonic vibration through materials (Figure 3); embedding technology within or on clothing (Figures 4 & 5); listening in non-seated positions, such as actively reaching for or touching materials, or lying on the floor or on cushions to use the full body to listen; musicians exploring augmented ways of listening to themselves sing or play an instrument via the haptic channels (in more than one case, a singer placed tactile transducers at their feet, mediating the experience of listening to their own voice); and using direct communal touch to share the experience of listening, often through the hands (see Figure 6). What was evident within these various assembled spaces was the sense of exploration, playfulness, improvisation, and inventiveness that emerged. These types of experiences produce ways of being and ways of knowing that are rarely given the opportunity to emerge within, for example, clinical trials of medical devices, or quantitative evaluations of technologies within musical HCI design. Furthermore, agency is distributed in terms of who gets to take on the role of investigator.

 

Figure 5. Wearing a haptically-augmented listening cap prototyped by a participant at Moogfest, Durham, NC, USA.

 

While it would be purely speculative to comment on how the ideas that were imagined, proposed, and borne out might bear on the participants’ futures, the technological mediation of listening that we explored together enabled participants to be inventive in terms of how they listened to, listened through, and listened together.


Figure 6. Listening through shared touch at the Yorkshire Sound Women Network in partnership with Electric Spring festival, Huddersfield, UK. Image Credit: Eddie Dobson.

 

Technological Mediation for Habitual and Bespoke Listening

 

While exploring the musical affordances of listening via the audio-haptic system through these workshop-based scenarios, I was concurrently using the technology in collaboration with Luo’s Auditory Implant Lab on a series of quantitative perceptual tests, as well as related qualitative interviews and prototyping sessions. Luo’s expertise lies in speech and pitch perception for CI wearers and in auditory psychophysics related to CI technology. By working with researchers in adjacent yet unfamiliar disciplines, my hope was not only to learn from them, but to understand more about my own processes and techniques. Furthermore, entering into this interdisciplinary collaboration was an opportunity for me to observe the types of methodologies and protocols undertaken in a scientific approach to music perception within audiology, and to discern where my own knowledge and experience (as a practitioner and researcher working with sound) could perturb and problematize such methods.

 

From the medical perspective, as an assistive mediating technology, CIs have been used with high levels of success, particularly during the last three decades, to restore auditory[11] speech perception for severely to profoundly deaf people (Wilson & Dorman 2008). Traditional hearing aids simply amplify sounds, whereas CIs are complex devices that directly stimulate the auditory nerve using an internally implanted electrode array. Due to the limitations of the technical functionality of CIs, their success has not been matched in the case of music perception (Limb & Roy 2014). In the terms used within this clinical research, led by the speech and hearing sciences, “the coarse temporal features of music (e.g., rhythm, tempo, and meter) are well preserved [but] pitch perception that requires spectro-temporal fine structure cues is much worse” (Luo & Hayes 2019, 2). One commonly employed remedy for this is termed “bimodal listening” (Gifford & Dorman 2019, 501), an approach where the CI wearer utilizes a traditional hearing aid in the non-implanted ear if there is residual low-frequency acoustic hearing on that side. This additional electro-acoustic stimulation (EAS) can help in providing pitch cues (in some cases up to 1000Hz, more commonly peaking at around 500Hz), which may include the fundamental frequency (F0) and other harmonics within this range (Luo & Hayes 2019). Interestingly, the upper limit of tactile perception via mechanoreceptors on the skin also falls within this range, and prior research has indicated that vibrotactile feedback, or electro-tactile stimulation (ETS), may also provide a way to improve pitch-related perception for CI wearers who do not have such residual acoustic hearing in the low ranges (Huang, Sheffield, Lin & Zeng 2017).

 

Building on this work, various clinical experiments and tests were designed and carried out based on the affordances of the audio-haptic wearable, towards the goal of improving perception of pitch-related content (specifically music) through vibrotactile feedback for CI wearers[12]. The studies which have been completed to date measured cutaneous vibration discrimination as well as melodic contour identification involving different levels of musical interval differentiation (see Luo & Hayes 2019 for full details). When musical intervals contained notes that fell outside the limits of tactile perception (> 500Hz), these were mapped back into a more useful haptic output range via octave transposition. The study itself was small and the results indicated that the benefits of haptic feedback would vary across CI wearers, and that additional training would likely be required. Nevertheless, the findings indicated that “[for] CI users without residual low-frequency acoustic hearing, vibrotactile stimulation may be a viable option to improve pitch contour perception” (Luo & Hayes 2019, 11).
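
A minimal sketch of what such a remapping might look like is given below. The 500 Hz ceiling follows the figure cited above; the function itself is an illustrative reconstruction, not the published procedure of Luo & Hayes (2019).

```python
# Fold fundamental frequencies above the limit of tactile perception down
# by octaves into the haptic output range (limit value per the discussion
# above; the implementation itself is an illustrative assumption).
TACTILE_LIMIT_HZ = 500.0

def fold_into_tactile_range(f0_hz: float) -> float:
    """Transpose a frequency down by octaves until it is at or below the limit."""
    while f0_hz > TACTILE_LIMIT_HZ:
        f0_hz /= 2.0  # one octave down preserves pitch class
    return f0_hz

assert fold_into_tactile_range(880.0) == 440.0  # A5 folds down to A4
```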

 

The variety of methodologies employed within CPR often requires that the researcher take on a multitude of roles, roles that are often assigned externally, particularly when technical skills are valued. For example, I have frequently been asked to perform audio engineering tasks at concerts where I am playing, simply because I have a practice that involves technology. Echoing work in which I had been employed as a haptic ‘technician’ in 2012 on a project related to dance, aesthetics, and blindness (see Timmons & Ravenscroft 2019), my role in the quantitative CI studies was similarly predominantly technical: it involved creating software models and building and maintaining the hardware system. Andrew Barry and co-authors define three modes of interdisciplinary research, in which this type of disciplinary division of labor is termed “service-subordination” (Barry, Born & Weszkalnys 2008, 28). Interestingly, they note that it is often scientists who provide this service for artists, supplying technical expertise, equipment, and facilities. Another mode, seen as being more additive in terms of the disciplinary work of various individuals, is described as “integrative-synthesis” (Barry, Born & Weszkalnys 2008, 28). While incentivizing infrastructures such as funding narratives often usher researchers towards what appears to be the additive or integrative mode of interdisciplinarity, in practice, barriers such as rigid methodologies, disciplinary expectations, or career trajectories can often inhibit this from transpiring; or worse, it becomes “performative” (Barry, Born & Weszkalnys 2008, 28). The third mode, “agonistic-antagonistic” interdisciplinarity, is “conceived neither as a synthesis nor in terms of a disciplinary division of labour, but as driven by an agonistic or antagonistic relation to existing forms of disciplinary knowledge and practice” (Barry, Born & Weszkalnys 2008, 29).

 

Despite belaboring the issues surrounding interdisciplinary research, Barry and co-authors suggest that working in this third mode can provide an opportunity to yield the most inventive work (Barry, Born & Weszkalnys 2008). In this vein, I began to investigate how the collaborative research milieu could perform heterogeneously as a site for exploring technologically-mediated multimodal listening opportunities adjacent to the goals of the clinical setting. This was done in order to more usefully leverage my own expertise within the creative arts and to uncover what might be getting lost or obscured in such quantitative methodologies. But while, then, my objectives were not necessarily serving the clinical paradigm, neither was technological development a requisite.

 

The idea of user-centered design developed out of Norman’s work (1988), which recognized the importance of considering the desires and needs of technology users. Broadly put, user-centered design processes are ones in which “end users influence how a design takes shape” (Abras, Maloney-Krichmar & Preece 2004, 763). While my research often deals with interactions between humans and computers, the language of HCI has always felt uncomfortable to me as a musician: I play, rather than use, the piano, and the same goes for my hybrid instruments. Furthermore, my audiences and workshop attendees are people, and not ‘users’. Despite the good intentions of user-centered design, Susan Gasson (2003) discusses the issues with this approach in HCI, suggesting that even some of the more recent developments such as participatory design have often failed where a “goal-driven technology focus” (Gasson 2003, 39) has persisted. Gasson’s specific formulation of what she calls human-centered design focuses on the ‘hows’ and ‘whys’ of technology rather than on the technology itself. This, she suggests, foregrounds the actual process of inquiry rather than “technical problem-closure” (Gasson 2003, 30).

 

Within the ethos of this type of exploratory praxis, and wanting to move beyond the confining technician role, I worked adjacently with the Auditory Implant Lab to examine whether and how various habits and potentially unexpressed (or, rather, unacknowledged within a research setting) needs and desires for musical listening of CI wearers might manifest through audio-haptic technologies, using rapid prototyping sessions, as well as qualitative interviews and surveys. Participants were recruited via flyers distributed to several organizations including the Hearing Loss Association Arizona (HLAA), the Bionic Ear Association (BEA), the ASU Disability Resource Center (DRC), and the Arizona Commission for the Deaf and the Hard of Hearing (ACDHH). Additionally, the project and a demonstration of the haptic technology were presented at HLAA (Sun Lakes Chapter) to encourage recruitment. Thirteen individuals were interviewed in the first phase, and six[13] participated in the second prototyping/survey phase. In the second phase, participants listened to recorded music via a selection of audio-haptic scenarios informed by the responses to the initial interviews. Ages ranged from 24 to 76, with a mean age of 59. None of the participants identified as professional musicians, but some had played musical instruments for many years.

 

Ethical approval was obtained via the Institutional Review Board of ASU. All participants gave informed consent and were compensated for their participation. Interviews were recorded, transcribed, read for familiarization, and analyzed using thematic analysis (Braun & Clarke 2006). Discussions of listening habits, histories of playing musical instruments, and perception of environmental and ambient sound were coded in order to evaluate the conversations. From the perspective of the original collaboration with the lab, the aim of this work was to evaluate for efficacy, informing how the design of the audio-haptic wearable could be expanded within the context of those who listen to music through the mediation of CIs and hearing aids[14]. In a more critical manner, the themes analyzed from the interviews and surveys presented here provide an array of phenomenological insight into how musical listening takes place via these devices, beyond the clinically agreed-upon and widely recognized technological successes and limitations (McDermott 2014). The results also offer provocations for creatively imagining audio-haptic listening scenarios. As such, the participants are quoted directly and liberally.

 

Phase 1: Interviews

 

Speaking of their experiences of musical listening in general, one of the first things many of the participants commented on was the role of rhythm in facilitating the structure of how they expected music to transpire, and its connection to other musical properties such as melody, for example: “The rhythm helps me stay in tune, it sets up an expectation in my brain as to what the sound is going to sound like; like I can follow a pattern and so I think it provides me with a structure” (Participant 2). Similarly, rhythm was reported to help with attention and lyric recognition: “Beats, I love those catchy beats, you know the ones that like, especially because its easier to, for some reason, it’s easier for me to pick up on the words when they have a strong beat, so yeah I prefer those sounds” (Participant 3); and in connection to movement: “I like to play the drums because I can hear the beat, but I haven’t really learned to play the drums, just a kind of a beat, movement” (Participant 5).

 

Interviewees also discussed the relationships between instruments, timbre, their unique hearing profile, and their musical preferences based on this: “Some of the horns I can hear okay, drums of course because you feel that, so I think I prefer more of like that, the lower sounds, like drums or horns, lower horns” (Participant 11). In some cases, responses described the complexity of the links between their own response to frequency and the limitations of the mediating technology in musical listening. For example, one participant noted: “I think I know that I don’t like real high pitches, I don’t like sopranos, and I think it’s because I just don’t have hearing in that area and to me it just doesn’t sound good, it doesn’t sound right” (Participant 2); but later remarked: “I love Barbra Streisand… I love her because she can also… she doesn’t stay at the highs all the time” (Participant 2). This indicates that even if sound in a particular frequency range can be heard, it is not necessarily perceived as useful, pleasant, or meaningful.

 

Familiarity with the music being listened to, the listening experience itself, and the role of memory was also crucial: “if I go to concerts, I will usually take out my hearing aids and put in ear plugs because it’s more like an experience. I know the music… it’s more like the experience” (Participant 2); and many participants simply declared that “new music is harder to follow” (Participant 1). In terms of memory, musical enjoyment came not simply from being able to hear all the acoustic information present during, for example, a concert. Rather, strong familiarity with musical content could fill in enough of the gaps and evoke a sense of belonging: “We know that music, so it’s comfortable. You’re not really hearing every bit of it, but enough of it. You know where it’s at, you know the words, so you feel part of it” (Participant 7). A majority of interviewees reported going to concerts where, of course, performances are experienced multimodallyaurally, visually, tangibly, and actively.

 

While much of the literature on CI research discusses developments in the areas of speech and non-speech sounds, where the latter has typically focused on music (McDermott 2014), the responses raised the issue of what this binary might obscure: “if you consider the birds as music… I really enjoyed hearing the birds when I got my hearing aids. I had lost that” (Participant 8). Many participants discussed in detail their pleasure in listening to environmental sounds, particularly birdsong, which in most cases had been restored as a consequence of wearing CIs: “I love the birds… I always see them but I never could really hear the music and now I enjoy it so much more” (Participant 7); “I noticed the birds have different, tunes… Sometimes they sing, sometimes they fight. I can tell the difference now. I could not before my implant… I appreciate them because some of them I didn’t hear them as well before my implant… I think they’re great” (Participant 1).

 

While many participants commented on the positive effects that they had experienced in terms of their ability to listen to music after receiving their assistive technologies (“now since I’ve had the cochlear implant, I enjoy music more. I don’t listen to it a lot, but before I would never turn my radio on in the car… I do things like that that I used to not”, Participant 13), others discussed various prevailing issues around musical listening: “She’s put a music setting in these things. I don’t find it clarifies at all, it just makes it louder, that’s all. But it doesn’t really make it clearer” (Participant 11). Many of the interviewees had unilateral implants, meaning that their hearing profile differed significantly between the left and right sides, leading to spatial imbalances: “I miss stereo cause I remember back listening to the Beatles and you’d hear the guitar over here and the saxophone or something, the voices over here. Now it’s all going to one ear” (Participant 12).

 

These responses offer insight into the varied and specific listening behaviors, habits, preferences, and sensitivities of a small set of individuals. While speech and hearing research typically involves the measurement of a specific response (during listening to either speech or music) with the goal of optimizing technology for efficacy, the interviewees’ responses demonstrated that this was far from being universally achieved. Furthermore, where they did report positive experiences of musical listening, these tended to involve additional factors that are not typically considered or easily accounted for within clinical perceptual studies: the relationship between music and memory, for example, as well as the situated sociocultural aspects of music. The responses also highlighted the range of complex technological mediations involved in the participants’ habitual listening situations. In fact, there are numerous (and potentially unacknowledged) stages of mediation involved while, for example, listening to a radio broadcast with CIs, listening to a live band through a public address system with hearing aids, or engaging in bimodal listening for a concert on the television.

 

Phase 2: Prototyping Audio-Haptic Listening Scenarios


Meri Kytö offers a long-term ethnographic study of an adult adapting to listening through new bilateral implants, in which she argues that “regularisation changes the object, sometimes radically, and should thus be confronted as a technique of disciplinary power” (Kytö 2022, 10). In a manner that echoes Kassabian, Kytö emphasizes that any technological intervention or mediation is never neutral: choices of algorithms, code, hardware, and so on involve assumptions and can obscure questions of agency (Kytö 2022). Moving away from the preoccupation with developing universalizing technologies, we can also facilitate scenarios where listeners are granted the flexibility and agency to explore a variety of listening situations, based on their individual needs, desires, and sensitivities, within a variety of real-world scenarios. While I have explored this extensively within artistic milieux (see, for example, Hayes & Loaiza 2022), my hope for the project at large discussed in this paper is that it exposes researchers, staff, and students to unfamiliar pathways and approaches (ones that may appear risky from an outcome-based academic perspective) that might permeate future work. The goal is to offer enough motivation to inspire a push beyond the limiting factors of disciplinary and institutional conventions and requirements. In my own practice (developing bespoke and custom self-built instruments, for example), this has involved demonstrating, specifically through live performance with electronics, how highly expressive, compelling, and versatile musical activity can emerge even when resisting dominant HCI tropes, such as those advocating for more sensing technology and computational complexity (Hayes & Marquez-Borbon 2021).

 

Starting from this techno-critical position, and based on the experiences addressed in the interviews, the second phase of this study involved a subset of participants listening to a variety of music[15] while using a prototype haptic wearable (see Figure 4)[16]. The software models, which were explained in plain terms to the participants, comprised three scenarios. Firstly, amplitude was mapped to intensity of vibration, one of the most common strategies within audio-haptic design and typically intuitively understood. The prototype system used three separate motors, and participants could select a single motor, or up to three motors, for haptic feedback. In this way spatial effects (rather than fixed-point vibration) could be explored, as informed by the earlier comments from participants regarding non-symmetrical hearing. Moreover, the motors were attached to a wrist-worn cuff, but could be easily reconfigured elsewhere on the body using Velcro attachments if desired by participants (this was encouraged). Secondly, drawing on the interviewees’ remarks regarding their highly specific frequency profiles, a spectral filter was used to distribute frequencies across the three different ERMs; frequency bands were grouped together crudely using a graphical interface (and could again be adjusted throughout the experiment). By placing the motors in close proximity, participants were encouraged to explore the haptic illusion of vibrotactile apparent motion (VAM), where a sensation that actually crosses between motors can be felt (see Hayes & Rajko 2017 for more details). In the final profile, a measure of perceived brightness derived from the spectral centroid of the sound was used, in order to investigate whether an aspect of music that is not typically recognized by non-specialists, nor commonly discussed within clinical studies, could be useful within the mapping design.
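
To give a sense of how the second and third scenarios might be computed, the sketch below splits block-wise spectral energy into three bands (one per ERM) and derives a spectral centroid as the brightness measure. The sample rate, FFT-based analysis, and band edges are illustrative assumptions; in the prototype the band groupings were set and adjusted by participants through the graphical interface.

```python
# Illustrative spectral-band and spectral-centroid analyses for the second
# and third mapping scenarios (sample rate and band edges are assumptions).
import numpy as np

SR = 44100
BANDS = [(0, 200), (200, 1000), (1000, 8000)]  # Hz; one band per ERM motor

def band_energies(block: np.ndarray) -> list[float]:
    """Energy in each band, used to drive the three motors separately."""
    spectrum = np.abs(np.fft.rfft(block))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / SR)
    return [float(np.sum(spectrum[(freqs >= lo) & (freqs < hi)]))
            for lo, hi in BANDS]

def spectral_centroid(block: np.ndarray) -> float:
    """A simple perceived-brightness measure: the spectrum's center of mass."""
    spectrum = np.abs(np.fft.rfft(block))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / SR)
    total = float(np.sum(spectrum))
    return float(np.sum(freqs * spectrum)) / total if total > 0 else 0.0

# Example: a 440 Hz sine concentrates energy in the middle band and yields
# a centroid near 440 Hz.
t = np.arange(1024) / SR
print(band_energies(np.sin(2 * np.pi * 440.0 * t)))
print(spectral_centroid(np.sin(2 * np.pi * 440.0 * t)))
```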

 

Participants were asked to complete a short survey about their experiences. This asked them to describe how they discerned between the different models, along with their experiences of each. Special focus was also given to suggestions on how the device might be developed in the future or used in a real-world scenario, based on the short experience of listening to music with it. While all participants noted that they had experienced something new, responses as to whether musical listening had become more meaningful or more enjoyable were varied, although they tended towards the positive in both cases. The ways in which musical listening was shaped and mediated (both for good and for ill) were found to be highly individual and specific.

 

Having had the experience of different frequency regions sent to separate ERMs[17], several participants mentioned a desire for the opportunity to use more sophisticated source separation: “The ability to ‘feel’ the guitar lead parts was exceptional… To separate the parts of the music, guitar, voice, etc. and highlight those parts through the vibration, would be exceptional” (Participant 14); others requested “to have more pitch and frequency in the music to match” (Participant 19). Some participants commented on requiring more time to experiment, which echoes prior research into haptics where sensory training is typically required; while others noted that the experiments gave them insight into how they already employed vibratory awareness within their listening: “I found it interesting that, with my hearing loss, I was already utilizing and employing a form of haptics in my leisure enjoyment and pursuit of music to adjust to and improve my musical experience” (Participant 16).

 

The quality of the haptic sensation was also discussed. Participants were given background information about the characteristics and limitations of the ERMs (such as their ramp-on and ramp-off times, which can lead to latency issues), as well as guidance on how factors such as their placement on the body and the density and type of sensing nerves would affect what was felt (see Hayes & Rajko 2017 for further discussion). ERMs were used for their low cost and ease of rapid prototyping, but as one participant noted: “the feeling is a bit electric shock (very low), rather than a more comfortable feeling” (Participant 15); another suggested that “a smoother vibration would feel more natural and less distracting” (Participant 17).

 

Numerous other vibrotactile technologies are available, which vary in terms of resolution, sensation, and cost. ERMs have historically been used for haptic alerts, and so the affective and aesthetic responses to different technologies, while highly subjective (Hayes & Rajko 2017), can also be considered. Indeed, one respondent indicated that they experienced “almost like feeling emotion, but through vibration: a sensation, doubled, so music is more meaningful” (Participant 18). If any latency was detected by participants, particularly while listening to rhythmic material, it was found to be extremely distracting. Finally, resonating with what had originally drawn me towards haptic technology over a decade ago, many of the participants commented on experiencing more embodied relationships with their musical listening: “I have not felt music in my body like this before; it is much more defined than just sitting on top of a huge speaker at a bar” (Participant 18); “the haptic vibration brought awareness to a few of the lower, less intense bass tones that my hearing did not pick up” (Participant 16); “It felt more immersive, involving more senses, in a way that you only get otherwise in a live-music setting” (Participant 17).

 

Conclusion

 

This paper has discussed technologically-mediated listening from three applied interdisciplinary perspectives: within creative music practice, within a clinical audiology study, and within a related arts-informed experiment. The overarching project described here is transversal in nature, intersecting with a variety of communities who use technology in relation to musical listening. It furthers ecological approaches to perception (Gibson 1979) and listening (Clarke 2005) by evidencing the ways in which listening is multimodal, embodied, and not confined to cochlear hearing. This is done specifically by engaging with sound through touch and haptic technologies, developed first through my own phenomenological accounts of an extended musical practice, and finally via the qualitative responses elicited when facilitating listening scenarios with people who use assistive hearing technologies. This demonstrates that only by articulating, sharing, acknowledging, and attending to highly specific and individual experiences of listening, which often develop over lifetimes, might we start to reveal the ways in which both clinical and HCI understandings of techno-musical listening can circumscribe the variety of ways in which listening itself can occur. I do not bring these themes together in order to conflate my own experiences of disembodiment when starting to explore digital sound with the experience of hearing loss. Rather, it is specifically my experience as a practitioner-researcher, one who has spent years exploring the ways in which mediation can be a process of reconfiguration between my body and the technologies that I use to listen, that has enabled me to facilitate the various listening scenarios discussed.

 

An interdisciplinary praxis grounded in CPR can illuminate the strengths and weaknesses of different methodologies and, moreover, ask questions that can rupture disciplinary limitations. For example, from the ecological perspective, the sensory modalities are never considered in isolation or from a static perspective (Gibson 1979), but rather work together, in concert with the motor apparatus, to produce the holistic action-perception system. Quantitative perceptual studies, on the other hand, commonly focus on isolated sensory modalities, and typically involve solitary, stationary subjects in artificial clinical environments. Yet musical practitioners working with sound, technology, and improvisation have a wealth of highly attuned embodied knowledge related to the role of multi-sensory feedback within performance, the spatial dimensions of sound, and the role of the environment in listening, in addition to the now decades-old body of scholarly knowledge that addresses sound beyond objective, measurable parameters for identifying speech and music. Furthermore, while I have emphasized the importance of bringing forth individual experiences, needs, and desires throughout, it is largely within collaborative situations that these phenomena have emerged: via collective listening through materials in a workshop, or while exploring the haptic technology together in a very hands-on fashion with participants in the latter part of this project.

 

If, as Kassabian proposes, listening to the sounds and music that permeate our lives and living spaces “modulates our attentional capacities,… tunes our affective relationships to categories of identity, [and] conditions our participation in fields of subjectivity” (Kassabian 2013, 18), it seems crucial, as per the ecological perspective, to engage much more fully with the capacities and possibilities that encompass the physiological, the material, and also the sociocultural within mediated listening. This was evidenced, for example, in the case where the participant reported that while they were able to hear specific sounds, perhaps satisfying laboratory conditions for the frequency range perceived, these were neither enjoyable nor pleasurable to listen to. Working with machine listening algorithms is foundational to my own practice, as is reflecting on how the choice of such algorithms will affect sound analysis and signal processing. Building on this, future iterations of the haptic system’s software would take seriously the request from some of the participants for more sophisticated audio source separation. If this could be done with low enough latency to allow for live musical performance scenarios, it could also facilitate exploration of audio-haptic listening that is uniquely configured around specific and complex perceptions of not only rhythm, but also timbre and spectral content, as discussed in the interviews.
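
As a hint of what such separation might look like in software, the sketch below uses harmonic/percussive separation, a readily available technique in the librosa library, as a crude stand-in for the part-by-part separation participants requested. The file name and envelope parameters are placeholders, and a performance-ready system would need a streaming, low-latency separator rather than this offline one.

```python
# Offline sketch: harmonic/percussive separation as a crude stand-in for
# full source separation; 'track.wav' is a placeholder file name.
import numpy as np
import librosa

y, sr = librosa.load("track.wav", sr=None, mono=True)
harmonic, percussive = librosa.effects.hpss(y)

def envelope(x, sr, win_ms=20):
    """Short-window RMS envelope, usable as a motor intensity signal."""
    hop = int(sr * win_ms / 1000)
    n = len(x) // hop
    frames = x[: n * hop].reshape(n, hop)
    return np.sqrt((frames ** 2).mean(axis=1))

lead_channel = envelope(harmonic, sr)      # e.g. sustained guitar/voice to one ERM
rhythm_channel = envelope(percussive, sr)  # drums and transients to another
```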

 

In her work on deafness and its relationship to the emergence of media technologies, Mara Mills elucidates how “the history of deaf communication makes clear that sound is always already multimodal” (Mills 2015, 52). Yet, as in the case of the participant who learnt of their already haptically-engaged musical listening via the experiment itself, a multimodal technological mediation of listening can afford the shifting of attention towards unfamiliar perceptual processes. Furthermore, the variety of ways in which we can musically and sonically make sense of the world can potentially be enriched. As I reflect on how this project will continue to shape my own musical practice, I am reminded that musical activity is fundamentally relational, and that to limit how listening is mediated, whether through siloing, neglect, or lack of imagination, will diminish the possibilities for all.

 

(More) Acknowledgements


Thanks to Michele Michaels and BC Brown for their conversations and insight on this project. Marije Baalman, Tobias Feltus, and Slater Olsen worked on earlier versions of the haptic technology. The project was generously funded by an Interdisciplinary Grant from the Herberger Institute for Design and the Arts, Arizona State University. I am grateful to the peer reviewers Ted Gordon and Bethany Younge for their generous and insightful comments on the text.

 

 

References

 

Abras, Chadia, Diane Maloney-Krichmar, and Jenny Preece. 2004. “User-centered design.” In William Sims Bainbridge (ed.), Encyclopedia of Human-Computer Interaction, 445-456. Thousand Oaks: Sage Publications.

 

Barry, Andrew, Georgina Born, and Gisa Weszkalnys. 2008. “Logics of interdisciplinarity.” Economy and Society 37, no. 1 (February): 20-49. https://doi.org/10.1080/03085140701760841

 

Ben-Ari, Eyal. 1987. “On acknowledgements in ethnographies.” Journal of Anthropological Research 43, no. 1 (April): 63-84. www.jstor.org/stable/3630467.

 

Blesser, Barry, and Linda-Ruth Salter. 2009. Spaces speak, are you listening?: Experiencing aural architecture. Cambridge, MA: MIT Press.

 

Braun, Virginia, and Victoria Clarke. 2006. “Using thematic analysis in psychology.” Qualitative Research in Psychology 3, no. 2 (July): 77-101. https://doi.org/10.1191/1478088706qp063oa

 

Clarke, Eric. 2005. Ways of listening: An ecological approach to the perception of musical meaning. Oxford: Oxford University Press.

 

Drever, John Levack. 2019. “‘Primacy of the Ear’–But Whose Ear?: The case for auraldiversity in sonic arts practice and discourse.” Organised Sound 24, no. 1 (April): 85-95. https://doi.org/10.1017/S1355771819000086

 

Drever, John Levack. 2017. “The case for auraldiversity in acoustic regulations and practice: The hand dryer noise story.” In Proceedings of the 24th International Congress on Sound and Vibration (ICSV24), London, UK, 23-27 July. http://research.gold.ac.uk/id/eprint/20814

 

Gasson, Susan. 2003. “Human-centered vs. user-centered approaches to information system design.” Journal of Information Technology Theory and Application (JITTA) 5, no. 2 (July): 5. https://aisel.aisnet.org/jitta/vol5/iss2/5

 

Gaver, William W. 1991. “Technology affordances.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '91). Association for Computing Machinery, New York, NY, USA, 79-84. https://doi.org/10.1145/108844.108856

 

Gibson, James J. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.

 

Gifford, René H., and Michael F. Dorman. 2019. “Bimodal hearing or bilateral cochlear implants? Ask the patient.” Ear and Hearing 40, no. 3 (May): 501.

 

Gunther, Eric, and Sile O’Modhrain. 2003. “Cutaneous grooves: Composing for the sense of touch.” Journal of New Music Research 32, no. 4 (December): 369-381.

 

Hagood, Mack. 2017. “Disability and Biomediation.” In Disability media studies, 311-329. New York: New York University Press.

 

Hayes, Lauren. 2015. “Enacting Musical Worlds: Common Approaches to using NIMEs within both Performance and Person-Centred Arts Practices.” In Proceedings of the International Conference on New Interfaces for Musical Expression: 299-302.

 

Hayes, Lauren. 2013. “Haptic augmentation of the hybrid piano.” Contemporary Music Review 32, no. 5 (October): 499-509.

 

Hayes, Lauren. 2011. “Vibrotactile Feedback-Assisted Performance.” In Proceedings of the International Conference on New Interfaces for Musical Expression, 72-75.

 

Hayes, Lauren, and Juan M. Loaiza. 2022. “Exploring Attention Through Technologically-Mediated Musical Improvisation: An Enactive-Ecological Perspective.” Access and Mediation: Transdisciplinary Perspectives on Attention 11 (February): 279.

 

Hayes, Lauren, and Adnan Marquez-Borbon. 2020. “Addressing NIME's Prevailing Sociotechnical, Political, and Epistemological Exigencies.” Computer Music Journal 44, no. 2-3 (July): 24-38.

 

Hayes, Lauren, and Jessica Rajko. 2017. “Towards an aesthetics of touch.” In Proceedings of the 4th International Conference on Movement Computing, 1-8.

 

Higgins, Lee. 2008. “The creative music workshop: Event, facilitation, gift.” International Journal of Music Education 26, no. 4 (November): 326-338.

 

Huang, Juan, Benjamin Sheffield, Payton Lin, and Fan-Gang Zeng. 2017. “Electro-tactile stimulation enhances cochlear implant speech recognition in noise.” Scientific Reports 7, no. 1 (May): 1-5. https://doi.org/10.1038/s41598-017-02429-1

 

Kassabian, Anahid. 2013. Ubiquitous listening: Affect, attention, and distributed subjectivity. Berkeley: University of California Press.

 

Kytö, Meri. 2022. “Soundscapes of Code: Cochlear Implant as Soundscape Arranger.” In Aural Diversity, 73-81. London & New York: Routledge.

 

Limb, Charles J., and Alexis T. Roy. 2014. “Technological, biological, and acoustical constraints to music perception in cochlear implant users.” Hearing Research 308 (February): 13-26. https://doi.org/10.1016/j.heares.2013.04.009.

 

Loaiza, Juan M. 2016. “Musicking, embodiment and participatory enaction of music: outline and key points.” Connection Science 28, no. 4 (October): 410-422.

 

Luo, Xin, Qian-Jie Fu, Hung-Pin Wu, and Chuan-Jen Hsu. 2009. “Concurrent-vowel and tone recognition by Mandarin-speaking cochlear implant users.” Hearing Research 256, no. 1-2 (October): 75-84. https://doi.org/10.1016/j.heares.2009.07.001.

 

Luo, Xin, and Lauren Hayes. 2019. “Vibrotactile stimulation based on the fundamental frequency can improve melodic contour identification of normal-hearing listeners with a 4-channel cochlear implant simulation.” Frontiers in Neuroscience 13 (October): 1145.

 

McDermott, Hugh J. 2004. “Music perception with cochlear implants: a review.” Trends in Amplification 8, no. 2 (March): 49-82. https://doi.org/10.1177/108471380400800203.

 

Menin, Damiano, and Andrea Schiavio. 2012. “Rethinking musical affordances.” Avant 3, no. 2 (October): 202-215. https://doaj.org/toc/2082-6710.

 

Mills, Mara. 2015. “Deafness.” In David Novak and Matt Sakakeeny (eds.), Keywords in Sound, 45-54. Durham: Duke University Press.

 

Norman, Donald A. 1988. The psychology of everyday things. New York: Basic Books.

 

Parisi, David. 2018. Archaeologies of touch: Interfacing with haptics from electricity to computing. Minneapolis: University of Minnesota Press.

 

Paterson, Mark. 2007. The senses of touch: Haptics, affects and technologies. Oxford: Berg.

 

Rovan, Joseph, and Vincent Hayward. 2000. “Typology of tactile sounds and their synthesis in gesture-driven computer music performance.” In Trends in Gestural Control of Music, 297-320. Paris: IRCAM.

 

Small, Christopher. 1998. Musicking: The meanings of performing and listening. Hanover and London: Wesleyan University Press.

 

Straus, Joseph N. 2011. Extraordinary measures: Disability in music. New York: Oxford University Press.

 

Stuhl, Andy Kelleher. 2014. “Reactions to analog fetishism in sound-recording cultures.” The Velvet Light Trap 74 (September): 42-53. https://doi.org/10.7560/VLT7405.

 

Timmons, Wendy, and John Ravenscroft. 2019. “Using expressive movement and haptics to explore kinaesthetic empathy, aesthetic and physical literacy.” In The Routledge Handbook of Visual Impairment, 275-287. London: Routledge.

 

Thompson, Marie. 2017. Beyond unwanted sound: Noise, affect and aesthetic moralism. New York: Bloomsbury Publishing USA.

 

Wilson, Blake S., and Michael F. Dorman. 2008. “Cochlear implants: a remarkable past and a brilliant future.” Hearing Research 242, no. 1-2 (August): 3-21. https://doi.org/10.1016/j.heares.2008.06.005.



School of Arts, Media and Engineering, Arizona State University, Tempe, AZ, USA

College of Health Solutions, Arizona State University, Tempe, AZ, USA

[1] It is these combinations of sonic domains that I refer to in describing my instruments as ‘hybrid’.

[2] Parisi’s project traces the structures of power that have historically shaped how we presently understand the sense and technologies of touch. In particular, he points to various recent industry marketing campaigns that promise novel experiences whilst they simultaneously and surreptitiously silo and render passive aspects of our sensory system (see Parisi 2018 for further discussion).

[3] In addition to working with vibration via motors as will be discussed below, I have given workshops for haptic listening using the simple but direct technique of placing loudspeaker cones inside and on objects to both sound and resonate them.

[4] https://www.arduino.cc/

[5] I use ‘more embodied’ to describe a threshold of meaningfulness as a performer, of feeling more connected to the technology, and so on. Of course, the notion of something being more or less embodied is a discussion that goes beyond the scope of this paper. Nevertheless, as I have written elsewhere (Hayes 2015), musicians who have developed their musical capacities only via electronic or digital instruments will likely not experience the type of disconnect or loss that I am describing.

[6] This is incommensurate with Gibson’s original meaning, from a purist perspective (see the earlier discussion on this).

[7] The various projects described in this paragraph are summarized in Hayes & Rajko 2017.

[8] https://bela.io/

[9] Workshops were conducted at Ableton’s LOOP Festival (Berlin, Germany), Moogfest (Durham, NC, USA), the International Computer Music Conference (Daegu, Korea), the Yorkshire Sound Women’s Network in conjunction with Electric Spring festival (Huddersfield, UK), the Tangible, Embedded and Embodied Interaction conference (Tempe, AZ, USA), and the Alliance for Women in Media Arts and Technology conference (Santa Barbara, CA, USA).

[10] It is important to note here that while Drever invokes the notion of vulnerability, applied as a categorization of various groups with which he seems to identify, this largely stems from a 1999 WHO document on community noise (see Drever 2017). Current online information related to deafness and hearing loss from the WHO does not use this terminology at all (see https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss), perhaps in recognition that many d/Deaf and hard of hearing people do not consider themselves to be vulnerable at all.

[11] Speech perception is generally considered to be bimodal in the sense of being auditory and visual, given the importance of lip reading in the visual modality.

[12] Of course, as a sound artist, I am well aware of the limitations of considering music in terms of, for example, rhythm or melody. Nevertheless, it was important to enter into the collaborative relationship from the perspective of generosity, to first understand current trends and techniques within disciplinary fields that were not familiar to me.

[13] Of these six, half were pooled from the original interviews, and half were recruited subsequently.

[14] While the quantitative part of this project was specific to CI wearers, respondents to this qualitative study included people who were also considering getting fitted with CIs, as well as people who used traditional hearing aids and other non-surgically implanted hearing aids. Given that this work was more experimental and based on the experiences of those whose listening is mediated through assistive hearing technologies on a daily basis, we decided it was not necessary to exclude these respondents.

[15] Having worked with and workshopped audio-haptic scenarios for many years, I was cognizant that certain types of music would lend themselves to the scenarios presented. I brought a wide selection of music and also encouraged participants to contribute anything they wanted to listen to. Given that this was not a quantitative study, I was not concerned about biasing any measure of efficacy by offering guidance on music choices. For instance, with direct amplitude-to-intensity mapping, rhythmic examples or music with a wide dynamic range are more effective than, for example, heavily compressed textural material, for obvious reasons.
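
For illustration, a direct amplitude-to-intensity mapping of the kind mentioned here can be as simple as the following sketch (the gain value is an arbitrary assumption, not the project’s code); the crest-factor helper hints at why heavily compressed material translates poorly, since its envelope barely varies.

```python
# Minimal sketch of direct amplitude-to-intensity mapping; gain is assumed.
import numpy as np

def amp_to_intensity(frame, gain=4.0):
    """Map one audio buffer's RMS amplitude to a 0..1 motor intensity."""
    rms = np.sqrt(np.mean(frame ** 2))
    return float(np.clip(rms * gain, 0.0, 1.0))

def crest_factor(x):
    """Peak-to-RMS ratio: low values flag compressed, static material
    that will produce a near-constant (and thus uninformative) vibration."""
    rms = np.sqrt(np.mean(x ** 2))
    return float(np.max(np.abs(x)) / rms) if rms > 0 else 0.0
```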

[16] The hardware was the same as that used in a workshop scenario, as shown in this image. The system, although not yet miniaturized, is fully embeddable and is not tethered to a desktop or laptop computer.

[17] The mappings from these spectral energy bands to the motors were pre-configured based on the sonic material, but could be adjusted in real time by participants. In a future iteration, real-time analysis for source separation could be most effective here.
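
One way such adjustable mappings might be structured in software is as a simple gain table from analysis bands to motors that can be edited while audio is running. The shape below is an assumption for illustration, not the project’s implementation.

```python
# Hypothetical band-to-motor gain table, editable while audio runs.
import threading

class MappingTable:
    """Gains from each spectral band to each motor, safe to tweak live."""
    def __init__(self, n_bands, n_motors):
        self._lock = threading.Lock()
        # Identity-ish default: band i mostly drives motor i.
        self.gains = [[1.0 if b == m else 0.0 for m in range(n_motors)]
                      for b in range(n_bands)]

    def set_gain(self, band, motor, value):
        """Called from a UI or controller thread during playback."""
        with self._lock:
            self.gains[band][motor] = value

    def motor_levels(self, band_energies):
        """Mix band energies down to per-motor intensities (clipped to 1)."""
        with self._lock:
            return [min(1.0, sum(e * self.gains[b][m]
                                 for b, e in enumerate(band_energies)))
                    for m in range(len(self.gains[0]))]
```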