CON MOLTO SENTIMENTO:

On the evolutionary neuropsychology of music.

 Marsha Familaro Enright

Originally published in Objectivity, Volume 2, Number 3

 

 

 

           

                 Music is an art without an apparent object - there are no scenes to look at, no

 

sculptured marbles to touch, no stories to follow - and yet it can cause some of the most

 

passionate and intense feelings possible.  How does this happen - how can sounds from

 

resonant bodies produce emotion (1) in man?

 

     Music is experienced as if it had the power to

     reach man's emotions directly...Music communicates

     emotions, which one grasps, but does not actually feel;

     what one feels is a suggestion, a kind of distant,

     dissociated, depersonalized emotion -- until and unless

     it unites with one's own sense of life.  But since the

     music's emotional content is not communicated

     conceptually or evoked existentially, one does feel it

     in some peculiar, subterranean way...How can sounds

     reach man's emotions directly, in a manner that seems to

     by-pass his intellect?  What does a certain combination

     of sounds do to man's consciousness to make him identify

     it as gay or sad?...The nature of musical perception has

     not been discovered because the key to the secret of

     music is physiological -- it lies in the nature of the

     process by which man perceives sounds --and the answer

     would require the joint effort of a physiologist, a

     psychologist and a philosopher (an esthetician). (Rand

     1971, 52-56)

 

 

 

     Further, what is the possible biological function and evolutionary origin of this

 

process by which sound elicits feeling?  As Ray Jackendoff says, "there is no obvious

 

ecological pressure for the species to have a musical faculty, as there is for vision and

 

language" (1987, 211). In other words, there is no immediate and obvious biological

 

function for music, as there is for vision or language. One researcher in the psychology of 

 

music aptly summarized  the problem as follows:

 

      Musical messages seem to convey no biologically

      relevant information, as do speech, animal utterances

      and environmental sounds - yet people from all cultures

      do react to musical messages.  What in human evolution

      could have led to this?  Is there, or has there been, a

      survival value for the human race in music? (Roederer 1984, 351).

 

     One might object to this characterization with the question "But you are comparing

 

apples and oranges when you compare music to vision and language.  Instead, you should

 

be comparing hearing to vision, and music to painting; you should be asking: What is the

 

biological function of art?"

 

     I  first wondered about the biological function and evolutionary origin of music over

 

twenty years ago, while I was reading Ayn Rand's article on esthetics,

 

"Art and Cognition." In that article, Rand gives an answer to

 

the question "What is the biological function of art?" in

 

general, but is only able to suggest an hypothesis about

 

music's biological function.  The problem lies, as I

 

mentioned at the start of this article, with the fact that

 

music does not, apparently, involve the perception of

 

entities.  In the following, I shall attempt a fuller answer and thereby shed some light on

 

the question of how sounds from resonant bodies produce emotions in man.  My attempt

 

is made possible by recent scientific research into the nature of the brain. 

 

     Unlike that of many twentieth-century theorists, Rand's esthetics is integrated with her

complex and persuasive philosophy of reason, reality, and

man's nature, and I think her esthetics deserves special

 

attention as part of my examination of the nature of music.

 

I will examine some of the historical theories of musical

 

meaning, then the more recent scientific investigations into

 

the nature of music, including some of the current theories

 

of music's biological function.  I shall review some theories

 

of the nature of emotion and the relation of music to

 

emotion.  I shall then offer my theory of the biological

 

origin of music.  Subsequently, I shall consider Rand's

 

hypothesis about the nature of music, in light of the

 

research evidence.  Lastly, I shall suggest some possible

 

research which might confirm or disconfirm my theory.

 

     I  have gathered evidence from several areas of the

 

research literature in search of an answer to the question of

 

music's evolutionary origin and biological function.  I

 

believe this evidence indicates that music evolved out of the

 

sonority and prosody (2) of vocal communication and that

 

musical elaboration of those elements has a special

 

biological communication function.  Prosody evidently

 

facilitates linguistic syntax  - that is, the sound of language helps us understand the

 

meaning of what’s said (Shapiro and Nagel 1995).

 

Furthermore, some aspects of one's pitch (3) perceptions in

 

music are evidently influenced by one's native language and

 

dialect (Deutsch 1992).

 

     More neuropsychological knowledge is needed to prove my

 

thesis - but I leave it to the reader to turn over the evidence

 

I have assembled, along with his own knowledge of music, in

 

considering the question:  Why does man make music?

 

 

 

 

 

               A Brief History of Theories of

                        Music's Nature

 

 

 

     From the ancient world to the nineteenth century, men

 

theorized about music based on their experience of it, and

 

on only a little scientific knowledge about the physics of

music, which was first examined by the Pythagoreans.  Two key

 

ideas have been repeated down through the ages:

 

     1.  Music is a form of communication, a kind of

 

         language; in particular, the language of feeling.

 

     2.  Music can form or inform one's feeling or

 

         disposition.

 

     The Ancient Greek "idea of music as essentially one with

 

the spoken word has reappeared in diverse forms throughout

 

the history of music" (Grout 1973, 7).  The Greeks "were

 

familiar with the idea that music can alter the disposition

 

of those who hear it.  They acknowledge its power to soothe,

 

to console, to distract, to cheer, to excite, to inflame, to

 

madden" (West 1992, 31).  Aristotle believed that "music has

 

a power of forming the character, and should therefore be

 

introduced into the education of the young" (Politics 1340b,

 

10-15).  In one way or another, music touched everyone in

 

Greek civilization (West 1992).

 

     The Greeks seemed to implicitly acknowledge music's

 

connection to language in their refusal to create or accept

 

purely instrumental music.  Europeans of the early Middle Ages

 

did likewise, but eventually divorced music from voice, so

 

that by Hegel's time, instrumental, wordless music was

 

considered a superior form (Bowie 1990, 183).

 

     A connection of music to language was mentioned

 

frequently in late nineteenth century examinations of music's

 

meaning.  Many thinkers, including Schopenhauer, Hegel, and

Tolstoy, subscribed to the idea that music is "another

 

language," the language of feeling.

 

 

     Hegel relates music to "primitive" expressions, such as

     bird-song or wordless cries.  Schleiermacher suggests

     the ambiguous status of music in relation to natural

     sound and to speech: "For neither the expression of a

     momentary sensation by a...speechless natural sound, nor

     speaking which approaches song are music, but only the

     transition to it" (Bowie 1990, 183).

 

     Langer (1957) points out that music fails to qualify as

 

a language because it does not have fixed denotation.

 

And Nietzsche, in an 1871 fragment, took issue with the view

 

that music represents feeling:

 

     What we call feelings are...already penetrated and

     saturated with conscious and unconscious representations

     and thus not directly the object of music, let alone

     able to produce music out of themselves (1980, 364,

     quoted in Bowie 1990, 230-31).

 

     Feelings, Nietzsche claims, are actually only symbols of

     music, which has a prior ontological status.  This

     opposes the commonplace in some Romantic thinking that

     music is the language, in the sense of the

     "representation", the substitute, for

     feeling...Nietzsche's view makes some sense if one

     ponders the fact that music can lead to the genesis of

     feelings which one had never had before hearing the

     music.  (Bowie 1990, 231).

 

     The modern scientific investigation of music began with

 

Hermann von Helmholtz's study of the physics and

 

psychological effects of the tones and keys of music (1954

 

[1885]).  Helmholtz argues that music does not use all types

of sound, only tones, whose sensation is "due to a rapid periodic motion of the

sonorous body; the sensation of a noise to non-periodic

motions" (Helmholtz 1863, 9).  Most researchers do not

 

question what sounds make music, but write with the

 

assumption that they are referring to sounds caused by

 

periodic vibrations (Aiello, Molfese, Sloboda, Stiller,

 

Lange, Schopenhauer, Trehub, Zatorre, etc.).  "Tonal

 

stimulation is a constant factor of all musical stimulus"

 

(Meyer 1994, 13).  The neurophysiological musical research

 

often revolves around contrasting responses of subjects to

 

periodic (tonal) versus nonperiodic (noise) sounds.  Warren,

 

Obusek, and Farmer (1969) found the interesting fact that

 

subjects could not accurately perceive the temporal order of

 

four nonspeech, nonmusical sounds.
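
     To make the periodic/non-periodic distinction concrete, here is a
minimal sketch in Python (an illustration of the physics only, not code
from any study cited here; the 440 Hz pitch and the sample rate are
arbitrary choices): a sine tone is almost perfectly correlated with a
copy of itself shifted by one period, while white noise is not.

     import numpy as np

     SAMPLE_RATE = 44100                  # samples per second
     DURATION = 0.5                       # seconds of sound
     t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

     # A "musical" sound: periodic vibration at 440 Hz (concert A).
     tone = np.sin(2 * np.pi * 440 * t)

     # A "noise": non-periodic, random pressure fluctuations.
     noise = np.random.default_rng(0).uniform(-1.0, 1.0, size=t.shape)

     # Shifting the tone by (roughly) one period leaves it nearly
     # unchanged; the noise is uncorrelated with its shifted copy.
     shift = round(SAMPLE_RATE / 440)     # samples in one period
     def shifted_correlation(signal):
         return np.corrcoef(signal[:-shift], signal[shift:])[0, 1]

     print(f"tone:  {shifted_correlation(tone):.3f}")   # close to 1.0
     print(f"noise: {shifted_correlation(noise):.3f}")  # close to 0.0

The nervous system computes no such correlation, of course; the point is
only that periodicity is a physically definite property that separates
tone from noise.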

 

     John Sloboda (1985) has examined various contemporary

 

scientific theories of musical meaning, among them the idea

 

that music mimics environmental sounds.  The mimicry theory

is intriguing, but it has difficulty sufficiently

explaining the depth and range of meaning in music.  Indeed,

 

music can aptly imitate some natural sounds, as did Saint-

 

Saens, in his "Carnival of the Animals." But, even in music

 

considered to be as programmatic as Berlioz' "Symphonie

 

Fantastique," we cannot find environmental sounds of which

 

the music would be an imitation.  To this point, Helmholtz

 

noted that

 

    "In music one does not aim at representation of nature;

     rather, tones and tone sensations exist just for their

     own purpose and function independently of their

     relationship to any environmental object" (1863, 370).

 

     Other theorists suggest that music has its effects by

 

expressing tension and its resolution (Schenker 1935;

 

Bernstein 1976).  Tension and resolution are certainly a

 

large part of the musical experience, but they name only very

 

general qualities of it and do not seem to address the vast,

 

varied, and subtle ways music can make us feel.

 

      Manfred Clynes sees music as the embodiment of the forms of emotion, "emotionally

 

expressive dynamic forms which we have called essentic forms"

 

(1986, 169).  Clynes's (1974, 1986) theory of music seems to parallel, for sound,

 

what Ekman proposed for facial expression. Ekman (1977) found that there is a

 

systematic relation between emotion and facial expression, and suggested that

 

this is a result of inborn "affect programmes" (automatically

 

triggered sequences of emotion), an idea also accepted by

 

Tomkins (1962) and Izard (1971).  Clynes thinks the essentic forms are biologically

 

determined expressions of emotion, experienced the same way

 

across cultures, an idea that seems similar to "inborn affect

 

programmes".

 

     Essentic forms are specific spatio-temporal forms

     biologically programmed into the central nervous system

     for the expressive communication and generation of

     emotional qualities (1986, 169).

 

Clynes seems to be using the word “form” metaphorically.  It

 

usually refers to the three-dimensional, spatial aspects of

 

things.  He seems to be saying that the physiological nature,

 

intensity, and timing of music-evoked emotions have great

 

similarity among individuals.  Just as, typically, one's pulse rises, one's muscles tighten

 

and one’s breath seems to become more ragged when one is angry, so there are typical

 

bodily changes due to the feelings which music evokes. This typicality is illustrated

 

and represented by the shape of the graph produced by

 

subjects' fingers during experiments with Clynes' sentograph.

 

The graph's shape thereby represents the "form" of the

 

emotion.  He has interesting data showing that the same music

 

will evoke similar motor responses in people of vastly

 

different cultures.  His sentograph, which measures motor

 

response, attaches to the subject's finger and records, on a

 

graph, subtle movements of the digit upon exposure to music.

 

Clynes found remarkable similarity among an individual's

 

responses to a given composer and between the responses of

 

different individuals to the same composer's music, as

 

represented by the forms on the recording graphs.  De Vries'

 

research confirms Clynes' hypothesis that emotional responses

 

are similar among subjects and shows that responses to music

 

were "not affected by a subject's familiarity with or

 

evaluation of a piece" (De Vries 1991, 46).
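
     As a rough illustration of what such "similarity of forms" amounts
to quantitatively (a hypothetical sketch; neither the procedure nor the
data below come from Clynes or De Vries), each recorded finger response
can be treated as a time series and two traces compared with a simple
correlation:

     import numpy as np

     def form_similarity(trace_a, trace_b):
         # Pearson correlation between two equal-length response traces,
         # e.g., finger-pressure samples recorded while two subjects
         # listen to the same excerpt.  Values near 1.0 mean the traces
         # rise and fall together - similar "forms."
         a = (trace_a - trace_a.mean()) / trace_a.std()
         b = (trace_b - trace_b.mean()) / trace_b.std()
         return float(np.mean(a * b))

     # Toy data: two subjects producing roughly the same arc-shaped
     # response to the same passage, plus independent measurement noise.
     rng = np.random.default_rng(1)
     time = np.linspace(0.0, 1.0, 200)
     subject_1 = np.sin(np.pi * time) + 0.1 * rng.normal(size=time.shape)
     subject_2 = np.sin(np.pi * time) + 0.1 * rng.normal(size=time.shape)

     print(f"similarity: {form_similarity(subject_1, subject_2):.2f}")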

 

     In a view which seems consonant with Clynes',

 

Jackendoff points out that dance is closely related to

 

music, and that

 

     going beyond crude rhythmic correspondences, we have

     undeniable and detailed intuitions concerning whether

     the character of dance movements suit or fail to suit

     the music.  Such intuitions are patently not the result

     of deliberate training...This suggests that...a

     cognitive structure can be placed into close

     correspondence with musical structure...[which] might

     encode dance movements...[which can be] provisionally

      called body representation - essentially a body-specific

     encoding of the internal sense of the states of the

     muscles, limbs, and joints.  Such a structure, in

     addition to representing the position of the body, would

     represent the dynamic forces present within the body,

     such as whether a position is being held in a state of

     relaxation or in a state of balanced tension....There is

     every reason to believe that such a representation is

     independently necessary for everyday tasks.  ...It would

     likely be involved as well in correspondences between

      emotional and muscular states - for instance, one carries

     oneself differently in states of

     joy, anger, depression, elation, or fear.  (1987, 238-9)

 

Consonant with this view, Hevner (1936) found that

 

individuals show general agreement about the emotional

 

content of pieces of music and that there is broad agreement

 

among members of a culture about the musical mood of a piece,

 

even among children as young as three years of age (Kastner

 

and Crowder 1990).  And Stiller notes that

 

     a number of important musical universals have been

     identified: Melodies worldwide are made mostly of major

     seconds; all musics employ dynamic accents, and notes of

     varying lengths; and all display extensive use of

     variation and repetition...the universality of music

     suggests that there may be a biological basis for its

     existence. (1987, 13)

 

     Research confirms the everyday experience that music

 

causes emotional states which can seriously affect our

 

actions.  Konecni (1982) found that subjects who had been

 

insulted by confederates working for the experimenter were

 

quite aggressive about shocking those confederates.  But

 

subjects who had merely been exposed to loud, complex music

 

were almost as aggressive about shocking confederates as the

 

insulted subjects had been!  In another experiment, subjects

were able to shape, and thereby optimize, their moods by their

musical choices.  Depending on the way they felt

 

when they came to the experimental session (anxious or angry

 

or happy), and how they wanted to feel afterwards, they could

 

pick music that changed the way they felt entirely - once

 

again supporting the idea that the sounds of music have a

 

direct effect on emotions.

 

     In many respects, mood is a better concept than

 

emotion to describe the results of music.  Giomo says "This

 

affective meaning, labelled 'mood', is of an individual and

 

nameless nature, not truly describable using emotion labels"

 

(Giomo 1993, 143).  Sloboda points out that "the ability to

 

judge mood is logically and empirically separable from the

 

ability to feel emotion in response to music.  It is quite

 

possible to judge a piece of music to represent extreme

 

grief, yet be totally unmoved by it" (1991, 111).  De Vries

 

(1991) suggested that there are two steps in reacting to

 

music:  one in which music directly activates "programmes"

 

which trigger emotions, and a second in which a person allows

 

themselves to experience the emotion or suppresses it,

 

depending on the congruity of the emotion with, among other

 

things, their personality and cultural background.

 

     In searching for an evolutionary origin of music,

Konecni, like Roederer (1984), posits that music helps to

 

synchronize the emotional states necessary for collective

 

action, such as the excitement needed for the hunt or battle.

 

Many primitive tribes seem to use music in this way (as do

 

college bands during football games).  And, indeed, a few

 

other species, such as birds and cetaceans, have music-

 

like behaviors (4), wherein they produce sounds of periodic

 

vibrations, behaviors which are intimately tied to intra-species

 

communication and collective action.  Stiller claims that

 

"Music helps to insure...cooperation -- indeed, must

 

play an important role in that regard, or there would have

 

been no need to evolve such a unique form of emotional

 

communication" (1987, 14).  He quotes Alan Lomax to the

 

effect that music organizes the mood, the feelings, the

 

general attitude of a group of people.  This seems to echo

 

the Ancient Greek view that music teaches men how to feel

 

like warriors or like lovers.

 

     Granted,

 

     ...there may be a certain cultural advantage in having

     some rudimentary form of music to help synchronize

     collective rhythmic activity or to serve some ceremonial

     aspect of social life, no particular reason is evident

     for the efflorescence of musical complexity that appears

      in so many cultures (Jackendoff 1987, 214).

 

     The socio-biological theory of musical meaning may

 

explain some of the psychological roots of music's evolutionary origins, but what

 

determines the kinds of sounds which can cause the experience

 

of emotion, i.e. the neurological roots?  And why do we have so many kinds of music

 

which we listen to for their own sake?

 

 

 

      The Neuropsychological Data on Language and Music

 

 

 

     Why should certain kinds of sounds be able to directly

 

evoke feeling?  By what means, what neuropsychological

 

processes?

 

     As have so many in the history of music theory, Roederer

 

(1984) wonders whether the answer lies in the unique human

 

capacity for language.  Human infants have high motivation to

 

acquire language, as evidenced by the assiduous way they

 

attend to, imitate, and practice language.  Language

 

activities are very pleasurable; if they were not, human

 

infants would not be motivated to perform language-related

 

activities as much as they do.  On this evidence, I venture

 

to say that humans have built-in developmental pleasure/pain

 

processes for producing and listening to language.  Language

 

acquisition is a cognitive activity that is highly motivated

 

and important to survival.  Are the emotions aroused for

 

language acquisition the evolutionary link between sound and

 

emotion?  That is, are humans moved by sound as a result of a biological need to be

 

interested in acquiring language?

 

     Experiments show that there are strong similarities in

     the way in which people perceive structure in music and

     in language...[but] overall, the syntax of music has

     much more latitude than that of language.  Thus, in

     the syntaxes of music and language, we must remember

     that music is far more flexible and ambiguous than

     language (Aiello 1994, 46-9).

 

     Furthermore, neuropsychological evidence seems to be at

 

odds with the proposal that language is the basis of music.

 

The areas of the brain which primarily process speech are,

 

apparently, mostly different from those which process music

 

(5).  Investigations into the brain areas which process

 

speech and music have turned up the interesting finding that,

 

in most infants, the left hemisphere responds more to speech

 

sounds and the right to musical tones, as indicated by a type

 

of EEG called auditory evoked potentials (Molfese 1977).

 

Measures of how much attention a neonate paid to left or

 

right ear stimuli (as indicated by "high amplitude non-

 

nutritive sucking") indicated that most infants responded

 

more to language sounds presented to their right ears (left

 

hemispheres) and to musical sounds presented to their left

 

ears (right hemispheres) (Entus 1977; Glanville, Best, and

 

Levenson 1977), although Vargha-Khadem and Corballis (1979)

 

were not able to replicate Entus' findings.  Best, Hoffman,

 

and Glanville (1982) found a right ear advantage for speech

 

in infants older than two months during tasks in which

 

infants had to remember and discriminate phonetic sounds and

 

musical timbres.  Infants younger than two months showed an

 

ear advantage only for musical notes, and that advantage was

 

for the left ear.  In older children and adult non-musicians,

 

damage to the left hemisphere usually impairs language

 

functions but tends to spare musical abilities, including

 

singing.  Damage to the right hemisphere, particularly the

 

right temporal lobe, tends to leave language functions

 

intact, but impairs musical abilities and the production and

 

comprehension of language tone and of emotion expressed

 

through language or other sounds (Joanette, Goulet, and

 

Hannequin 1990).

 

     Zatorre (1979) found a left ear advantage for the

 

discrimination of melodies versus speech in a dichotic (6)

 

listening task with both musicians and nonmusicians.  He

 

found cerebral-blood-flow evidence that right temporal lobe

 

neurons are particularly important in melodic and pitch

 

discriminations (Zatorre, Evans, and Meyer 1994).  Tramo and

 

Bharucha (1991), following the work of Gordon (1970), found

 

that the right hemisphere seems to process the perception of

 

harmonics (tested by the detection of complex relationships

 

among simultaneous musical sounds).  Damage to the right

 

temporal lobe impairs the ability to recognize timbre (7),

 

and time cues within tones that determine the recognition of

 

timbre (Samson and Zatorre 1993).  These authors suggest that

 

"the same acoustical cues involved in perception of musical

 

timbre may also serve as linguistic cues under certain

 

circumstances" (Ibid., 239).  There are now indications that

 

timbre and phonetic information are processed through some

 

common stage beyond peripheral acoustic processing.  Research

 

is underway to determine whether voice identification also

 

proceeds through this same timbre-phoneme nonperipheral stage

 

(Pitt 1995).

 

     In a critical review, Zatorre (1984) notes that right-

 

sided damage can produce deficits in tasks that process

 

patterns of pitch and timbre differences.  Adults with

 

partial or complete excisions of the right temporal lobe were

 

found to be significantly impaired in the perception of pitch

 

(Zatorre 1988).  Kester et al. (1991) found that musical

 

processing was most affected by right temporal lobectomy.  In

 

a review of the literature on the infant's perception of tone

 

sequences, or melodies, Trehub (1990) found that human

 

infants do not use local pitch strategies characteristic of

 

nonhuman species, that is, they do not depend on the

 

recognition of particular, or absolute, pitches to identify

 

tone sequences.  Rather, like human adults, they use global

 

and relational means to encode and retain contours of

 

melodies, with little attention to absolute pitch. (Although,

 

interestingly, Kessen, Levine, and Wendrich (1979) found that

 

infants paid very close attention to experimenters' singing

 

and could imitate pitch quite well.)  In other words, human

 

infants have the ability to recognize exact pitches, but the

 

exact key in which a melody is played makes little difference

 

for human recognition of melody, while animals depend on the

 

particular pitch in which their "song" is sung to recognize

 

it.  This seems to imply that even human infants are

 

extracting the abstract pattern of the sounds, rather than

 

using the sounds as signs, specific perceptual markers, of

 

events.
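
     What "global and relational" encoding means can be seen in a small
sketch (my own toy example, which assumes a melody is simply a list of
pitches numbered in semitones; it is not drawn from Trehub's studies):
representing a melody by its successive intervals, rather than by its
absolute notes, yields a representation that is identical in every key.

     def intervals(melody):
         # Relational encoding: successive pitch differences in semitones.
         return [b - a for a, b in zip(melody, melody[1:])]

     def transpose(melody, semitones):
         # Absolute encoding changes when the melody moves to a new key.
         return [note + semitones for note in melody]

     # A melody as MIDI note numbers (C4 = 60): C E G E C
     melody_in_c = [60, 64, 67, 64, 60]
     melody_in_f = transpose(melody_in_c, 5)   # same tune, a fourth higher

     print(intervals(melody_in_c))      # [4, 3, -3, -4]
     print(intervals(melody_in_f))      # [4, 3, -3, -4] -- same pattern
     print(melody_in_c == melody_in_f)  # False: absolute pitches differ

An animal relying on absolute pitch would treat the two versions as
different "songs"; a listener encoding the interval pattern treats them
as the same melody.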

 

     In reviewing the research on infants' perception of

 

music, Trehub (1987) suggests that infants have the skills

 

for analyzing complex auditory stimuli.  These skills may

 

correspond to musical universals, as indicated by infants'

 

preference for major triadic chord structures.

 

     The evidence indicates that human infants have the

 

ability to recognize and process music in a fairly complex

 

way, at a very early age.  Furthermore, music processing in

 

most infants and adults seems to occur primarily in the right

 

hemisphere (8).

 

     And infants, like adults, appear to find music

 

interesting: they tend to pay attention to it, they like to

 

engage in imitations of adult pitches, and they learn to sing

 

as soon as they learn to speak (Cook 1994).

 

 

 

 

 

           The Neuropsychological Data on Emotions

 

 

 

     How does the data on the neuropsychological processes

 

involved in music relate to the data on the

 

neuropsychological processes involved in emotions?  It is

 

well-established that for most people, right hemisphere

 

damage causes difficulties with the communication and

 

comprehension of emotion (Bear 1983; Ross 1984).  Apparently,

 

the right hemisphere mediates the processing of many types of

 

emotionally-laden information: visual, facial, gestural,

 

bodily, and auditory.

 

     The evidence suggests that the right hemisphere has a

 

special relationship with the emotional functions of the

 

human mind, specifically in being able to process and project

 

emotional meaning through perceptual information (Kolb and

 

Whishaw 1990).  For most people, the right hemisphere

 

performs integrative visual functions, such as grasping

 

visual gestalts and comprehending visual and architectural

 

wholes; the inability to recognize faces is sometimes the

 

consequence of right temporal lobe damage.  (Kolb and

 

Whishaw, 1990) Right hemisphere damage can often lead to the

 

inability to be aware of whole areas of space in relation to

 

oneself, called perceptual neglect.  (See A. Luria's The Man

 

With A Shattered World for an agonizing description of what

 

the world seems like when one's brain cannot perform these

 

visual and kinesthetic integrations.) Neglect of half of

 

perceived space, called hemi-neglect, is a frequent result of

 

extensive right parietal damage.  The right hemisphere is

 

fundamentally involved in comprehending the connotative

 

meanings of language, metaphors and nonliteral implications

 

of stories; and the right hemisphere seems to be involved in

 

the comprehension of meaning communicated through sound,

 

especially voice.  Oliver Sacks discusses patients with

 

"tonal agnosia,"

 

     For such patients, typically, the expressive qualities

     of voices disappear - their tone, their timbre, their

     feeling, their entire character - while words (and

     grammatical constructions) are perfectly understood.

     Such tonal agnosias (or 'aprosodias') are associated

     with disorders of the right temporal lobe, whereas

     aphasias go with disorders of the left temporal lobe

     (1987, 83).

 

He also describes aphasics (9) who are not able to grasp the

 

denotative meaning of words and yet are able to follow many

 

conversations by the emotional tone of the speakers.

 

     With the most sensitive patients, it was only with

     [grossly artificial mechanical speech from a

     computerised voice synthesizer] that one could be wholly

     sure of their aphasia (Ibid., 80-1).

 

The patients would use all kinds of extraverbal clues to

 

understand what another was saying to them.  He claimed that

 

a roomful of them laughed uproariously over a speech given by

 

Ronald Reagan because of the patent insincerity of it.

 

      Rate, amplitude, pitch, inflection, timbre, melody, and

 

stress contours of the voice are means by which emotion is

 

communicated (in nonhuman as well as human species), and the

 

right hemisphere is superior in the interpretation of these

 

features of voice (Joseph 1988).  Samson and Zatorre (1993)

 

found similar cortical areas responding to pitch and timbre

 

in humans and animals.  In dichotic listening tasks, Zurif

 

and Mendelsohn (1972) found a right ear advantage for

 

correctly matching meaningless, syntactically organized

 

sentences with meaningful ones by the way the sentence was

 

emotionally intoned.  The subjects could apparently match

 

such nonsense sentences as:  "Dey ovya ta ransch?" with "How

 

do you do?" by the intonation the speaker gave the sentence.

 

Heilman, Scholes, and Watson (1975) found that subjects with

 

right temporal-parietal lesions tended to be impaired at

 

judging the mood of a speaker.  Heilman et al. (1984) also

 

compared subjects with right temporal lobe damage to both

 

normals and aphasics (9) in discriminating the emotional

 

content of speech.  He presented all three types of subjects

 

with sentences wherein the verbal content of the speakers was

 

filtered out and only the emotional tone was left, and found

 

those with temporal lobe damage to be impaired in their

 

emotional discriminations.  In a similar study, Tompkins and

 

Flowers (1985) found that the tonal memory scores (how well

 

the subjects could remember specific tones) for right

 

brain-damaged subjects were lower than those of other

subjects, implying that right brain damage leads to a problem

with the perceptual encoding of sound, but not necessarily

 

with the comprehension of emotional meaning per se.

 

     The human voice conveys varied, complex, and subtle

 

meaning through timbre, pitch, stress contour, tempo, and so

 

forth and thereby communicates emotion.

 

     What is clear is that the rhythmic and the musical are

     not contingent additions to language....The "musical"

     aspect of language emphasizes the way that all

     communication has an irreducibly particular aspect which

      cannot be subtracted (Bowie 1990, 174).

 

 

Best, Hoffman, and Glanville found that the ability to

 

process timbre appears in neonates and very young infants,

 

apparently before the ability to process phonetic stimuli

 

(1982).

 

     Through the "music" in voice, we comprehend the feelings

 

of others and we communicate ours to them.  This is an

 

important ability for the well-being of the human infant, who

 

has not yet developed other human tools for communicating its

 

needs and comprehending the world around it - a world in

 

which the actions and feelings of its caretakers are of

 

immense importance to its survival.  Emotion is conveyed

 

through language in at least two ways: through the

 

specifically verbal content of what is said, and through the

 

"musical" elements in voice, which are processed by the right

 

hemisphere.  One of the characteristic features of

 

traditional poetry is the dense combination of the meaning of

 

words with the way they sound, which, when done well, results

 

in emotionally moving artworks (Enright 1989).  Mothers

 

throughout the world use nursery rhymes, a type of poetry, to

 

amuse and soothe infants and young children, that is, to

 

arouse emotions they find desirable in the children.  "Music

 

can articulate the 'unsayable', which is not representable by

 

concepts or verbal language" (Bowie 1990, 184).  "Men have not found the words for it

 

nor the deed nor the thought, but they have found the music" (Rand 1943, 544).

 

     Was nature being functionally logical and parsimonious

 

to combine, in the right hemisphere, those functions which

 

communicate emotion with those that comprehend emotion?

 

     As social animals, humans have many ways of

 

communicating and comprehending emotions: facial expression,

 

gesture, body language, and voice tone.  I propose that

 

music's biopsychological origins lie in the ability to

 

recognize and respond directly to the feelings of another

 

through tone of voice, an important ability for infant and

 

adult survival. (The tone of voice of an angry and menacing

 

person has a very different implication than that of a sweet

 

and kind person.)

 

     If inflection and nuance enhance the effect of spoken

     language, in music they create the meaning of the notes.

     Unlike words, notes and rests do not point to ideas

     beyond themselves; their meaning lies precisely in the

     quality of the sounds and silences, so that the exact

     renderings of the notes, the nuances, the inflection,

     the intensity and energy with which notes are performed

     become their musical meaning.

     (J. M. Lewers, quoted in Aiello 1994, 55)

 

     Furthermore, I propose that the sound literally triggers

 

those physiological processes which cause the corresponding

 

emotion "action programmes," "essentic forms," or whatever

 

one wishes to call these processes.  This would explain the

 

uniquely automatic quality in our response to music.

 

     I am proposing that the biopsychological basis of the

 

ability of sound to cause emotions in man originates in man's

 

ability to emotionally respond to the sounds of another's

 

voice.  Theoretically, this ability lies in the potential for

 

certain kinds of sounds to set off a series of neurological

 

processes resulting in emotions, which events are similar to

 

those occurring during the usual production of emotions.

 

As so many in the history of musical theory have conjectured,

 

music does result from language - but not language's abstract,

 

denotative qualities.

 

     However, I should posit that it is not the ontogeny of

 

language per se that caused the development of music in

 

humans.  Many nonhuman animals communicate emotion and

 

subsequently direct and orchestrate actions of their species

 

through voice tone, and there is considerable evidence that

 

humans do likewise, which argues that this ability arose

 

before the emergence of language. 

 

Returning to my earlier

 

discussion of motivation in the infant acquisition of

 

language, it seems more likely that the pleasures and

 

emotions communicated through voice (which motivate the

 

acquisition of language) are another biological application

 

of the ability of voice tone to emotionally affect us, rather

 

than an initial cause of emotion in voice.  Humans were

 

already set to be affected by voice tone when we acquired the

 

ability to speak.  Pleasure associated with vocalizing likely

 

developed into pleasure in language acquisition.

 

     However, music, especially modern Western music, has

 

gone far beyond the kinds of auditory perceptions and

 

responses involved in simple tone of voice alone.  The

 

ability to emotionally recognize and respond to tone of voice

 

was developed early on in the evolution of Homo sapiens, as

 

evidenced by the same ability in our closest animal

 

relatives, the great apes.  The history of music seems to

 

show that humans greatly expanded on the use of voice tone

 

through their ability to abstract.  It appears that men

 

created instruments, learned how to distill and extract the

 

essence of tones and their relationships, rearranged and

 

expanded the range, timbre, and rhythm of sounds used both by

 

voice and by instruments, and thereby created a new, artistic

 

means of expressing a huge range of emotions.

 

     The evidence found by Clynes and others indicates that

 

there is a special pattern of sound for each emotion or mood,

 

which pattern humans are able to recognize in various voices,

 

both human and instrumental.  Helmholtz noted that the major

 

keys are

 

 

     well suited for all frames of mind which are completely

     formed and clearly understood, for strong resolve, and

     for soft and gentle or even for sorrowing feelings, when

     the sorrow has passed into the condition of dreamy and

     yielding regret.  But it is quite unsuited for

     indistinct, obscure, unformed frames of mind, or for the

     expressing of the dismal, the dreary, the enigmatic, the

     mysterious, the rude...[and it is] precisely for these

     ...[that] we require the minor mode (1954 [1885], 302)

 

The implication of the evidence is that humans have learned

 

how to abstract the sound pattern evoking, for example,

 

triumph, and then re-present this pattern in its

 

essential form in a musical composition, giving the listener

 

an experience of the emotion of triumph rarely possible in

 

life.  Through abstraction, the emotion-provoking sounds have

 

been rendered essential and rearranged into new patterns and

 

combinations, thereby enabling humans to have an emotion-

 

evoking artistic experience far greater than that possible

 

from the sounds of the spoken voice alone.  Many theories of

 

music, to some extent, recognize that music makers take the

 

fundamental qualities of music and rearrange them to invent

 

new ways of feeling - see any number of essays in Philip

 

Alperson's book What is Music?

 

     In relation to this theory, it is noteworthy that only

 

the sounds of periodic vibrations can be integrated so as to

 

evoke emotion because the voice produces periodic vibrations

 

in its normal operation.  (Despite the best efforts of modern

 

musical theorists, all else is experienced as meaningless

 

noise.)  In the history of music theory, thinkers have placed

 

most of their emphasis on the relations and perceptions of

 

harmonies (Grout 1973; Lang 1941).  My proposal for the

 

biological basis of music concerns a system generally without

 

harmony - the human voice (there are some harmonic overtones

 

in any voice or instrument).  How do these factors relate to

 

one another?  Historically, music began as plainsong without

 

accompaniment and as simple melodies.

 

     The fact that music could achieve simultaneity, that it

     could have vertical as well as horizontal events, was a

     revolutionary discovery....Now music had a new kind of

     interest, the accidental or contrived vertical

      combination of two or more pitches (Aiello 1994, 44).

 

Although polyphony (10) was created some time during the

 

Middle Ages, apparently the conscious use of harmonic chords

 

was developed even later.

 

     Helmholtz mentions that

 

     A favourite assertion that "melody is resolved harmony,"

     on which musicians do not hesitate to form musical

     systems without staying to inquire how harmonies had

     either never been heard, or were, after hearing,

     repudiated.  According to our explanation, at least, the

     same physical peculiarities in the composition of

     musical tones, which determined consonances for tones

     struck simultaneously, would also determine melodic

     relations for tones struck in succession.  The former

     then would not be the reason for the latter, as the

     above phrase suggests, but both would have a common

     cause in the natural formation of musical tones (1954

     [1885], 289).

 

In other words, harmony and melody complement each other,

 

using the same mathematical relationships of tones and their

 

perception.  Harmony does this simultaneously; melody does

 

this over time.  However, harmony is not an equal partner in the creation of music,

 

because we can make music without harmony and because harmony does not make

 

music on its own:  music requires a sequence of sounds and silences through

 

time.  Harmony developed as man abstracted musical

 

qualities in sound, rearranged them, and used them

 

simultaneously.  It is likely that theoreticians have focused

 

on harmony in their analysis of music because complex

 

harmonies are a major part of modern Western music and

 

because melodies are more difficult to analyze due to the

 

element of time.  Given the historical development of music,

 

I believe the emphasis on harmony is an artifact of human

 

analytical ability.  Moreover, an harmonic chord on its own

 

is not music - it is always necessary to have a sequence of

 

tones to have music.
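
     A small worked example may make these shared "mathematical
relationships" concrete (the 440 Hz reference is a modern convention,
and the sketch only illustrates the ratios; it is not an analysis taken
from Helmholtz): the 3:2 and 2:1 ratios that make a fifth and an octave
consonant when two tones sound together are the same ratios that define
those intervals when the tones follow one another as melody.

     # Frequencies in Hz, taking A4 = 440 Hz as the reference pitch.
     A4 = 440.0
     E5 = A4 * 3 / 2      # perfect fifth above A4: the 3:2 ratio
     A5 = A4 * 2          # octave above A4: the 2:1 ratio

     # Sounded simultaneously, A4 and E5 form a consonant harmonic
     # interval; sounded in succession, the same pair is a melodic leap
     # of a fifth.  In both cases what matters is the ratio, not the
     # absolute frequency.
     for name, freq in [("A4", A4), ("E5", E5), ("A5", A5)]:
         print(f"{name}: {freq:6.1f} Hz   ratio to A4 = {freq / A4:.3f}")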

 

 

 

 

            Beyond Neuropsychology to Music as Art

 

     I have posited a biological/evolutionary origin of music, but I have not, as yet,

 

proposed a survival function for it.  Before I do that, I would like to address the wider

 

issue of the biological function of art  per se.  In her article "Art and Cognition," Rand

 

(1971) presented her theory on the cognitive foundations of art.

 

This theory is of particular interest to me, not only because

 

it is founded on and well-integrated with her revolutionary

 

philosophy of Objectivism, but because it is specifically

 

based on man's cognitive and motivational nature, on what she

 

called his "psycho-epistemological needs" (11), and thereby posits gives an answer to the

 

question of art’s biological roots.  Her hypothesis in no way addresses or accounts for my

 

original question: What is the evolutionary basis of the ability to respond to sound?  With

 

her hypothesis, the question remains unanswered.  But her theory

 

is worth addressing because she asked and attempted to answer

 

many of the fundamental questions about music's nature.

 

     Rand argued that art is a means of making

 

conceptual yet concrete the information of the senses, which,

 

thereby, makes that information more meaningful to us.

 

      The visual arts do not deal with the sensory field of

      awareness as such, but with the sensory field as

      perceived by a conceptual consciousness.

 

      The sensory-perceptual awareness of an adult does not

      consist of mere sense data (as it did in his infancy),

      but of automatized integrations that combine sense data

      with a vast context of conceptual knowledge.  The

      visual arts refine and direct the sensory elements of

      these integrations.  By means of selectivity, of

      emphasis and omission, these arts lead man's sight to

      the conceptual context intended by the artist.  They

      teach man to see more precisely and to find deeper

      meaning in the field of vision.  (Rand 1971, 47)

 

 

Painting makes conceptual the sense of sight, sculpture the

 

sense of sight and touch, dance the sense of body motion, or

 

kinesthesia, and music the sense of hearing.

 

     But Rand argued that music does not follow exactly the

 

same psycho-epistemological process as the other arts.

 

According to Rand, the art of music embodies man's sense of

 

life by abstracting how man uses his mind.

 

 

      The other arts create a physical object,...and the

      psycho-epistemological process goes from the perception

      of the object to the conceptual grasp of its meaning,

      to an appraisal in terms of one's basic values, to a

      consequent emotion.  The pattern is:  from perception -

      to conceptual understanding - to appraisal - to

      emotion.

 

      The pattern of the process involved in music is: from

      perception - to emotion - to appraisal - to conceptual

      understanding.

 

      Music is experienced as if it had the power to reach

      man's emotions directly (Rand 1971, 50)

 

In other words, when we listen to music, it can cause us to

 

 experience feelings which we subsequently appraise.  Whether

 

we like or dislike the feelings caused by the music (or have

 

some complex reaction to it) helps determine what kinds of

 

music we individually favor.  An interesting facet of the

 

musical experience is the fact that many unrelated images

 

tend to come to mind when we listen to music, imagery which

 

seems to correspond to the emotions.  It is as if our minds

 

find it illogical to have feelings with no existential

 

objects to evoke them, so our minds provide images of an

 

appropriate nature.  This process seems reminiscent of others, such as the way in which

 

we “see” faces in myriad visual images, or think we hear voices in the sound of the wind. 

 

The common thread between them is the mind’s automatic attempt to make sense of the

 

world, both external and internal.

 

     According to Rand, how might sound evoke these emotions?

 

     If man experiences an emotion without existential

     object, its only other possible object is the state or

     actions of his own consciousness.  What is the mental

     action involved in the perception of music?  (I am not

     referring to the emotional reaction, which is the

     consequence, but to the process of perception.)...The

     automatic processes of sensory integration are completed

     in his infancy and closed to an adult.

 

     The single exception is in the field of sounds produced

     by periodic vibrations, i.e., music...musical tones

     heard in a certain kind of succession produce a

     different result - the human ear and brain integrate them

     into a new cognitive experience, into what may be called

     an auditory entity; a melody.  The integration is a

     physiological process; it is performed unconsciously and

     automatically.  Man is aware of the process only by

     means of its results.

 

     Helmholtz has demonstrated that the essence of musical

     perception is mathematical; the consonance or dissonance

     of harmonies depends on the ratios of the frequencies of

     their tones...[There is] the possibility that the same

     principles apply to the process of hearing and

     integrating a succession of musical tones, i.e., a

     melody -- and that the psycho-epistemological meaning of

     a given composition lies in the kind of work it demands

     of a listener's ear and brain (Rand 1971, 57-8)

 

 

     Music gives man's consciousness the same experience as

     the other arts:  a concretization of his sense of life.

     But the abstraction being concretized is primarily

     epistemological, rather than metaphysical; the

     abstraction is man's consciousness, i.e., his method of

     cognitive functioning, which he experiences in the

     concrete form of hearing a specific piece of music.  A

     man's acceptance or rejection of that music depends on

     whether it calls upon or clashes with, confirms or

     contradicts, his mind's way of working.  The

     metaphysical aspect of the experience is the sense of a

     world which he is able to grasp, to which his mind's

     working is appropriate....A man who has an active

     mind...will feel a mixture of boredom and resentment

     when he hears a series of random bits with which his

     mind can do nothing.  He will feel anger, revulsion and

     rebellion against the process of hearing jumbled musical

     sounds; he will experience it as an attempt to destroy

     the integrating capacity of his mind. (Rand 1971, 58)

 

In other words, she proposed that the arrangement of sounds

 

in music causes one's brain to perform a sensory/perceptual

 

integration similar to those performed during the solution of

 

an existential problem, and that one emotionally reacts to

 

the kind of cognitive work which the music makes one perform

 

through the integration.

 

     In line with the assumptions of musical research, she

 

notes that only sounds caused by periodic vibrations can be

 

integrated by the human brain.  We can analyze the sounds of

 

music as follows: simultaneous sounds into harmonies,

 

successions of sounds into melodies, or what Rand called

 

"auditory entities" and percussions into rhythms.

 

     According to Rand's hypothesis, musical sounds are

 

physiologically integrated by the brain and our emotions are

 

in response to the type of integration performed.  She

 

proposed that the musical integration parallels perceptual

 

integration in nonmusical cognitive activities, and that we

 

respond emotionally to the type of integrating work music

 

causes us to perform.  Her hypothesis assumes no direct

 

physiological induction of emotion, but proposes that the

 

emotion is a response to the kind of cognitive work caused by

 

the integration of the sounds.

 

     Is this view consonant with the scientific facts?

 

Rand's hypothesis supposes that a perceptual integration

 

results in emotions such as joy, delight, triumph, which are

 

normally generated in humans by a complex conceptual

 

cognitive activity.  I am not aware of any other purely

 

perceptual integrations in other sense modalities which

 

result in such emotions (although there may be some visual

 

stimuli, such as a beautiful sunset or graceful human

 

proportions, for which we have in-built pleasurable

 

responses).  In this respect, sound seems to be unique.

 

     Idiot savants and some individuals with IQs in the

teens respond fully to music, as well as

 

     A man whom childhood meningitis had left mentally

     retarded as well as behaviorally and emotionally

     crippled, but who...was so familiar with... all the Bach

     cantatas, as well as a staggering amount of other

     music...evincing a full understanding and appreciation

     of these highly intellectual scores.  Clearly, whatever

     had happened to the rest of his brain, his musical

     intelligence remained a separate - and

     unimpaired - function (Stiller 1987, 13).

 

Under Rand's theory, is this possible?  Such cognitively

 

impaired individuals would not normally perform many complex

 

conceptual mental integrations, nor experience the feelings

 

accompanying those integrations.  One might infer that these

 

mental cripples, unable to self-generate cognitive activities

 

which would allow them the pleasures of deep feelings, are

 

afforded the life-giving experience of such feelings through

 

music (hence, some of them completely devote themselves to

 

music).  That is, their cognitions are not complex enough to produce many profound and

 

pleasurable feelings on their own, but they are able to pleasurably shape their emotional

 

world with music.  Presumably, if their perceptual abilities are

 

intact, their brains could still perform the integrations

 

necessary under Rand's hypothesis.  But how could their

 

psycho-epistemological sense of life respond to the

 

activities, in that they are not capable of much in the way

 

of conceptual activity?

 

     However, consider the following:

 

     If a given process of musical integration taking place

     in a man's brain resembles the cognitive processes that

     produce and/or accompany a certain emotional state, he

     will recognize it, in effect, physiologically, then

     intellectually.  Whether he will accept that particular

     emotional state, and experience it fully, depends on his

     sense-of-life evaluation of its significance. (Rand

     1971, 61)

 

Here, she seemed to say that the processing and integrating

 

of the sounds are very similar to the physiological processes

 

involved in the existential evocations of emotions. In other

 

words, her statement seems to imply that she thinks the music

 

physiologically induces the emotion, which is subsequently

 

evaluated and accepted or rejected.

 

     It seems to me that Rand was not perfectly clear as to

 

the exact nature of music's production of emotions.  On the

 

one hand, she seemed to say that the emotions are a reaction

 

to the kind of cognitive work the music causes us to perform.

 

On the other hand, she seemed to say that the music

 

physiologically induces the emotion.

 

     Parsimony inclines me to take this analysis one step

 

further and propose that musical sounds induce the

 

neurological processes that cause the emotions; then we react

 

to the feeling of those emotions.  Instead of proposing, like

 

Rand, that the essence of music is epistemological - we react

 

to the kind of cognitive work music causes - I would like to

 

maintain that the essence is metaphysical, like the other

 

arts - we react to the way the music makes us feel.  That

 

is, by neurologically inducing emotions, music shapes our

 

feelings about the world.  If painting is the concretization

 

of sight, music is the concretization of feeling.

 

     Rand recognizes this to some extent: "How can sounds

 

reach man's emotions directly in a manner that seems to by-

 

pass his intellect?" (1971, 54)  This question seems to imply

 

that she thinks the musical sensory integration affects

 

feelings directly.

 

     It is relevant to the issue that there are direct

 

sensory projections from the ear to the amygdala, a nucleus of

 

cells at the base of the temporal lobe (where so much music

 

processing seems to occur).  The amygdala is part of the

 

limbic system, considered essential to the production and

 

processing of emotion.  Although part of the temporal lobe,

 

the amygdala is not considered to be part of the cortical

 

sensory analysis systems that process the objective

 

properties of an experience.  Instead the amygdala is

 

believed to process our feeling or subjective sense of an

 

experience (Kolb and Whishaw 1990) - that is, how we feel

 

about an experience, such as the warm, cozy feelings we might

 

get at the smell of turkey and apple pie.  It seems possible

 

that the sounds of music could be directly processed by the

 

amygdala, resulting directly in emotion, without going

 

through the usual "objective-properties" processing of the

 

other cortical areas.  This might be how they "reach man's

 

emotions directly in a manner that seems to by-pass his

 

intellect?" ( Rand 1971,)

 

     However, we might find a resolution to the seeming

 

duality of Rand's musical hypothesis by further reflecting on

 

music's nature.  I believe the key lies in the complexity of

 

music.  There are large elements of cognitive understanding

 

and processing involved in more complex music, e.g., there is

 

a definite process involved in learning to listen to

 

classical music, or any kind for that matter.

 

     Musicians are much more sensitive to and analytical

 

about music, and, interestingly, apparently use different

 

areas of their brains than do nonmusicians when processing

 

music.  Musicians do quite a bit of processing in the left

 

hemisphere, in areas that apparently process in a

 

logical/analytical manner.  Some music triggers some emotion

 

in almost everyone, although I think that perhaps mood, as

 

suggested by Giomo, would be a better term to describe much

 

of the psychophysical state that music induces.  We can

 

listen to music, know what emotion it represents, but not

 

want or like that emotion.  In this way, Rand seems right

 

that music causes our minds to go through the cognitive steps

 

which result in various emotions.  However, in line with the

 

arguments made by many, not everyone can follow the cognitive

 

steps necessary in listening to all music: there is a certain

 

amount of learning involved in the appreciation of music and

 

it seems to be related, for example, to learning the forms,

 

context, and style of the music of a culture.  Beyond that,

 

there is learning involved in absorbing and responding to

 

music of different genres: jazz, blues, Celtic folk, African

 

folk, classical.  One gets to understand the ways and the

 

patterns of each genre such that one's mind can better follow

 

the musical thoughts and respond to them with feeling

 

(Aiello 1994).

 

     Music can take on a cognitive life entirely its own,

 

apart from and different from the kinds of thoughts and

 

feelings resulting from life or the other arts.  As the

 

Greeks thought, it can teach us new things to think and feel.

 

Certainly, the kind of utterly intense emotion felt through

 

exalted music is rare, if possible at all, through other

 

events of life.  Listening to contemporary music such as the

 

Drovers (Celtic style), I realized that it evoked all kinds of wonderful and unusual bodily feelings, which had no ordinary emotional names, although they were similar to other

 

emotions.  This might explain why we like to listen to the

 

same piece of music over and over.  "Wittgenstein's

 

paradox: the puzzle is that when we are familiar with a piece

 

of music, there can be no more surprises.  Hence, if

 

'expectancy violation' is aesthetically important, a piece

 

would lose this quality as it becomes familiar"

 

(Bharucha 1994, 215).  We do not particularly like to think

 

about the same things over and over, but we generally like to

 

feel certain ways over and over.  We listen to the same piece

 

over and over because we enjoy the mood, the frame of mind,

 

into which it puts us.  Of what else does the end of life consist, but good experience, in

 

whatever form one can find it?  Thinking is the means by which we maintain and

 

advance life, but feeling happy is an end in itself.

 

     To resolve Rand's duality:  the basis of music is the

 

neurological induction of mood through sound (made

 

possible, in my view, by our ability to respond to the

 

emotional meaning of voice);  however, humans have taken that

 

basic ability and elaborated it greatly, abstracting and

 

rearranging sound in many, many different ways in all the

 

different kinds of music.  Responding to more complex music

 

requires more elaborate, specifically musical understanding

 

of the sounds and their interrelationships.  This

 

understanding requires learning on the part of the listener

 

and complex cognitive work - to which the listener responds

 

emotionally.

 

     Hence, there are two emotional levels on which we respond to music, corresponding to the two aspects of Rand's hypothesis: the basic neurological level and the more complex cognitive level.

 

 

 

                       Future Research

 

     My hypothesis on the evolutionary basis of music in our

 

ability to respond to emotion in tone of voice would require a vast array of experiments to confirm, including further

 

inquiry into the neurological structures which process voice

 

tone and music.  Presumably, if the hypothesis is true, a

 

significant overlap would be found in the areas that

 

process voice tone and the areas that process music.

 

Particular care would be needed to discover which neocortical structures are involved in these functions, including an examination of the associative areas such as the temporal lobe and of the limbic structures.  In addition,

 

subcortical areas such as the hypothalamus and brain stem,

 

presumed to be involved in emotional processing

 

(Siminov 1986), would need to be examined as well.

 

     A technique such as Positron Emission Tomography (PET)

 

(12) might be useful in such an inquiry.  Experiments

 

indicating that this overlap exists in young infants would suggest that the ability is inborn rather than learned.

 

Care would need to be taken in arranging several experimental

 

conditions for comparison.  Techniques such as the one

 

described earlier in this essay, wherein the verbal content

 

was filtered out of sentences, would be useful.  Comparisons

 

of the response to (1) voice with no verbal content or music, (2) music with no voice, (3) voice with music, with and without verbal content, and (4) non-emotionally meaningful sounds made without voice would be important (see the sketch below).
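
     A minimal sketch, in Python, of how these comparisons might be organized; the region names and activation sets are invented placeholders, not data, and real values would come from imaging of the kind discussed below.

# Sketch of the four proposed stimulus conditions and of the comparison
# the experiments would make.  Region names and activations are
# hypothetical placeholders.
conditions = [
    "voice, no verbal content, no music",   # (1)
    "music, no voice",                      # (2)
    "voice with music",                     # (3)
    "non-emotional sounds, no voice",       # (4) control
]

# For each condition, the set of brain regions whose measured activity
# would exceed a resting baseline (values invented for illustration).
activations = {
    "voice, no verbal content, no music": {"right temporal", "amygdala"},
    "music, no voice": {"right temporal", "amygdala", "left temporal"},
    "voice with music": {"right temporal", "amygdala", "left temporal"},
    "non-emotional sounds, no voice": {"primary auditory"},
}

def overlap(a, b):
    """Regions active in both conditions."""
    return activations[a] & activations[b]

# If the voice-tone hypothesis is right, the overlap between (1) and (2)
# should be substantial, while the overlap with the control (4) is not.
print(overlap(conditions[0], conditions[1]))
print(overlap(conditions[1], conditions[3]))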

 

     Also, it might be found that voice with no music, voice

 

with music, and music with no voice are each processed in a

 

different set of areas.  Alternatively, it is possible that

 

no subcortical emotional effects would be found from voice or

 

music.  Or, perhaps, the processing of the voice and/or the

 

music would be found to be spread over both hemispheres of

 

the brain in a way which did not become evident in the evoked

 

potentials.  Some of the brain damage studies found that

 

right hemisphere damage did not universally cause amusia or

 

failure to comprehend or express emotional tone, and that

 

some subjects recovered their abilities to express or grasp

 

emotion through language.  Furthermore, it is difficult to

 

know how varying individual brain organization might express

 

itself in the processing of these tasks.

 

     Interesting and observable differences might be found

 

across languages or language groups.  The relation, if any,

 

of a language to its folk music would be fascinating (13).

 

     Here I'd like to recall Jackendorff's comments.  He

 

remarked on the ability of music to make us feel like moving, and noted that specific kinds of music seem to invite specific kinds of movement.

 

     Ultimately, if we learn enough to specify exactly the relationships between the

 

elements of music and what feelings are evoked, we will be able to decipher music as

 

“the language of feeling.”  I look forward to the research which will resolve these

 

questions on the biopsychology of music.

 

 

Again and Again

 

                        Music defies.

 

                    Rachmaninoff's sighs,

                      Haydn's Surprise,

                    Joplin's glad cries --

                      Make poetry pale.

 

                         Words fail.     

 

                        --John Enright


 

                            NOTES

 

1.  "An emotion is the psychosomatic form in which man

    experiences his estimate of the beneficial or harmful

    relationship of some aspect of reality to himself."

    (Branden 1966, 64). This definition is echoed in Carroll

    Izard's work Human Emotions (1977) "A complete definition

    of emotion must take into account all... of these aspects

    or components: (a) the experience or conscious feeling of

    emotion, (b) the processes that occur in the brain and

    nervous system, and (c) the observable expressive

    patterns of emotion, particularly those on the

    face...scientists do not agree on precisely how an

    emotion comes about.  Some maintain that emotion is a

    joint function of a physiologically arousing situation

    and the person's evaluation or appraisal of the

    situation" (1977, 4).

 

 

2. "Prosody" is pitch, change of pitch, and duration of

intonations and rests in speech.

 

3. "Pitch - 23. Acoustics. the apparent predominant frequenc

sounded by an acoustical source."  (Random House Dictionary

of the English Language, New York : Random House Publishing

Co. , 1968)

 

4.  The activities are "music-like" because they employ

sequences of sounds made by periodic vibrations. However,

because of the cognitive levels of the animals involved, the

"songs" are not abstracted, arrayed and integrated into an

artwork and thus are not music.  It is even likely that the

animals experience their "songs" as integrated perceptual

experiences, which communicate valuable information to them,

or trigger a series of valuable actions in them.  Because our

physiology is so different from that of birds and cetaceans,

we may not experience the "songs" as perceptually integrated

units, but the respective animals might.  Regardless of

whether the "songs" are perceptually integrated or not to the

birds, dolphins or whales involved, the "songs" are still not

artworks, because they are not conceptually organized

(Nottebohm 1989).  Likewise, animals usually seem indifferent

to human music.  There are at least two reasons for this:

their physiologies are different, thus they do not hear and

perceptually integrate sound the same way humans do; and they

do not have the power to abstract patterns from percepts the

way humans do.  Trehub (1987) found that, unlike animals,

even human infants process music by relational means and do

not rely on absolute pitch the way animals do.

 

5. In brain research, investigators have found evidence for

the same general types of brain processes in the same areas

for 95% of the subjects.  I am reporting the kinds of

functional asymmetries which have been discovered for those

95%.  Thus, when I note that "language functions are in the

left hemisphere and musical tone recognition in the right," I

am referring to this 95% of the population.

 

 

6.  In a dichotic listening task, the subject is presented with two different stimuli, one to each ear, simultaneously.  Whichever stimulus the subject tends to notice indicates that the ear to which it was presented has an advantage for that kind of stimulus.

 

7.  "Timbre - 1. Acoustics, Phonet.  the characteristic

quality of a sound, independent of pitch and loudness but

dependent on the relative strengths of the components of

different fequencies, determined by resonance.  2. Music.

the characteristic quality of sound produced by a particular

instrument or voice; one color."  (Random House Dictionary of

the English Language, New York : Random House Publishing Co.,

1968)
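
Since this definition turns on the "relative strengths of the components of different frequencies," a small sketch may help; it uses Python with NumPy, and the harmonic weightings are invented for illustration.  Two tones share one fundamental frequency, and so one pitch, but differ in the relative strengths of their harmonics, and so in timbre.

import numpy as np

SAMPLE_RATE = 44100     # samples per second
F0 = 220.0              # shared fundamental: both tones have the same pitch
t = np.linspace(0.0, 1.0, SAMPLE_RATE, endpoint=False)

def tone(harmonic_weights):
    """Sum of harmonics of F0; the weights determine the timbre."""
    signal = sum(w * np.sin(2 * np.pi * F0 * (k + 1) * t)
                 for k, w in enumerate(harmonic_weights))
    return signal / np.max(np.abs(signal))     # equalize peak loudness

# Invented weightings: same pitch, different relative component
# strengths, hence different timbres.
tone_a = tone([1.0, 0.2, 0.05])              # energy mostly in the fundamental
tone_b = tone([1.0, 0.0, 0.6, 0.0, 0.4])     # odd harmonics emphasized

print(tone_a[:3], tone_b[:3])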

 

8.  There is evidence that musicians in particular do what

appears to be more logico-analytical processing of music in

the left hemisphere (Bever and Chiarello 1974).  Messerli,

Pegna, and Sordet (1995) found musicians superior in

identifying melody with their right ear.  Schlaug and

Steinmetz found that professional musicians, especially those

who have perfect pitch, have a far larger planum temporale on the left side (Nowak 1995).

 

9.  Aphasia is a condition in which a person has difficulty producing and/or comprehending language due to neurological damage or disease.

 

10. Polyphony is a type of music where multiple voices sing

independent melodies.  Often, the melodies selected do

harmonize beautifully, but polyphony is not considered

harmonic in the usual sense, because it does not use

harmonic chords in its composition, but relies on the

incidental harmonization of the tones of the multiple

melodies into chords.

 

11.  "Psycho-epistemology is the study of man's cognitive

processes from the aspect of the interaction between the

conscious mind and the automatic functions of the

subconscious." (Rand 1971, 20)

 

12. Positron Emission Tomography is a technique which

measures the rate of glucose metabolism in neurological

structures during tasks.  The brain uses a tremendous amount

of glucose whenever it works.  It is inferred that brain

structures using the most glucose during a given task are the

ones performing the neurological processes necessary for that

task.
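
The inference can be illustrated with a short sketch in Python; the readings and region names below are entirely invented, standing in for what a PET scan would actually measure.

# Hypothetical glucose-metabolism readings (arbitrary units) per brain
# region, at rest and during a music-listening task.
rest = {"left temporal": 10.0, "right temporal": 10.5,
        "amygdala": 6.0, "occipital": 9.0}
task = {"left temporal": 11.0, "right temporal": 14.0,
        "amygdala": 8.5, "occipital": 9.1}

# Regions showing the largest task-minus-rest increase are presumed to be
# performing the processing that the task requires.
increase = {region: task[region] - rest[region] for region in rest}
for region, delta in sorted(increase.items(), key=lambda kv: kv[1],
                            reverse=True):
    print(f"{region:15s} +{delta:.1f}")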

 

13. My thanks to Mr. Peter Saint-Andre for pointing out these

possibilities.

 

 

 

 

                          REFERENCES

 

 

Aiello, R. editor, 1994.  Musical Perceptions.  New York :

Oxford University Press.

 

Aiello, R. 1994.  Music and Language:  Parallels and

Contrasts.  In Aiello 1994.

 

Alperson, P. editor, 1987.  What is Music?  University Park :

Pennsylvania University Press.

 

Bear, D.  M.  1983.  Hemispheric Specialization and the

Neurology of Emotion.  Archives of Neurology 40: 195-202.

 

Berenson, F. 1994.  Representation and Music.  The British

Journal of Aesthetics 34(1): 60-8.

 

Bernstein, L.  1976.  The Unanswered Question:  Six Talks at

Harvard.  Cambridge , MA : Harvard University Press.

 

Best, C., H. Hoffman, and B. Glanville 1982.  Development of

Infant Ear Asymmetries for Speech and Music. Perception and

Psychophysics 31: 71-85.

 

Bever, T. and R. Chiarello 1974.  Cerebral Dominance in

Musicians and Nonmusicians.  Science 185: 537-39.

 

Bharucha, J. 1994.  Tonality and Expectation.  In Aiello

1994.

 

Bowie, A. 1990.  Aesthetics and Subjectivity:  From Kant to

Nietzsche.  Manchester :  Manchester University Press.

 

Branden, N. 1969.  The Psychology of Self-Esteem.  Los

Angeles:  Nash Publishing.

 

 

Clynes, M. 1974. The Biological Basis for Sharing Emotion:

The Pure Pulse of Musical Genius.  Psychology Today 8(2): 51-

5.

 

Clynes, M. 1986. Music Beyond the Score.  Communication and

Cognition 19: 169-194.

 

Cook, Nicholas 1994.  Perception.  In Aiello 1994.

 

 

Deutsch, D.  1992.  Paradoxes of Musical Pitch.  Scientific

American, August:  88-95.

 

Enright, J. 1989.  What is Poetry?  Objectively Speaking 2:

12-5.

 

Entus, A. 1977. Hemispheric Asymmetry in Processing of

Dichotically Presented Speech and Nonspeech Stimuli in

Infants.  In Gruber and Segalowitz 1977.

 

Ekman, P. 1977.  Biological and Cultural Contributions to

Body and Facial Movement.  In The Anthropology of the Body,

J. Blacking editor, London: Academic Press.

 

Glanville, B., C. Best, and R. Levenson 1977. A Cardiac

Measure of Cerebral Asymmetries in Infant Auditory

Perception.  Developmental Psychology 13: 54-9.

 

Giomo, C. 1993.  Children's Sensitivity to Mood in Music.

Psychology of Music 21: 141-62.

 

Gordon, H. 1970.  Hemispheric Asymmetries in the Perception

of Musical Chords.  Cortex 6: 387-98.

 

Grout, D. 1973.  A History of Western Music.  New York: W.W.

Norton.

 

Gruber, F. and S. Segalowitz editors, 1977. Language

Development and Neurological Theory.  New York: Academic

Press.

 

Heilman, K., M. Scholes, and R. Watson, 1975. Auditory

Affective Agnosia. Journal of Neurology, Neurosurgery and

Psychiatry 38: 69-72.

 

Heilman, K., D. Bowers, L. Speedie, and H. Coslett, 1984.

Comprehension of Affective and Unaffective Prosody.

Neurology 34: 917-21.

 

Helmholtz, H. 1954 [1885].  On the Sensations of Tone.  New

York: Dover Books.

 

Hevner, K. 1935.  The Affective Character of the Major and

Minor Modes in Music.  American Journal of Psychology 47:

103-18.

 

Hevner, K. 1936.  Experimental Studies of the Elements of

Expression in Music.  American Journal of Psychology 48: 246-

68.

 

Izard, C. 1971.  The Face of Emotion.  New York:  Appleton

Century Crofts.

 

Izard, C. 1977.  Human Emotions.  New York: Plenum Press.

 

Jackendorff, R. 1987.  Consciousness and the Computational

Mind.  Cambridge: MIT Press.

 

Joanette, Y., P. Goulet, and D. Hannequin 1990. The Right

Hemisphere and Verbal Communication.  New York: Springer-

Verlag.

 

Joseph R. 1988.  The Right Cerebral Hemisphere: Emotion,

Music, Visual-Spatial Skills, Body-Image, Dreams, and

Awareness.  Journal of Clinical Psychology 44: 630-73.

 

Kastner, M. and R. Crowder 1990.  Perception of the

Major/Minor Distinction: IV.  Emotional Connotations in Young

Children.  Music Perception 8:189-202.

 

Kessen, W., J. Levine, and K. Wendrich 1979.  The Imitation

of Pitch in Infants.  Infant Behavior and Development 2: 93-

9.

 

Kester, D., A. Saykin, M. Sperling, M. O'Connor, L. Robinson, and R. Gur 1991.  Acute Effect of Anterior Temporal Lobectomy on Musical Processing.  Neuropsychologia 29(7): 703-8.

 

Kolb, B. and I. Whishaw 1990.  Human Neuropsychology.  New

York: W. H. Freeman and Company.

 

Konecni, V. 1982.  Social Interaction and Musical Preference.

In The Psychology of Music, D. Deutsch editor.  San Diego:

Academic Press.

 

Lang, P. 1941. Music in Western Civilization.  New York: W.W.

Norton.

 

Langer, S. 1957.  Philosophy in a New Key.  Cambridge, MA:

Harvard University Press.

 

Messerli, P., A. Pegna, and N. Sordet 1995.  Hemispheric

Dominance for Melody Recognition in Musicians and Non-

Musicians.  Neuropsychologia 33(4): 395-405.

 

Meyer, L. 1994.  Emotion and Meaning in Music.  In Aiello

1994.

 

Molfese, D. 1977. Infant Cerebral Asymmetry.  In Gruber and

Segalowitz 1977.

 

Nietzsche, F. 1980.  Sämtliche Werke: Kritische Studienausgabe.  Munich.  SW 7, p. 364.

 

Nottebohm, F. 1989.  From Bird Song to Neurogenesis.

Scientific American.  2: 74-9.

 

Nowak, Rachel. 1995.  Brain Center Linked to Perfect Pitch.

Science 267: 616.

 

Pitt, M. 1995.  Evidence For A Central Representation of

Instrument Timbre.  Perception & Psychophysics 57: 43-55.

 

Rand, A.  1943.  The Fountainhead.  Indianapolis:  Bobbs-Merrill.

Rand, A. 1971.  The Romantic Manifesto. New York: Signet.

 

Roederer, J. 1984. The Search for the Survival Value of

Music.  Music Perception 1: 350-56.

 

Ross, E. D.  1984.  Right Hemisphere's Role in Language,

Affective Behavior and Emotion.  Trends in Neurosciences 7:

342-6.

 

Sacks, O. 1987.  The Man Who Mistook His Wife For A Hat.  New

York:  Harper and Row.

 

Samson, S. and R. Zatorre 1993.  Contribution of the Right

Temporal Lobe to Musical Timbre Discrimination.

Neuropsychologia 32(2): 231-40.

 

Schenker, H. 1935.  Der Freie Satz.  Vienna: Universal Edition.

 

Shapiro, L. P. and H. N. Nagel 1995.  Lexical Properties,

Prosody, and Syntax:  Implications for Normal and Disordered

Language.  Brain and Language 50: 240-57.

 

Siminov, P. 1986. The Emotional Brain: Physiology,

Neuroanatomy, Psychology and Emotion.  New York: Plenum

Press.

 

Sloboda, J. 1985. The Musical Mind:  The Cognitive Psychology

of Music.  Oxford: Clarendon Press.

 

Sloboda, J. 1991.  Music Structure and Emotional Response:

Some Empirical Findings.  Psychology of Music 19: 110-120.

 

Stiller, A. 1987.  Toward a Biology of Music.  OPUS (Aug):

12-15.

 

Tomkins, S. 1962.  Affect, Imagery and Consciousness.  New

York: Springer.

 

Tramo, M.J. and J. J. Bharucha, 1991.  Musical Priming By the

Right Hemisphere Post-Callostomy.  Neuropsychologia 29: 313-

25.

 

Trehub, S. 1987.  Infants' Perception of Musical Patterns.

Perception and Psychophysics 41: 635-41.

 

Trehub, S. 1990.  Human Infants' Perception of Auditory

Patterns.  International Journal of Comparative Psychology

4: 91-110.

Vargha-Khadem, F. and M. Corballis 1979.  Cerebral Asymmetry

in Infants.  Brain and Language 8: 1-9.

 

Walker, S. 1983. Animal Thought.  London: Routledge & Kegan

Paul.

 

Warren, R., C. Obusek, and R. Farmer 1969.  Auditory

Sequence:  Confusion of Patterns Other Than Speech or Music.

Science 164: 586-7.

 

West, M. L. 1992.  Ancient Greek Music.  Oxford:  Clarendon

Press.

 

Zatorre, R. 1979. Recognition of Dichotic Melodies By

Musicians and Nonmusicians.  Neuropsychologia 17: 607-17.

 

Zatorre, R. 1984. Musical Perception and Cerebral Function: A

Critical Review.  Music Perception 2: 196-221.

 

Zatorre, R. 1988. Pitch Perception of Complex Tones and Human

Temporal-Lobe Function.  Journal of the Acoustical Society of

America 84: 566-572.

 

Zatorre, R., A. Evans., and E. Meyer 1994.  Neural Mechanisms

Underlying Melodic Perception and Memory for Pitch.

The Journal of Neuroscience 14(4): 1908-19.

 

Zurif, E. and M. Mendelsohn 1972.  Hemispheric Specialization

for the Perception of Speech Sounds: The Influence of

Intonation and Structure.  Perception and Psychophysics 11:

329-32.



