Non-Segmental Phonology

  • David Crystal
Part of the Disorders of Human Communication book series (DISORDERS, volume 3)


The study of non-segmental contrastivity in language, viewed as a coherent field of study within phonology, has a much shorter academic history than segmental phonological analysis, and consequently has so far received only limited clinical application. The field is not unfamiliar, as is clear when reference is made to such notions as “prosody”, “intonation”, “stress”, “rhythm” and “tone of voice”, all of which would be subsumed under this heading. What is novel is the integration of all of these effects within a single framework, and the systematic analysis of the relationship between these effects and other levels of linguistic analysis, especially grammar and semantics¹. The importance and complexity of this field of study has clearly emerged in the process, and its neglect in the routine investigation of speech and hearing disability is all the more noticeable and regrettable as a result.






  1.
    For a historical review of the field, see Crystal (1969: Ch. 2). This book, along with its sequel (1975), provides the detailed account of the theoretical frame of reference used in this chapter. The sections below dealing with language acquisition are based on two further papers (1978, 1979b).
  2.
    As in speech pathology, e.g. Travis (1957: Ch. 22), Greene (1964). For the phonetic theory underlying the distinction, see Catford (1977). It should be noted that the notion of phonation is also applicable to pitch and loudness features, and is thus less useful for present purposes, where a distinction between these and timbre effects is being maintained.
  3.
    For example, Fries (1964), Crystal (1969: Ch. 1). For speech-act, see p. 201.
  5.
    See Quirk et al. (1972: Ch. 14), Halliday (1967–1968).
  6.
    For children, see du Preez (1974); adults, Cutler (1976), Leonard (1973); disability, Goodglass, Fodor, and Schulhoff (1967), Stark, Poppen, and May (1967).
  7.
    For example, Sachs and Devin (1976) report the use of higher pitch and wider intonation patterns when 3- to 5-year-old children talk to a baby or doll, or role-play a baby. For the characteristics of intonation in adult speech to children, see Blount and Padgug (1977), Garnica (1977), Ferguson (1977).
  8.
    For example, the “basic cry” pattern, underlying hunger, pain and similar states, described in Wolff (1969: 82). For a recent phonetic description, see Stark, Rose and McLagen (1975).
  9.
    See Bruner (1975: 10), Dore (1975). Weir (1962) also talks about the splitting up of utterances into “sentence-like chunks” at this stage.
  10.
    The difficulty with all such approaches is empirical verification of the notion of “intention”. As has been argued in other areas of child language, the fact that parents interpret their children’s prosody systematically is no evidence for ascribing their belief patterns to the child’s intuition. At best, we can argue, as does Menn (1976: 192), that “consideration of adult interpretation of intonation contour on vocalisations does give us information about what the child conveys, if not what he/she intends”. See further Crystal (1979: 41).
  11.
    Cf. Dore, Franklin, Miller and Ramer (1976: 26), Bloom (1973).
  12.
    For example, Keenan (1974), Menn (1976). The subsequent analysis of development is based on a synthesis of Menn (1976) and Halliday (1975), to which page references are given, work on the acquisition of tone languages (Hyman and Schuh 1974, Li and Thompson 1977, Tse 1978), and a study by the author.
  13.
    For example, Weir (1972), Carlson and Anisfeld (1969: 118), Keenan (1974: 172, 178).
  14.
    For example, Howe (1976), Lenneberg (1967).
  15.
    See Bloom (1973), Clark, Hutcheson and Van Buren (1974: 49). A single-word polysyllable in principle allows for a contrast (e.g. DAddy vs. dadDY), but there is no evidence of such forms at this stage (Atkinson-King 1973).
  16.
    A compound tone-unit, such as this, is in fact singled out by some analysts (e.g. du Preez 1974) as an important transitional stage.
  17.
    See Chomsky and Halle (1968: 17 ff.) for a theoretical statement; Crystal (1969: 263 ff., 1975: 22 ff.) for a statistical one.
  18.
    Cf. Quirk et al. (1972), Menyuk (1969), Wode (1980).
  19.
    See Chomsky (1969), Maratsos (1973).
  20.
    For example, Greene (1964), Berry and Eisenson (1956), Travis (1957).
  21.
    For example, Eisenson (in Travis [1957: 443]) seems to include both phonetic and phonological phenomena in his discussion of the notion in aphasia. Cf. also Berry and Eisenson (1956: 403).
  22.
    See Sebeok, Hayes, and Bateson (1964), especially the papers by Ostwald, and Mahl and Schulze.
  23.
    Cf. Crystal (1969: 256 ff.). The average length of tone-unit in the adult conversational data described there was five words.
  24.
    See Van Lancker (1975: 120), Alajouanine (1956), Critchley (1970: 206).
  25.
    They provide the basis for the prosody profile in Crystal (1982).
  26.
    There is a certain amount of phonetic variation in the low falling tones also, not all of which would be transcribable with certainty without supplementary acoustic analysis. It is accepted that there may be some arbitrariness about the high/low distinction in such cases, and the risk of reading too much intonational structure into the data must always be borne in mind.
  27.
    The symbol used in the transcription refers to extra stress on a syllable.
  28.
    For example, Trubetzkoy (1939), Chao (1943), Martinet (1949).
  29.
    And also because their constituent parameters are little understood. See Laver (1980).
  31.
    There are dialects where this rule does not apply, e.g. in N.E. England, N. Ireland and N. Wales.
  32.
    The only case where it is useful to talk of the total non-segmental effect of the speech is in patients whose altered system is liable to be a source of confusion to others about their social, regional or psychological identity. The case reported by Monrad-Krohn (1947), where the Norwegian speaker could be taken to be German, is a clear example; but less dramatic instances of changes in the non-segmental features of regional accent are common enough.
  33.
    See the review in Laver (1970: 69 ff.), and the evidence from the tongue-slip data (tongue-slips rarely crossing tone-unit boundaries or affecting tonic placement: cf. Boomer and Laver [1968: 8–9], Fromkin [1973]). The neurological evidence concerning prosody is not entirely clear, due once again to a failure to keep clearly apart phonetic and phonological information in the various investigative procedures (e.g. dichotic listening and tachistoscopic tasks, split-brain and hemispherectomy studies). There is evidence that non-segmental effects are processed bilaterally and subcortically (cf. Van Lancker 1975). The right hemisphere is normally superior for several effects, especially when attitudinal function is involved (see, e.g., Blumstein and Cooper [1974]); the left hemisphere is normally superior for certain other effects, including some tone language features (for Thai, at least: see Van Lancker and Fromkin [1973]) and certain rhythmical features (see Zurif and Mendelsohn [1972], Robinson [1977]). Unfortunately, the studies deal with both speech and non-speech (e.g. laughter) sounds, in both naturalistic and artificial (e.g. filtered) contexts, and methodological differences abound. It is therefore difficult to draw firm conclusions; but there is a growing body of opinion that (in left-hemisphere speech dominance) the left hemisphere is superior for phonological contrasts in non-segmental effect, and the right hemisphere for phonetic contrasts. Cf. Van Lancker’s conclusion (1975: 166–167): “the types of pitch contour most likely to be processed in the dominant hemisphere would be those that carried grammatical distinctions rather than emotional ones”.
    If this is so, there is an important consequence for those approaches to the treatment of language-disordered adults and children which aim to use musical or quasi-musical input (e.g. Sparks and Holland 1976): they will need to be supplemented by a more sophisticated set of phonological procedures than has hitherto been the case, in order to handle the problem of the hierarchy of intonational form (cf. p. 62), and the integration of intonation with syntax.
  34.
    For stress, see e.g. Goodglass (1968), Blumstein and Goodglass (1972); duration, Marckworth (1976); pause, Lasky, Weidner and Johnson (1976). The importance of rhythm as a primary device to promote a patient’s sense of grouping, which must underlie tone-unit development, is a major theme of Van Uden (1970); see also Robinson (1977). For the importance of rhythm in language acquisition, see Allen and Hawkins (1978).
  35.
    See further Goodglass, Fodor, and Schulhoff (1967), Wheldall and Swann (1976), Bonvillian, Raeburn, and Horan (1979).

Copyright information

© Springer-Verlag Wien 1981

Authors and Affiliations

  • David Crystal
  1. Department of Linguistic Science, University of Reading, Great Britain
