Monday, June 30, 2008

Looking out after someone

From the fertile linguistic field of sitcoms comes another expression that struck my ear as odd: "They're just looking out after you" (from an episode of Friends). A quick check of Google turns up the following results:

  • "look after": 15,500,000
  • "look out after": 282,000 (11 out of the first 30 results are in the sense of "look after", or 37%)
  • "looking after": 7,620,000
  • "looking out after": 157,000 (25 out of the first 30 results are in the sense of "looking after", or 83%)

So what does this mean? It definitely means the line I heard was not an isolated occurrence or malapropism. Clearly the writers or the actor intended that phrase. What puzzles me more than the phrase itself is the large discrepancy between the results using "look" and "looking". However, I'm guessing the discrepancy is not actually that high, but rather it's easier to find results of "...look-out, after" and "...look out, after" than it is to find the corresponding phrases with the progressive form. Still, to me "looking out after" sounds decidedly weird. Perhaps it's a blend of "look out for" and "look after". As far as a quick search of Shakespeare can tell us, the expression seems to be recent rather than long-established, since "look after" turns up 6 hits in Shakespeare, whereas "look out after" returns none.

Friday, June 27, 2008

Nasal weakening

Languages seem not to like nasals. They're usually forced to assimilate in place of articulation to a following stop, sometimes even a fricative, and often they're dropped entirely (which leads to nasalized vowels as separate phonemes, e.g., French or Brazilian Portuguese são). In some languages nasal consonants are absent from the inventory altogether (an areal feature of a group of five languages from the Chimakuan, Wakashan, and Salishan families of the northwest coast of North America). Even in languages where nasals carry a strong functional load, nasal consonants are often weakened in some way. In American English the flapping rule applies not only to /t/ (bʌɾəɹ for "butter"), but also to /n/ (ɛɾ͂i for "any"). In Portuguese (at least Brazilian Portuguese), the palatal nasal (represented orthographically by the digraph "nh") is often reduced to a nasalized vowel + palatal glide, so that a word like "minhas" comes out as mĩjas.

If we think of this in terms of Optimality Theory (which, honestly, is how I think about everything, even non-linguistic things like traffic patterns and evolutionary biology), we can talk about two constraints: Exp(ressivity) (a sort of catch-all constraint I've been using half-seriously, with a definition something like "language should be able to express a speaker's thought accurately") and *Obs(truction) (something like "sounds should obstruct the vocal tract as little as possible"). The desire for expressivity is clearly what maintains robust distinctions between consonants over thousands of years. Though some will dispute it, there is an undeniable tendency for languages to simplify, especially phonologically. This is how "want to" becomes "wanna," how "going to" becomes "gonna," and further how "I'm going to" becomes "I'ma." What holds back this march of simplicity is the Exp constraint. Simplify things too much, and people won't be able to express themselves properly, at least not without long strings of the same consonants/vowels.

So what languages seem to do is reduce nasals whenever possible. The reason this doesn't usually turn into nasal deletion or some other radical change is the necessity of using language as an expressive tool.

Thursday, June 26, 2008

Pre-glottalization in British English

As an American, one of the most striking phonetic features of British English for me is the pre-glottalization of voiceless stops, often with concurrent aspiration of a word-final stop, as in the word "caught": kɒʔtʰ. In some dialects pre-glottalization becomes outright replacement, as in the classic example of Cockney "bottle" as bɒʔəl. Since complete glottal closure is part of the articulation of a stop, whether or not a stop is pre-glottalized can be difficult to tell, especially in rapid speech. Teasing the glottal closure apart from the articulation of the stop itself, so that it becomes a separate phonetic segment, takes time and articulatory energy, and thus it is much reduced in rapid speech. Listening to a paragraph at the Speech Accent Archive (my discussion will be a lot clearer if you click on that first link and take a look at the paragraph yourself), I felt I had to disagree with the transcribers' decisions in transcribing "thick," "snack," and "meet," which to me have a hint of pre-glottalization.

Much has been written of pre-glottalization, most of it by Frederik Kortlandt, but there seems to have been little attention paid to what this does to syllable structure. There seems to be a tendency to interpret a glottal stop as syllable-final, and any stops after it get thrown into the onset of the following syllable. For instance, in the paragraph mentioned above, "meet her" comes out more like miʔ.tʰɜ than mi(ʔ)t.hɜ. We can see a clear difference with the word "snake," which has no pre-glottalization. In the sequence "snake and," we end up with snejk.ʔæn(d), rather than snej(ʔ).kæn(d). An argument I made in a presentation this year was that it may be possible for word-final stops to be syllabified separately, especially after a glottal stop (I took my data from Blackfoot, but English does this quite commonly as well). As my colleague Tim Henry explained to me, the reason stops have so much force at the end of words is that without aspiration or a following vowel, it is exceedingly difficult to perceive the stop's place of articulation.

Wednesday, June 25, 2008

talk balk

I ran across an example of, for lack of a better term, a typographic echo a while ago: "talk balk" for "talk back." Certainly this could merely be a typographic error. Under the influence of "talk," the typist all but duplicated the word by changing the "c" in "back" to an "l." However, I do not think this is the case. In fact, it was difficult for me even to type the phrase "talk balk" (I ended up with "baclk" at first), because it isn't a phrase that's ever used, as opposed to "talk back." More likely this is the kind of error someone would make in speech, and since (at least for me) phonology strongly influences typing errors in fairly fluent typists, it was translated to the screen without the speaker/typist noticing.

There is clearly a cross-linguistic tendency for sounds to want to assimilate to their neighbors. Almost every language has a nasal assimilation rule that prevents clusters like nk, instead turning these into ŋk clusters, even across morphological boundaries (ɪŋkəmplit for "incomplete") and sometimes even word boundaries (ɪŋ kamən for "in common"). These preferences can also skip over segments, as in vowel harmony or the famous tongue twister "She sells seashells by the seashore." I think that may be the most likely explanation for the error "talk balk."
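The word-internal case of nasal assimilation can be sketched as a simple rewrite rule. Here is a toy illustration; the function name, the symbol mapping, and the examples are my own invention, not part of any standard phonological toolkit:

```python
# Toy nasal place-assimilation rule: /n/ takes on the place of articulation
# of a directly following stop, e.g. ɪnkəmplit -> ɪŋkəmplit.
ASSIMILATED = {"p": "m", "b": "m", "t": "n", "d": "n", "k": "ŋ", "g": "ŋ"}

def assimilate(ipa):
    """Replace /n/ with the homorganic nasal of a directly following stop."""
    out = []
    for i, ch in enumerate(ipa):
        if ch == "n" and i + 1 < len(ipa) and ipa[i + 1] in ASSIMILATED:
            out.append(ASSIMILATED[ipa[i + 1]])
        else:
            out.append(ch)
    return "".join(out)

print(assimilate("ɪnkəmplit"))  # ɪŋkəmplit
print(assimilate("ɪn kamən"))   # unchanged: this sketch only looks word-internally
```

Extending the rule across word boundaries, as in "in common," would just mean letting the lookahead skip a space, which is roughly what rapid speech does.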

Tuesday, June 24, 2008

Verbing in American English

If there's one thing we love to do in English, it's verbing. In case my usage isn't wholly transparent, I mean creating verbs out of nouns simply by reclassifying them, as opposed to applying any derivational morphology. The title of this post refers to American English because that's my dialect, but I expect the phenomenon is widespread in English, and for all I know in many other languages. Recent examples that come to mind specifically have to do with web sites: to google something, to mapquest something, to youtube something. We love to simply take a noun (especially a proper noun) and use it as a verb with no special morphology. Case in point: just now one of the how-tos of the day on my iGoogle page was "How to network." In fact, it's a little difficult to think of network as not being a verb, but it derived from the noun. Thus the verb means something like "to make into or connect as a network." The OED records the first instance of network as a noun in 1530, with the computer usage coming about in 1962. The verb network, on the other hand, first appeared in 1845, with the computer usage coming about in 1982. In both cases the noun appeared first, followed later by the verb, even though in the case of the computer usage, the use of network as a verb was already well-attested.

The example that prompted this post was my use of JSTOR (an online repository of scholarly articles) a few weeks ago. I was searching for something which returned no results, and as a helpful tip JSTOR told me that I may have gotten no results because my search "may have been ANDed instead of ORed." Here we have even lowly conjunctions being used as verbs; JSTOR was telling me that the AND operator was probably used instead of the OR operator, resulting in far fewer hits.
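The difference between the two operators is easy to see with sets: an ANDed query requires every term to match, while an ORed query accepts any term. A quick sketch (the article titles and keyword sets below are invented for illustration, not real JSTOR data):

```python
# Toy boolean search: AND intersects requirements, OR unions them,
# which is why ANDed queries return far fewer hits.
articles = {
    "Syllable Structure in Salish": {"syllable", "salish"},
    "Nasal Weakening in Portuguese": {"nasal", "portuguese"},
    "Nasals and Syllables": {"nasal", "syllable"},
}
query = {"nasal", "syllable"}

anded = [title for title, kws in articles.items() if query <= kws]  # all terms required
ored = [title for title, kws in articles.items() if query & kws]    # any term suffices

print(len(anded), len(ored))  # 1 3
```

The ANDed search finds only the one article tagged with both terms; the ORed search finds all three.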

The ease with which we do this sort of thing raises the question of how different nouns and verbs really are at the underlying level in the lexicon. There are plenty of papers on the noun/verb debate in Salishan and Wakashan languages of the American Northwest, in which many words can be used as either noun or verb merely by applying noun morphology or verb morphology (NB: I do not mean adding derivational morphology to derive a noun or a verb, I mean simply adding tense/aspect/mood inflection or person/number inflection). Of course, in English we don't even deal with morphology. The difference is indicated entirely in the syntax and semantics.

Monday, June 23, 2008

putting the light verb phrase to work

I ran across a construction recently that struck me as decidedly odd, in reference to some new medication: "it can cause you side effects." Granted, this wasn't in an advertisement or official medical literature, but it clearly made sense to the person typing it. For me this is a decidedly marked construction, even though we use the double-object pattern all the time in other contexts, e.g., "I gave her the book." I'm not at all versed in modern syntax, but my understanding is that it is the so-called light verb phrase that is at work here. The idea is that IP immediately dominates this vP, the bar-level category of which immediately dominates a regular VP. The verb is then base-generated in the V node, but moves up to the v node, as shown in the tree below (generated using RSyntaxTree by Yoichiro Hasebe):

Of course, knowing all this still doesn't tell us why the construction is odd. I think the reason is probably because the verb "cause" simply doesn't usually have a valency this high. Often it takes a clausal complement ("I caused him to drop the ball"), and when it doesn't it usually has a single internal argument ("This drug can cause side effects"). When I try to rephrase "cause you side effects" I don't really get anywhere: cause side effects to/in/for you. Any of the prepositions still sound marked to me. So I think this is simply a case where the verb manages to have a higher valency for someone else than it does for me. If my idiolect allowed "cause" to have a third argument, I don't think I'd find the double-object construction odd. But since it doesn't, I do.

Sunday, June 22, 2008

the NoCoda constraint

As I mentioned last week, there is a cross-linguistic tendency to assign as many consonants as possible to an onset when dealing with a consonant cluster (review: syllables have three parts - the onset, which consists of the initial consonant or consonant cluster, the nucleus, which consists of the vowel or diphthong in the syllable, and the coda, which is the final consonant or consonant cluster; NB: this is a simplified description). The reason for this is two complementary cross-linguistic trends: syllables like onsets, and they don't like codas. These trends are formalized in Optimality Theory as the constraints Onset (syllables should have an onset) and NoCoda (syllables should be open).

If these were the only two relevant constraints, there wouldn't be any syllables with codas, because all of the consonants would get piled on the onset of the following syllable. However, other preferences override these constraints in certain situations. Take the word "constraint," for example. The word has the massive "nstr" cluster in the middle. If all we cared about was getting rid of codas, we'd shove all those consonants onto the second syllable, giving a syllabification of co.nstraint. Go ahead, try pronouncing that. It's not very fun. The reason is that there is another constraint at work here, called sonority sequencing. This refers to the general tendency of onsets to increase in sonority and for codas to decrease in sonority (I'll leave an explanation of sonority for another post, but generally speaking, in order from least to most sonorous, sounds are classified this way: stops < fricatives < resonants < vowels, with voiced sounds being more sonorous than voiceless sounds within each category). In OT this constraint is Son-Seq (sounds increase in sonority moving from syllable margin to syllable nucleus). In the "nstr" sequence, the n is higher in sonority than the s, which means putting them together in an onset violates sonority sequencing. Since in English Son-Seq is ranked higher than NoCoda, we would rather syllabify the word as con.straint than co.nstraint.
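Sonority sequencing is easy to operationalize. Below is a minimal sketch; the numeric scale values and the function name are my own illustration, and note that English s-clusters like the onset of "string" are well-known exceptions to the principle:

```python
# Toy Son-Seq checker: does sonority strictly rise through a candidate onset?
# Simplified scale, per the discussion above: stops < fricatives < resonants.
SONORITY = {"stop": 1, "fricative": 2, "resonant": 3}
CLASS = {"p": "stop", "t": "stop", "k": "stop", "b": "stop", "d": "stop", "g": "stop",
         "s": "fricative", "f": "fricative", "z": "fricative",
         "m": "resonant", "n": "resonant", "r": "resonant", "l": "resonant"}

def rising_sonority(onset):
    """True iff each segment is more sonorous than the one before it."""
    values = [SONORITY[CLASS[c]] for c in onset]
    return all(a < b for a, b in zip(values, values[1:]))

print(rising_sonority("tr"))    # True: stop < resonant, a fine onset
print(rising_sonority("nstr"))  # False: n (resonant) outranks s (fricative)
```

This is exactly why co.nstraint fails: the would-be onset "nstr" starts high (n) and immediately falls (s, t), so Son-Seq forces the n into the coda of the first syllable.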

The reason I'm talking about this at all is from having set our DVR to record the show Good Eats on the Food Network. The host of the show, Alton Brown (who is also the announcer/narrator for Iron Chef America), has a formidable NoCoda ranking, and almost always assigns many more consonants to onsets than most of us would ever want to. An example of this is his syllabification of the word "fifteen." I say fɪf.tʰin, with aspiration on the t because it begins a stressed syllable. However, Mr. Brown says fɪ.ftin, a syllabification distinguishable from my own pronunciation by the lack of aspiration on the t, which signifies that it is not syllable-initial (compare pʰɪt with spɪt). This unusual syllabification violates sonority sequencing (remember that stops are less sonorous than fricatives), but he prefers it because it results in only one NoCoda violation instead of two.

Another example of an extremely high-ranking NoCoda constraint is the syllabification of a word-final coda with the following word, even in careful speech. I wish I could remember the token Alton produced to make me think of this, but any random example will do. Usually in rapid speech, with a phrase like "farm aid," we say far.maid, because of the constraint *Complex (syllable margins should be simple). Our Align-Morph-R constraints are being violated here, since the first morpheme is being split between two syllables, but in rapid speech we generally prefer violating that to violating *Complex, which is an articulatory constraint as opposed to a semantic one. However, I noticed that even in careful speech, Alton Brown continually violates Align-Morph-R, because he would always rather satisfy phonetic and articulatory constraints than more abstract semantic ones.

Friday, June 20, 2008

eggcorns
The word "eggcorn" was coined by the good folks at Language Log (by Geoff Pullum, to be specific), and refers to the substitution of an analyzable word or phrase for one which is obscure or archaic. The eponymous example is "eggcorn" for "acorn." Since "acorn" is monomorphemic (and apparently obscure in someone's dialect), a woman insisted that the word was "eggcorn," presumably because acorns resemble small eggs. As much as Wikipedia insists that eggcorns and folk etymologies are completely separate phenomena, I think there are obvious similarities. Folk etymologies, technically speaking, are found throughout a speech community, as opposed to in a single idiolect, but the idea behind them is the same: making sense of an otherwise nonsensical word or phrase. An oft-cited example is "sparrow grass" for "asparagus" (though personally I've never heard anyone say it).

There is a whole rash of words that arose via the misparsing of articles and the words they attach to. A different process perhaps, but again, one that stems from a common source: the belief of speakers that they understand the origins of a word or phrase. Examples include "an apron" for historical "a napron," "an orange" for "a norange," "an adder" for "a nadder," etc. One common example that I think we all run into from time to time is "a whole nother." People almost never write this, opting instead for "a whole other," because they know that the word is "other," not "nother." The astute observer may notice that in all the above examples the /n/ sits right at the boundary between article and noun, and that ambiguity is what allows it to be reassigned. This is due to the tendency of syllables to have onsets. Phonetically, when presented with the sequence VCV, people will almost exclusively parse it as V.CV, even if morphologically it is [VC][V], so the hearer cannot tell which morpheme the consonant belongs to. This is formalized in OT as the constraint Onset (syllables should have onsets).

One final discussion: spelling pronunciations and pronunciation spellings. Spelling pronunciations occur quite often, and are simply pronunciations based on how a word is represented orthographically as opposed to what the historical or more common pronunciation is. A great example of this is the [r] in Burma. The name of the country was more accurately pronounced ba:ma, and in non-rhotic British English, the way to signify vowel length was to add an "r". For them, of course, it would yield something approximating the correct pronunciation, but for us rhotic speakers it creates a non-underlying [r] sound.

A great example of a pronunciation spelling I ran across recently is "pubity" for "puberty." Presumably the person lives in New England, in which case "puberty" and the hypothetical "pubity" would be pronounced the same: pʲubəɾi.

Wednesday, June 18, 2008

There's as an existential quantifier

Recently (or maybe this is an example of the recency illusion) I've noticed more and more people using "there's" with plural items. This is not to say I just heard it yesterday, but I remember not having ever heard it before, say, 5-10 years ago. Clearly it was around earlier, because I overheard it in an episode of Friends from their first season, which aired in 1994-95. A quick google search turns up a movie called "For Every Man, There's Two Women", which dates from 1984. And, in fact, a search of Shakespeare turns up 3 hits (e.g., "There's two or three of us have seen strange sights" Julius Caesar, I, iii). I was, however, unable to find an example in Chaucer, so it may be a mere 400-500 years old. Clearly this is an example of the recency illusion, the tendency of people to believe things they have just heard (or more often just noticed) are new to the world.

I believe what's going on here is not merely laziness. Prescriptivists love to jump all over linguistic innovations, pointing out how they are vague, lazy, or just downright immoral. In many instances, these critics couldn't be more wrong. Most often linguistic innovations arise because people have a desire to express themselves, and want a better and more succinct (and often LESS vague) way to say what they're thinking. A good example of this is "like," as in "John saw Steve hit Mary and was like 'What the hell?!'" Critics would probably have this utterance rephrased as "...and said, 'What the hell,'" or "...and thought, 'What the hell.'" The problem occurs when John neither said nor thought this. The use of the word "like" conveys an emotion via a descriptive phrase, and there is simply no other way to do this in the English language. I'm a very conservative "like" user, because to me it is marked and sometimes, when used in excess, the subject of contempt. However, I do use "like" in all situations like the above (of course this is in addition to the "normal" uses of like, as just demonstrated), because it's the best way of expressing myself.

So why would someone say "There's two pencils on the table"? I would wager not because they're stupid or lazy. I think the most likely possibility is that the contraction "there's" has ceased to be a true contraction, and instead has become a sort of existential quantifier that signifies "There exists some x," where x is a state. In the above example, the state is "two pencils are on the table." Many more people would say "There's two pencils on the table" than would say "Two pencils is on the table," so clearly there is a perceptual difference. People aren't looking at "there's" as a verb, but rather as a mathematical or logical operator. Now, this works fine in a predicate logic framework, but I wouldn't want to try to explain it in current syntactic theory. I'm sure someone could, though, so if you're so inclined, please share. Also, if someone can antedate "there's" with a plural argument to before Shakespeare, that would be interesting to see.
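The predicate-logic reading can be made concrete with a toy model. Everything here (the model, the helper names) is invented for illustration:

```python
# Treat "there's" as the existential quantifier ∃, evaluated against a tiny model.
# "There's two pencils on the table" ≈ ∃x ∃y [pencil(x) ∧ pencil(y) ∧ x ≠ y]
table_contents = {"pencil1", "pencil2", "lamp"}

def theres(predicate, domain):
    """Existential quantifier: True iff some x in the domain satisfies predicate."""
    return any(predicate(x) for x in domain)

def is_pencil(x):
    return x.startswith("pencil")

two_pencils = theres(
    lambda x: is_pencil(x) and theres(lambda y: is_pencil(y) and y != x, table_contents),
    table_contents,
)
print(two_pencils)  # True
```

On this view "there's" contributes only the quantifier, not a verb, which is consistent with number agreement with the plural noun never coming into play.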

Tuesday, June 17, 2008

more morphological reanalysis

To refresh, morphological reanalysis is the treatment of a given phrase as a single set lexical item, specifically for the purposes of stress assignment and prosody. The token that got me thinking about this again was Conan O'Brien talking about Krispy Kreme a couple weeks ago. He pronounces it ˈKrispy ˌKreme, with the accent on the first word, whereas I have the more conservative ˌKrispy ˈKreme, which is essentially what I would say if I were talking about cream that had somehow become crispy.

However, I'm usually pretty liberal when it comes to morphological reanalysis (i.e., I usually treat a set phrase as a single prosodic word). One example I run into often is my ˈgreen ˌbeans, versus my wife's more conservative ˌgreen ˈbeans, the same thing anyone would say when confronted by a random bean which was green in color.

Occasionally I get confused when reading lexical items when I get the stress wrong because of this process. For instance, I was shocked to see how high the deˈfault APR was on a credit card offer I received in the mail, until I realized that it was actually the ˈdefault APR (the APR you receive if you default on a payment). One other instance was a dictionary entry that caught my eye when I was looking up something the other day: safety orange. I assumed this was ˈsafety orange, some delicious variety of the fruit with which I was unfamiliar. It is, of course, ˌsafety ˈorange, the color of traffic cones and hunting vests.

So, which pronunciations do you guys use?

Monday, June 16, 2008

Optimality Theory

The question: why does Ben Barnes (who plays Prince Caspian in the new Chronicles of Narnia movie) fail to aspirate the /k/ in Prince Caspian?
The answer: He ranks MOP above Align-Morph-R.

Don't worry, all will be explained. This will be the first of several posts dealing with the most recent theory in phonology: Optimality Theory, or OT for short. The idea behind OT is a simple and universal one, based on constraints. We are all familiar with constraints from our everyday lives: would you rather spend that $20 on a movie or on dinner? OT applies exactly this kind of reasoning to linguistics, except instead of figuring out how to spend money it's trying to figure out (in phonological applications) what the surface realization of an underlying form will be.

OT states constraints as general requirements such as Onset (which states that syllables should have onsets) and *Comp (which states that syllable margins should be simple, e.g., "string" would violate this because of the complex "str" cluster at the beginning). The problem, of course, is that we can't get everything we want all the time. If you only have $20, you can't spend $20 on dinner and then $20 on a movie. You have to pick one. Here's an example from OT.

One constraint is the Maximal Onset Principle, or MOP (it's questionable whether this is a necessary constraint; most likely NoCoda and S(onority)S(equencing) render it irrelevant in all cases, but we won't deal with that now). This states that if there's a question whether to assign a segment to the coda of the preceding syllable or the onset of the following syllable, you should do the latter. In the word "instance," we could syllabify it as ins.tance or in.stance, and MOP says we should pick the latter. Another common constraint is Align-X, where X can be L for left or R for right. It states (in the most vague definition possible) that things should be aligned with other things. This can apply at any level of abstraction. We are most concerned with Align-Morph-R, which I will define as "The right edge of a morpheme should be at the right edge of a syllable."

In the example I gave at the beginning of the post, these are the two relevant constraints. So, why do I aspirate the /k/ in Caspian while Ben Barnes doesn't? The MOP constraint wants us to assign as many segments as possible to the onset of the second syllable of Prince Caspian: prin.skaspian (clearly I'm not using IPA; I'm not going to venture into Unicode yet; that's for later this week). However, the morphemes are [prins] and [kaspian], so this syllabification violates Align-Morph-R, because the right edge of the morpheme [prins] ends up in the onset of the following syllable. If we syllabify the phrase as prins.kaspian (I'm ignoring syllable boundaries in Caspian), we're violating MOP, because "sk" is a perfectly valid onset in English (school, escape, etc.). We can't fulfill both constraints, so we have to choose one. I choose prins.kaspian, because for me Align-Morph-R is ranked higher than MOP, i.e., I would rather have my morphemes lined up with syllable boundaries than assign as many segments as possible to onsets. Ben Barnes is the opposite; MOP is very important to him, so he chooses to violate Align-Morph-R: prin.skaspian.
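The ranking logic can be sketched as a tiny evaluator. This is my own minimal sketch of OT's candidate evaluation, not an implementation from any phonology toolkit; the violation counts come from the discussion above:

```python
# Minimal OT-style evaluation: walk down the ranked constraints, and at each
# one eliminate every candidate that incurs more violations than the best.
def evaluate(candidates, ranking):
    """candidates: {form: {constraint: violation_count}}; ranking: high-to-low list."""
    survivors = dict(candidates)
    for constraint in ranking:
        best = min(v.get(constraint, 0) for v in survivors.values())
        survivors = {f: v for f, v in survivors.items() if v.get(constraint, 0) == best}
        if len(survivors) == 1:
            break
    return next(iter(survivors))

candidates = {
    "prins.kaspian": {"MOP": 1, "Align-Morph-R": 0},
    "prin.skaspian": {"MOP": 0, "Align-Morph-R": 1},
}
print(evaluate(candidates, ["Align-Morph-R", "MOP"]))  # my grammar: prins.kaspian
print(evaluate(candidates, ["MOP", "Align-Morph-R"]))  # Ben Barnes: prin.skaspian
```

Reranking the same two constraints flips the winner, which is the whole point: the two grammars differ only in ranking, not in the constraints themselves.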