#functionalist


The Performance Grammar Correspondence Hypothesis (PGCH) was put forward by John Hawkins (2004) as an explanation for why grammatical patterns and the frequencies of those patterns cross-linguistically are the way they are.

In essence, it says that linguistic constructions which are easier to process are more likely to be grammaticalised. Conversely, those which are harder to process are less likely to be grammaticalised. Furthermore, processing ease is hypothesised to underlie our preferences for certain constructions over others (where there is competition between constructions) in usage. Linguistic performance thus shapes the grammar.

Hawkins suggests that there are three principles behind the hypothesis. Simplifying horrifically:

Minimise Domains: this basically means make the distance between elements which go together syntactically and semantically as small as possible, e.g. if an adjective goes with a particular noun, put them as close together as possible.

Minimise Forms: this basically means make those elements mentioned above as small and as meaningful as possible, e.g. consider spoken English “I’mma be there” where “I am going to be there” has very much had its form minimised.

Maximise Online Processing: this basically means arrange those elements in such a way that a listener will be able to process the structure of what you’re saying in the most efficient way possible. This involves making structures easier to recognise but also avoiding potential misinterpretations of structure, e.g. “I looked the number up” – consider where you place the “up” as the object gets longer. “I looked the number of my friend who just moved in next door up” vs. “I looked up the number of my friend who just moved in next door”. If the object is going to be very long, it is better to put “up” straight after the verb so that the verb (and its idiomatic meaning) can be recognised sooner. When the object isn’t so long, as in “I looked the number up,” efficiency isn’t greatly affected.
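The trade-off in the particle-placement example can be sketched with a toy word-count metric (my own illustration, not Hawkins's actual measure of processing domains): the longer the object, the more material separates the verb from its particle when the particle is postponed.

```python
# Toy sketch: compare the verb-to-particle distance ("looked ... up")
# under the two orderings as the object grows. Word counts stand in for
# processing domains; this is an illustration, not Hawkins's metric.

def particle_distance(verb, obj, particle, shifted):
    """Return the number of words separating verb from particle.

    shifted=True  -> V Obj Prt  ("looked the number up")
    shifted=False -> V Prt Obj  ("looked up the number")
    """
    if shifted:
        return len(obj.split())  # the whole object intervenes
    return 0                     # particle immediately follows the verb

short_obj = "the number"
long_obj = "the number of my friend who just moved in next door"

print(particle_distance("looked", short_obj, "up", shifted=True))   # 2
print(particle_distance("looked", long_obj, "up", shifted=True))    # 11
print(particle_distance("looked", long_obj, "up", shifted=False))   # 0
```

With the long object, postponing "up" makes the listener hold the verb open for eleven extra words before its idiomatic meaning can be recognised; fronting the particle reduces that distance to zero.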

Note that language users flout these principles all the time, e.g. for stylistic effect, and are not consciously aware of them.

Using these three principles, Hawkins’ theory makes some very strong and interesting predictions about the types of patterns found in the languages of the world, and about which patterns are more likely or unlikely to be found.

Reference

Hawkins, J. (2004). Efficiency and Complexity in Grammars. Oxford: Oxford University Press.

Scotttrembls raised an interesting point: “Do you know if there’s any evolutionary relationships between SVO, SOV and VSO languages? The evolutionary explanation never seems to come up – has this already been disproved or do we not understand enough about language evolution?”

There’s no evolutionary relationship in the sense that all SVO languages are genetically related and separate from all SOV languages, etc. SOV, SVO and VSO languages are distributed throughout the world and are found in many different language families. But we know that languages can change type over time, so in this sense there are evolutionary paths from one type to another. For example, Old English and Latin are considered to be canonically SOV languages, but their descendants (English and the modern Romance languages) are SVO languages. You might wonder when an SOV language stops being an SOV language and becomes an SVO language. Bear in mind that these types refer to canonical structures; languages may use other structures at the same time, but their use will be more restricted (although there are languages which many would characterise as having ‘free word order’, in which case they would not fall into any of these categories). For example, English is canonically SVO, but it uses other word orders for questions, focus structures etc. So the relative frequencies of particular structures within a language may change over time, resulting in what appears to be a single type-switch.
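The point that gradual frequency change can look like a single type-switch can be put in a toy sketch (the starting mix of word orders and the drift rate are invented purely for illustration): a language's "type" here is just its most frequent order, so steady drift eventually flips the label in one step.

```python
# Toy sketch: a language's canonical "type" is taken to be its most
# frequent word order. Frequencies drift gradually, but the label
# switches abruptly. All numbers are invented for illustration.

def canonical(freqs):
    """Return the most frequent word order in the frequency table."""
    return max(freqs, key=freqs.get)

freqs = {"SOV": 0.7, "SVO": 0.3}
for generation in range(6):
    print(generation, canonical(freqs), round(freqs["SOV"], 2))
    # each generation, 10 percentage points shift from SOV to SVO
    freqs["SOV"] -= 0.10
    freqs["SVO"] += 0.10
```

The underlying change is smooth, yet at some generation the reported type jumps from SOV to SVO, which is roughly what the historical record of a type-switch looks like.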

Work on implicational universals (universals of the form ‘if a language has structure X, then it will also have structure Y’) initiated by Joseph Greenberg and taken further by John Hawkins makes some interesting predictions for language change. Greenberg’s formulations were for the most part tendencies, i.e. if X then Y significantly more often than not, but Hawkins aimed to identify exceptionless universals, which often involved adding extra conditions, i.e. if X, then (if Y then Z). This places more constraints on the forms languages can take, but it also makes strong predictions about the evolutionary paths of language change. The reasoning is roughly: if these formulations hold for the present situation, and if there is no reason to assume things were any different in the past, then languages can only move through allowed ‘states’ as determined by the strong implicational universals.
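How an exceptionless universal rules out whole "states" can be sketched as follows (this formalisation and the feature labels are my own toy illustration, not Hawkins's actual universals): a language state is a set of structural features, and a complex universal "if X then (if Y then Z)" disallows any state containing X and Y but lacking Z.

```python
# Toy sketch: a language "state" is a set of structural features.
# A complex universal (X, Y, Z), read "if X then (if Y then Z)",
# disallows any state with X and Y but without Z. The feature
# labels below are hypothetical, chosen only for illustration.

def satisfies(state, universal):
    """True if the state does not violate the universal."""
    x, y, z = universal
    return not (x in state and y in state and z not in state)

universal = ("SOV", "postpositions", "genitive-before-noun")

states = [
    {"SOV", "postpositions", "genitive-before-noun"},  # allowed
    {"SOV", "postpositions"},                          # disallowed: X and Y, no Z
    {"SVO", "prepositions"},                           # allowed: X absent
]

for s in states:
    print(sorted(s), "allowed" if satisfies(s, universal) else "disallowed")
```

If change proceeds one feature at a time, a language can only ever move between allowed states, which is what licenses the predictions about possible and impossible evolutionary paths.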

We understand enough about the evolution of some language families to be able to test these predictions, and the predictions have been largely correct so far. However, many would not take this evolutionary picture to be an ‘explanation’; rather, it is seen as a ‘description’ of the facts which allows us to characterise possible evolutionary paths of change and distinguish them from impossible ones. Given that each stage of a language is a present-day language in its time, it is still ultimately up to the explanations offered by formal and functional approaches to account for the form a language takes at any particular point in its evolutionary history.

When people study language typology they study the ways in which languages vary. However, it’s more than just saying that different languages use different words or that some languages use very similar sounds. We study the ways in which the structural features of languages differ (or are similar), and many go further, asking what the limits of linguistic structural variation are.

English speakers will know that in a simple transitive clause we start with the subject, followed by the verb, followed by the object, e.g. ‘Bob (S = subject) likes (V = verb) pizza (O = object)’, i.e. English typically has SVO word order. But are there other ways of arranging such a structure? Logically there are six: SVO, SOV, VSO, VOS, OSV, OVS. The next question a typologist will ask is how languages are distributed across these possibilities. As a null hypothesis we might expect to find roughly equal numbers of languages in each group, but this is not what we find at all. SVO and SOV account for around 85% of all languages (with SOV being a bit more frequent than SVO). Adding VSO languages brings the total to around 95%. The question is: why is the distribution of languages so skewed?
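The skew can be made concrete with a short sketch. The six orders fall out of a simple enumeration; the shares below are illustrative round numbers consistent with the figures quoted above (SOV a bit ahead of SVO, the two together around 85%, adding VSO around 95%), not a citation of any particular survey.

```python
from itertools import permutations

# All six logically possible orderings of S, V and O.
orders = ["".join(p) for p in permutations("SVO")]
print(sorted(orders))  # ['OSV', 'OVS', 'SOV', 'SVO', 'VOS', 'VSO']

# Illustrative shares only, rounded to match the figures quoted above.
shares = {"SOV": 0.45, "SVO": 0.40, "VSO": 0.10,
          "VOS": 0.03, "OVS": 0.01, "OSV": 0.01}

uniform = 1 / len(orders)  # null hypothesis: every order equally common
print(f"uniform expectation: {uniform:.1%}")
print(f"SOV + SVO: {shares['SOV'] + shares['SVO']:.0%}")
print(f"SOV + SVO + VSO: {shares['SOV'] + shares['SVO'] + shares['VSO']:.0%}")
```

Against the null expectation of roughly 16.7% per order, two orders soaking up 85% of languages is exactly the kind of skew that demands an explanation.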

Three broad types of answers suggest themselves as candidates (at least to my mind):

1) It could be down to chance – the distribution of languages today may represent a highly skewed sample. If we came back in 1,000 years we might see a completely different distribution. This approach is obviously not taken by language typologists. There is certainly something interesting about the distribution which demands an explanation. To write the pattern off as due to chance would be to miss potentially significant insights into the ways languages are structured and shaped.

2) The formal aspects of human language (perhaps as encoded by Universal Grammar) constrain the surface forms that human languages can take, i.e. variation is not limitless, though it may appear vast.

3) The functional pressures that act on speakers and hearers every time they use language will affect which forms languages will prefer to take, i.e. structures that are easier to say and to comprehend will be preferred and so will come to dominate amongst the languages of the world.

Given the great success of generative linguistics in the past few decades, (2) is a very popular approach to take. However, many intuitively feel that the approach in (3) is ultimately more satisfactory as an explanation. Personally, I’m inclined to think that if we can explain surface variation in terms of performance preferences, this is a good thing, because it means there is less for the formal approach to account for. Furthermore, formal aspects of language are most often thought of as all-or-nothing affairs: if a grammar rules out a particular structure, that structure cannot exist, whereas if performance factors disfavour a particular structure, that structure will be either non-existent or rare.

But are (2) and (3) incompatible? You might think so, given the distinction that’s often made between competence and performance. Many would not consider performance factors as relating to language proper – they are extra-linguistic and not something the linguist should be looking at. But the fact is that all the (overt) language we use to construct theories of both competence and performance is being ‘performed’ in some way (spoken, written or signed). I think there may well be limits on variation set by the formal properties of human languages (which will account for some of the totally unattested structures), but others will be set by performance. And still others may have to do with physics and biology more generally (here I’m thinking mainly of phonological typological patterns).

For now then it may be useful to adopt either (2) or (3) as an approach to language typology with the aim of seeing how far they can go, but always with the ultimate aim of putting the two together in the end for a more comprehensive account of why languages are the way they are.
