#generative grammar

In the most recent versions of Chomskyan theory, Merge plays a central (if not the central) role. It is the only structure-building operation available in the language faculty. This differs from earlier versions, in which Move was considered a separate structure-building operation; Move has since been reconceived as a different type of Merge.

The Minimalist Program has reduced the architecture of the language faculty to the bare essentials (referred to as the ‘(virtually) conceptually necessary’ components). This means that there is a lexicon, a structure-building computational system and (at least) two ‘interfaces’ with other cognitive systems (one semantic, the other phonological, broadly speaking). Items are selected from the lexicon and copied into the Numeration if they are to be used to construct a sentence. The Numeration is like a holding bay.

Merge, the structure-building operation, takes two items and forms a set, i.e. X and Y merge to form {X,Y} (the theory also involves labelling the set, but I’ll ignore that bit). Now, when I said above ‘a different type of Merge’, I did not mean that the operation itself varies; rather, the difference between the types of Merge lies in where X and Y come from. There are three possibilities.

1) X and Y both come directly from the Numeration.

2) Either X or Y, but not both, comes directly from the Numeration.

3) Neither X nor Y comes directly from the Numeration.

Option (1) is the type of Merge that gets structure building started. Without (1) there would be no structure.

Option (2) is the type of Merge called External Merge (EM) because one of the merging items is from the Numeration, i.e. comes from somewhere external to the structure that has already been built. Option (2) allows the structure built by option (1) to be extended by merging further items to already existing structure.

Option (3) is the type of Merge called Internal Merge (IM) and this is the current conception of movement. When an item moves, it is going from one place in the structure to another so the items that are merging both come from somewhere internal to the structure that has already been built.

Note that this assumes there is only one monolithic Numeration. If we wanted to merge two existing structures, we would have to add to the options above or modify our assumptions about the nature of the Numeration.
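
To make the three options concrete, here is a minimal toy sketch in Python. It is my own illustration rather than any standard formalism; the names (merge, numeration) and the example lexical items are invented, and labelling is ignored, as above:

```python
# Toy illustration of Merge as bare set formation, covering the
# three options above. Names and example items are invented.

def merge(x, y):
    """Merge two syntactic objects into the set {X, Y}.
    frozenset is used so that the resulting sets can be nested."""
    return frozenset([x, y])

numeration = {"the", "cat", "slept"}  # the 'holding bay' of selected items

# Option (1): both X and Y come directly from the Numeration.
dp = merge("the", "cat")   # {the, cat}

# Option (2), External Merge: one item ("slept") comes from the
# Numeration, the other from the structure already built.
tp = merge("slept", dp)    # {slept, {the, cat}}

# Option (3), Internal Merge (movement): both items come from inside
# the already-built structure, e.g. remerging the subject.
clause = merge(dp, tp)     # {{the, cat}, {slept, {the, cat}}}
```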

max1461:

possessivesuffix:

Breaking linguistics news

Poverty of Stimulus has been disproven

Not even just by observing language learning, but by actually creating a program that has no innate knowledge of language, and then demonstrating that it can learn syntactic structures regardless:

https://www.pnas.org/content/119/5/e2021865119

Until recently, the computational requirements of language have been used to argue that learning is impossible without a highly constrained hypothesis space. Here, we describe a learning system that is maximally unconstrained, operating over the space of all computations, and is able to acquire many of the key structures present in natural language from positive evidence alone. We demonstrate this by providing the same learning model with data from 74 distinct formal languages (…) The model is able to successfully induce the latent system generating the observed strings from small amounts of evidence in almost all cases (…) These results show that relatively small amounts of positive evidence can support learning of rich classes of generative computations over structures.
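
To give a rough feel for the general idea (choosing the generator that best explains positive evidence, with a bias toward tighter hypotheses), here is a toy sketch. The three candidate languages, the priors and the size-principle scoring below are my own invented stand-ins, far cruder than the program-induction model the paper actually uses:

```python
import math
from itertools import product

# Three toy candidate generators (invented stand-ins, not the paper's
# hypothesis space), each enumerating its language up to length n.
def anbn(n):
    return {"a" * k + "b" * k for k in range(1, n // 2 + 1)}

def ab_repeated(n):
    return {"ab" * k for k in range(1, n // 2 + 1)}

def any_ab(n):
    return {"".join(p) for L in range(1, n + 1) for p in product("ab", repeat=L)}

hypotheses = {"a^n b^n": anbn, "(ab)^n": ab_repeated, "any a/b string": any_ab}
prior = {"a^n b^n": 0.3, "(ab)^n": 0.3, "any a/b string": 0.4}  # rough guesses

data = ["ab", "aabb", "aaabbb"]  # positive evidence only, no negative examples
max_len = max(len(s) for s in data)

def log_posterior(name):
    lang = hypotheses[name](max_len)
    if any(s not in lang for s in data):
        return float("-inf")  # this hypothesis cannot generate the data
    # 'Size principle': treat each observation as a uniform draw from the
    # language, so tighter languages that still fit the data score higher.
    return math.log(prior[name]) + len(data) * math.log(1 / len(lang))

for name in hypotheses:
    print(f"{name}: {log_posterior(name):.2f}")
# a^n b^n wins: it is the smallest language consistent with the evidence.
```

The actual model searches a vast space of programs rather than comparing a fixed handful of hypotheses, but the spirit of the scoring (a prior preference for simplicity plus a likelihood that rewards tight fits to positive data) is similar.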

The model was developed in response to known Chomsky Defenders (Norbert “Chomsky is never wrong” Hornstein was mentioned in the “release notes” on Twitter) who claim that earlier studies, in which learning amounts to choosing between different models, would only show that humans innately know a whole selection of models and have the ability to pick between them.

If, however, we have the ability to consider any possible model, then the ability to learn a language is just a corollary of the general human ability to learn stuff:

More broadly, there are many domains outside of language where learners must essentially acquire entirely new algorithms — some of them describable with similar machinery to language. It is ordinary for children to come to know new computational processes in learning tasks like driving, cooking, programming, or playing games. (…) Children simply must have the ability to learn over a rich class of computational processes, an observation that draws on well-developed theories in artificial intelligence about how search and induction can work over spaces of computations

— Hence, as per anthropological linguists, language must be considered a social technology, and not its own hard-wired “module”. The basic feature that allows humans (but not any random monkeys or birds etc.) to learn a language is not anything at all specific to language, but the “social instinct” that we should learn something from others and not just rely on our own intuition. Brings to my mind the important observation from studies on language learning attempts by parrots, corvids or apes, that they might very well learn to produce statements or requests; but they never learn to ask questions.

I haven’t looked at this article in much detail yet so I’m not going to weigh in on its specific contents at the moment, but I think a little bit of caution is warranted here: while this may serve as (an element of) an effective refutation of the Poverty of Stimulus argument, that is not enough to conclude that no innate language capabilities exist in the brain. Language is immensely useful, and if it has been around as long as we think it has (>100k years), that’s plenty of time and plenty of selective pressure for the brain to develop at least some adaptations specifically for accommodating language. There’s a definite possibility that at least some features of human language, perhaps even very fundamental ones, utilize specialized pieces of neural architecture, even if other parts represent social technologies primarily transmitted through culture.

In fact, we basically know this to be the case: vocabulary and grammar obviously represent culturally-transmitted aspects of language without which it could not function. The unique structure of the human vocal tract and the neural architecture which allows us to use it so readily for speech production are obviously innate, appear to be specifically adapted for speech, and represent an integral part of all non-signed languages, without which they could not function. So the question is one of degree: what’s innate and what isn’t? Refuting Poverty of Stimulus itself just does not answer this for us, one way or the other.

Yeah, this is not a general-purpose innateness counterargument; e.g. it seems quite clear that we have a vocal tract that has evolved to be used for speech. A certain very weak sense of innateness, i.e. “humans have the ability to learn a language” (while e.g. animals do not), is of course true, but it is also an absolutely banal truth, known to ~everyone and discussed since antiquity.

The reason this is important news is that

grammar obviously represent[s] culturally-transmitted aspects of language

is not generally accepted by the generative grammar movement! They have very explicitly held that (certain parts of) syntax is unlearnable and therefore innate (“universal grammar”). That the detailed proposals for this have looked very much like English in particular has been widely seen as a reason for viewing the theory with suspicion, but it has been fiercely defended for decades regardless. Chomsky himself moved on some decades ago (for reasons that many find unclear) to the new theory of Minimalism, which strips most previous claims about Universal Grammar down to just one innate syntactic operation, Merge. Followers of his 50s–80s work still remain, however, and continue to insist that Universal Grammar is rather complex and also English-like in structure.

Some of these defenses have been seen by many as sufficiently general counterarguments that they would render the theory basically unfalsifiable. Still seems to be falsifiable by this new method, though!

An auxiliary argument has been that universal grammar is also language-specific: that it would exist only for the processing of language, not of anything else, and would be disjoint from our general-purpose cognitive capabilities. Likewise falsified here.
