#episode followup


Going back and doing some cleaning as we get ready to do new things, I realized I never posted our extended discussion on event semantics here! So if you want some good discussion of how we can best capture exactly what verbs are doing semantically, continue on here.

Back in our episode about event semantics, we argued that we’ve actually been thinking about verbs all wrong in terms of how we treated them semantically. Up until now, we’ve treated verbs as though they were kind of like naturally occurring versions of the predicate symbols you find in the artificial language of logic. So we could model the meaning of a verb like “travel” by taking advantage of this correspondence.

    (1)    “travel”    ≈    Txy

Those variable symbols “x” and “y” following the predicate capture the fact that a verb like “travel” tends to combine with both travelers and destinations, in order to produce complete sentences. Borrowing letters nearer to the beginning of the alphabet to stand in for specific people and places, replicating those completed sentences in logic is fairly straightforward.

    (2)    “Athan traveled to London”        ≈    Tal

After giving it some thought, though, we decided that a better way to think of the meaning of a verb like “travel,” along with other verbs, is not to treat it as a description of an individual, or even a relation between individuals and locations, but a property of events. Using this new logic, the meaning of “travel” would look more like “Te,” signifying that the act of traveling really refers to a set of coordinates in space and time — that is, an event. The verb “travel,” then, picks out any and all instances of traveling, in a way similar to how the noun “scientist” picks out, well, anyone who’s a scientist!

    (3)    “travel”    ≈    Te

In our updated system, the meaning of “Athan traveled to London” now looks like a different kind of logical statement — one that’s existentially quantified. That means it’s a statement saying there’s at least one event out there in the world that fits the bracketed description following the backwards “E.”

    (4)    “Athan traveled to London”        ≈    ∃e(Te ∧ Aea ∧ Del)

In particular, it says there exists an instance of traveling, and that the person doing the traveling — the agent — is Athan, and that the destination is London. As you can see, the job of introducing other information, either about who did it, or how it happened, falls to other constituents. (We left it open exactly how the events described by verbs become existentially quantified, but a good first guess is that it’s tense that does the job, alongside locating the event in time.)
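To make the existential quantification in (4) a bit more concrete, here’s a toy model in Python: events are plain records, each constituent contributes one condition on the event, and the sentence is true just in case at least one event satisfies all of them. The little “world” of events and all the names in it are invented purely for illustration.

```python
# Toy event semantics: events as records, constituents as predicates over
# events, and existential quantification as any(). Invented data throughout.

events = [
    {"kind": "traveling", "agent": "Athan", "destination": "London"},
    {"kind": "sleeping",  "agent": "Mara"},
]

# Each constituent contributes one condition on the event e.
T = lambda e: e["kind"] == "traveling"          # "travel"       ~ Te
A = lambda e, x: e.get("agent") == x            # agent of e     ~ Aex
D = lambda e, y: e.get("destination") == y      # destination    ~ Dey

# "Athan traveled to London"  ~  ∃e(Te ∧ Aea ∧ Del)
sentence = any(T(e) and A(e, "Athan") and D(e, "London") for e in events)
print(sentence)  # True: one event in our tiny world fits the description
```

The nice part is that adding an adverb or prepositional phrase just means conjoining one more condition inside `any(...)`, which is exactly the compositional payoff event semantics was designed to deliver.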

In making this move, we’ve ended up changing the typical verb’s argument structure, which is something we’ve touched on before. It means that we’ve fiddled with the number and kinds of things that any given verb is built to combine with. A verb like “hallucinate,” on a conceptual level, doesn’t combine directly with its subject anymore; instead, it’s meant to combine with an event, with the subject coming in obliquely. In the language of the lambda calculus, which is especially useful for keeping track of what combines with what, the meaning of “hallucinate” shifts over from (5a) to (5b) —  from a function that accepts individuals as its input, to one that accepts events.

    (5a)    λ x . x hallucinated

    (5b)    λ e . e is an event involving hallucination
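The shift from (5a) to (5b) is really just a change in what kind of input the function expects: individuals in, versus events in. A minimal sketch, with an invented individual and an invented event record:

```python
# The type shift behind (5a)/(5b): the old meaning of "hallucinate" maps
# individuals to truth values; the new one maps events to truth values.
# The individual and event below are made up for illustration.

hallucinators = {"Walter"}
walter_event = {"kind": "hallucination", "agent": "Walter"}

# (5a)  λx . x hallucinated              — input: an individual
hallucinate_old = lambda x: x in hallucinators

# (5b)  λe . e involves hallucination    — input: an event
hallucinate_new = lambda e: e["kind"] == "hallucination"

print(hallucinate_old("Walter"))   # True
print(hallucinate_new(walter_event))  # True
```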

But it seems fair to ask whether this applies across the board: are all verbs created equal? Well, consider the following two sentences.

    (6a)    There’s a man drinking whiskey

    (6b)    *There’s a man liking whiskey

While the first sounds perfectly natural, the second seems off. And intuitively, the most noticeable difference we might point to between these verbs is that the first describes something done over a shorter period, while the second applies to an individual over a longer stretch, maybe even a lifetime. A bit more technically, linguists have attributed the split behaviours of these words to the fact that “drink” is a stage-level predicate, while “like” is an individual-level predicate.

Digging even deeper, it’s been recognized that more punctuated events (e.g., arriving, noticing, exploding) form only a small part of a much larger category of eventualities, which isn’t just some undifferentiated mass, but also includes things like lengthier processes (e.g., speaking, walking, sleeping) and even longer-term states (e.g., knowing, owning, loving).

The underlying difference, then, between stage-level verbs like “drink” and individual-level verbs like “like” might simply be that they apply to different kinds of eventualities (i.e., processes vs. states). It’s even been suggested that individual-level words dispense with event-based logic entirely, and work more like the predicates found in classical logic. This would actually go a long way in explaining why expressions of time and space, which we supposed place restrictions on events, don’t go well with every verb.

    (7a)    Katarina often speaks German

    (7b)    *Katarina often knows German

    (8a)    Ted is being held captive in the facility

    (8b)    *Ted resembles his father in the facility

It could also explain why perception verbs like “see” and “hear” and “smell,” which seem to be looking to combine with something fairly ‘eventive’ in nature, get along with some verbs much better than others.

    (9a)    Cassandra saw James talk to himself

    (9b)    *Cassandra saw James love her

So the innovation of event semantics not only gives us a way of explaining exactly how adverbs and prepositional phrases fold into sentences, as we spelled out in the episode — it can also account for the fact that verbs regularly contrast with each other regarding the sorts of sentences we find them in to begin with. And having only really scratched the surface, you can be sure we’ll have a lot more to say about this topic in episodes to come!

The comparative sentences that we talked about in our episode on grammatical illusions, like in (1) below, are surprising because of how far away people’s first impressions tend to sit from reality.

     (1)    More people have been to Montreal than you have.

When you give it a moment’s thought, it becomes clear there’s a sizeable gap between how sensible it seems at first glance, and how little information it actually communicates.

Even the illusory sentence in (2a) below, which was rated in experiments as being nearly as acceptable as the perfectly ordinary sentence in (2b), still falls apart when you try to put its pieces together. Spelled out, its meaning ends up as something like “how many girls ate pizza is greater than how many we ate pizza,” which doesn’t quite work; pronouns, even plural ones like “we,” can’t easily be combined with counting expressions like “how many.”

     (2a)    More girls ate pizza than we did

     (2b)    More girls ate pizza than boys did

But we still manage to interpret these sentences, in a way that fits the machinery made available by our mental grammar. As we discussed in the episode, the fact that we’re also able to count how many times something happened, in addition to how many there are of something, gives us a kind of half-working backdoor into understanding them. But, there’s another kind of illusion that’s even more striking, where there really isn’t any way at all to make sense of it. Try reading the following sentence aloud.

     (3)    The patient the nurse the clinic had hired met Jack

It seems pretty run-of-the-mill, pretty boring … except when you try to work out who’s doing what to who! It’s clear enough that the patient met Jack, and that the clinic’s doing some hiring, but what’s that nurse doing in the sentence? What’s his or her relationship with the patient? Or Jack? The nurse is just kind of … floating there, not really doing anything at all!

To make what’s going wrong more obvious, have a look at the simplified structure below. Each clause, whether it’s the overall sentence or an embedded one, has to have one subject noun phrase and one predicate verb phrase. Three clauses means three of each, but plainly, one of the verbs is simply missing in action!

[image: tree structure for (3), showing three clauses but only two verb phrases]

This problem becomes unavoidable when we trim our tree and take out that lowest clause, giving us “The patient the nurse met Jack,” which really can’t mean much of anything. In fact, it’s a violation of a basic condition on the shape that sentences can take, known as the Theta Criterion, which essentially demands that verbs and their subjects have to match up one-to-one.

[image: pruned tree for “The patient the nurse met Jack”]

Sentences like the one in (3), then, end up forming a class of grammatical illusions that result from the so-called missing-VP effect. And as remarkable as they might seem already, things get even stranger when we consider their supposedly grammatical counterparts. Take the modified version below, with the missing VP put back in its place.

     (6)    The patient the nurse the clinic had hired admitted met Jack

While everything’s where it should be, the sentence has now become just about impossible to follow — or, at least, a lot harder to understand on a first pass than the simplified version in (7), where the lowest clause has once again been pruned from the tree.

     (7)    The patient the nurse admitted met Jack

This difficulty with understanding an otherwise perfectly grammatical sentence — at least, according to the rules we know about — is thanks to a phenomenon that’s been pondered over since at least the 1960s: centre embedding. While placing one clause right in the middle of another works fine once, as in (7), applying the same rule a second time over produces an incomprehensible mess, like in (6) above or (8) below.

     (8)    The dog that the cat that the man bought scratched ran away

Even though we can diagram these sentences out and force ourselves to follow the plot from one branch to the next with a whole lot of effort, they don’t really sit well when we hear them out loud. And this seems to suggest an upper limit on how many times our rules can apply. Except, we can find cases where this upper limit goes right out the window, like this 3-clause deep sentence!

     (9)    The reporter who everyone (that) I met trusts said the president won’t resign yet

With a quantifying expression and a pronoun in place of two more definite noun phrases, everything seems to be back in working order. And even more complex sentences than this can be found in writing, though they’re fairly rare.

So, what’s going on here? And how can we account for all this seemingly contradictory data? To start, it’s worth considering one of the most cited papers in all of psychology, The Magical Number Seven, Plus or Minus Two by George A. Miller. This work became famous not for putting an upper limit on how many times some rule or other could apply, but on how much information we’re able to hold in working memory. Like the title says, there seems to be a fairly low ceiling on how many ‘bits’ or ‘chunks’ we can actively hold in our heads at any given time. And this applies to processing language as much as anything else.

In fact, in a pair of papers co-written with linguist Noam Chomsky the following decade, Miller hypothesized that our trouble with centre embedding has more to do with limitations on our memory than on our grammar — which could account for why fiddling with the details (e.g. swapping certain kinds of nouns for others) can sometimes get around the problem.

But then, what exactly is going wrong in the sentence in (6), and more importantly, why should something meaningless like (3) get a free pass? Well, research into how we handle these sentences is very much active, but at least one recent theory takes some steps towards shedding a little light on the contrast.

As we encounter each new noun phrase in a sentence like (3), an expectation is set up that we’ll reach the end of that clause; in other words, we anticipate that we’ll encounter a matching verb phrase for each one. But something starts going wrong when we get to the lowest, most deeply embedded part of the sentence (i.e., “the clinic”).

When we hear that first verb phrase “had hired,” it slots into the lowest open position pretty easily, because that lowest and most recently encountered clause is the current focus of our attention. But when we encounter that second verb phrase, “met Jack,” we’re left at a disadvantage: we’ve got two more open positions to fill, but each one completes a clause that’s been interrupted by another one, having had the focus of attention wrenched away from it. What’s worse, all the clauses are syntactically identical, with nothing to differentiate between them. And, so, the little working memory we have is overloaded. We default to connecting that second verb phrase to the first, highest clause, and mistakenly assume we’ve finished building the sentence.

This Interference Account supposes that the two remaining incomplete clauses that were interrupted by an intervening relative clause compete and interfere with each other, overwhelming our limited memory and forcing us into making the wrong choice. It also explains why we have so much difficulty with centre embedding more generally: since we default to connecting that second verb phrase up to the highest clause, believing we’ve completed the sentence as a whole, encountering a third verb phrase in a sentence like (6) or (8) completely violates our expectations, and throws us for a loop.

So, the existence of acceptable nonsense like (3) and of well-formed but incomprehensible sentences like (6) doesn’t mean that our grammar is broken beyond repair; it just means that the rules that make up language are owned and operated by less-than-perfect users!

In our episode on the linguistics of propaganda, we talked a lot about how false implicatures can bend the truth just enough to sneak misconceptions into people’s heads, without them even necessarily realizing it. These are sentences where we imply something that isn’t true, without coming out and saying it overtly. But while we’ve touched on the topic of indirect speech before, we haven’t spent much time talking about why we do it. That is, why don’t we always just say what we mean, instead of risking a garbled message?

To get at an answer, let’s consider a few different uses we’ve got for indirect speech, and then see if we can figure out what they’ve got in common. Imagine, first, that you were out on a date, and as the evening winds down, you want things to move in a more romantic direction. Would you come right out and say it? Well, some of us might. But chances are that many would take a much more gentle approach — say, by asking if the other party wanted to come over to their place for coffee, or maybe to Netflix and chill.

Or let’s say you were driving a bit too fast, got pulled over, and were pretty sure you were about to get a ticket for a few hundred dollars that you really can’t afford. But let’s say you happened to have $50 on hand, and you’re feeling just brave enough to give a bit of bribery a go (NB: The Ling Space does not condone bribery). Would you move right to “I’ll give you money if you let me go”? Probably not, if you have any intention of staying out of jail. You’d likely try to be at least a little sly about it — maybe wondering aloud if the problem can’t be “taken care of here.”

Or picture the old cliché of a mobster extorting protection money from some local business, under penalty of violence. Since explicit threats are often illegal, but the enforcer still needs to get their message across, euphemistic speech ends up a vital part of their criminal enterprise. Phrases like “It’d be a shame if something happened to this fine establishment” replace outright intimidation, though the message remains the same.

In each of these cases, the speaker is affording themselves plausible deniability. Trying to move a new relationship (or even an old one) in a different direction can be potentially awkward, especially if the other party isn’t as interested as you. But if you play your cards close enough to the chest, and things go awry, you can always deny you were talking about anything more than coffee, or a night spent binge-watching the latest season of House of Cards.

And since bribing an officer is against the law, but might get you out of paying a hefty fine if they happen not to be the most honest cop in the land, the indirect approach lets you test the waters without committing yourself one way or the other. If they catch your drift, everybody leaves happy; if they don’t, well, you can hardly be found guilty for someone else misunderstanding your otherwise unimpeachable character! (More generally, shifting from one relationship type to another, like from one rooted in dominance to one that’s more transactional, can lead to tension, which is why bribing the maitre d’ for a better table can seem just as nerve-wracking, even if it’s not a crime.)

As for that threat: it might be hard getting something so weaselly to stick in court. On the face of it, after all, it really would be a shame if something happened! And they can always claim they were just expressing genuine concern, as laughable as that might seem.

And, so, indirect speech — and by extension plausible deniability — has many uses, both amongst those in positions of power, and those with none. Though paradoxical on the face of it, it can provide avenues for authoritarians to obtain and maintain control, while protecting the powerless when all other exits are blocked.

It’s fair to ask, though, why such thinly veiled bribes and threats should work at all. Doesn’t everybody know what ‘Netflix and chill’ means by now? And is the mob really fooling anyone with their supposed concern for the well-being of the community? The secret lies in a concept we’ve spent some time picking apart already: mutual knowledge, otherwise known as common ground.

Mutual knowledge refers to the knowledge that exists between two or more speakers — not simply what both of them know, but what each of them knows the other one knows (and what each of them knows the other one knows the other knows, and so on). So while the intent of asking a partner over for coffee might seem obvious to both parties involved, because the invitation was indirect, there’s enough mutual doubt should either one decide to back out. If the answer is “no thanks,” embarrassment is saved, and everyone can go along pretending nothing ever happened. The possibility that either speaker doesn’t understand what just took place is small, but when we start asking whether each of them knows whether the other knows what happened, or knows that they know that they know, uncertainties multiply unbounded.

What indirect speech really does, then, is keep things off the record. While the information implicated by someone might be clear as day to anyone within earshot, that information manages not to work its way into the common ground. And, so, unlike bare assertions, which fall squarely into the vessel of mutual knowledge we carry between us in any given conversation, implicatures float around just out of our reach — visible to everyone, but ephemeral enough for us to pretend they don’t even exist, if and when we need to.

practicingtheliberalarts:

thelingspace:

In our most recent episode, we talked about modals: words like “can”, “may”, “must”, and more. In particular, we took a deep dive into the semantics of modal verbs. But we didn’t talk much about how they fit into the structures of sentences, and this seems to leave open some important questions. For starters, we made the claim that — in terms of their meanings — modal verbs combine with whole sentences, and not just the verb phrase that follows them. After all, the meaning of the sentence in (1a) seems to correspond to (1b).

     (1a)    The Observers must report to their commander.    

     (1b)    It must be that the Observers report to their commander.

On the face of it, it seems weird that subjects in modal sentences appear separate from the main verb phrase, as in (1a), while being interpreted as though they were right next to them, as in (1b). It looks like this could be a big problem for our overall theory.

Thankfully, when we take into account some of the important discoveries we’ve talked about in past episodes, like the VP-Internal Subject Hypothesis, this problem goes away pretty quick. If it’s true that, in general, subjects start off somewhere inside the verb phrase, and only later move to a spot that’s higher up (and more to the left, at least in English), we can suppose that the meanings of sentences — modal and all — are simply computed before the subject starts moving around, instead of after.

But, this still leaves us wondering what part of the tree modal verbs typically call home. If you want to know more about how to set that up, keep reading!


The liminal zone between syntax and semantics (especially when we’re drawing mid-twentieth century trees) makes me completely bonkers. I don’t see the point of doing this kind of structural, ideologically-coded work when what’s at issue are conceptual relationships.

I mean, I guess I understand that it would be nice to represent relationships of meaning systematically and in terms of order, but I feel like the various models still don’t account for (and maybe shouldn’t) all the nuance of which human language is capable.

I’m always so much more comfortable using syntax to approach grammars and glosses rather than as a generative and constitutive system of language.

Please correct me if I’m wrong. I would love to hear some arguments for the coordination of semantics and syntax in this way. Or an explanation of how theoretical syntax improves our (English-speaking) understanding of semantics. OK, go!

So this is actually a really deep and interesting question that gets to the heart of linguistic theorizing, and even science in general. We’re going to try to do this without going through all the examples from the episode and the original post, but I hope it still makes sense!

Let’s start with what we said in our post about the apparent tension between modal verbs combining with whole sentences on a conceptual level, while being sandwiched in between subjects and verb phrases on a syntactic one. We could, in principle, rework the meaning we’ve assigned to modal verbs to make them work without the added complication of subjects moving around inside sentences. That is, we could paint a superficially simpler picture of how the meanings of these sentences come about. But removing the complexity from one part of a theory often ends up hiding it in some other part of that same theory. And so, even though we could tweak our definition of modal verbs to work in a syntactically simpler system (i.e., one that very literally has fewer moving parts), that new definition is made more complex. To get a bit technical, it goes from a simple characteristic function to a somewhat fancier function-valued function, which is a kind of higher-order function (representing a kind of logic that’s a step above the ordinary).

It turns out we can’t have both a simple syntactic representation, and a simple semantic one, because the order that one thing happens in influences the order that another thing happens in, and ignoring this would be like trying to put your pants on after putting on your shoes. So, the theory as a whole becomes simpler in one way, but more complicated in another. It’s this kind of unavoidable choice that has to be made, where to put the complexity, and we made it the way we made it because we already have other reasons to think it’s actually the syntax that’s complicated in this case, and not the semantics.

On supposing that there are actually two different spots in any given syntactic tree where modal verbs might live, depending on their flavour: this is actually practically useful. We guessed that knowledge-based modal verbs are relatively high up in the structure of any given sentence, while rule-based modal verbs are much lower down. To put it another way, we supposed that some modal verbs are structurally closer to the subject, with lots of room down below, while others are structurally closer to the verb phrase, making things more cramped. And this actually ends up having observable consequences, since in the case of higher, knowledge-flavoured modal verbs, there’s room enough between them and the verb phrase for all kinds of auxiliary verbs, like in “Peter must have been telling the truth.” But with lower modal verbs, that are about rules instead of knowledge, and which don’t leave as much space between themselves and the verb phrase, we don’t end up seeing all these intervening auxiliary verbs, as in “Peter must tell the truth.” So it looks like there’s some reality to there being a couple of different places for modal verbs to hang out, which has a real effect on the kinds of sentences we can build. And it’s worth mentioning at this point that we do find sentences with two modal expressions in them, and that when we do, the knowledge-y modal always seems to come before the more rule-ish one, like in “Peter must have to tell the truth.” We might be able to explain all these facts using simpler structures, but chances are that the meanings that come out of it get pretty sophisticated pretty quickly.

Finally, let’s look at what we said about the internal syntax of modal phrases. On one level, it’s just useful to represent things this way, to be able to visualize all the pieces and parts that go into building the meanings of these sorts of sentences. But in principle, we could simplify both the definitions of modal verbs and their surrounding structures, without any obvious trade-offs. And some linguists actually go ahead and do this. However, there are reasons to think that (for example) the modal base is on some level really present inside the trees we build in our heads. For one, sometimes the modal base is actually pronounced, as the topic of the sentence: “In view of the rules, Peter must tell the truth.” But even when it’s left out, because the context makes it obvious what’s being talked about, there are still reasons to suppose that, at a minimum, there’s something there in the tree that corresponds to it. In particular, when we get to talking about conditional (if-then) sentences like “If he wants to stay out of trouble, Peter must tell the truth,” it’s useful to think of that “if” part of the sentence as restricting, or limiting, the modal base. And that’s much easier to do if the modal base is, although unpronounced, somehow actually there in the tree. It could probably be done other ways, but that likely means supposing there are a couple different kinds of modal verbs – some that fit comfortably into conditional sentences, and some that don’t – and I’m not sure there’s any good reason to think that we have twice as many modal verbs in our vocabulary as we think we do!

And we definitely take the point that, as we’ve presented things, we don’t even come close to fully capturing all the variety and nuance found in human language. Unfortunately, there just isn’t enough space to cover everything in one go. But we’ll for sure be back to talk more about modal verbs in the future, and what we find when we look at other languages. It’s a gigantic and fascinating topic, with many different dimensions to it! And with the basics out of the way, we can now start to get to the good stuff! ^_^


In our old system, we might’ve been happy dumping all our modal verbs into the bucket labelled “inflection.” And this seems reasonable at first, since like other kinds of inflection (e.g., tense, aspect, voice), modality sometimes appears as an affix on the verb.

    (2)    Turkish:    gel-me-meli-siniz

                              come-ɴᴇɢ-ᴏʙʟɪɢ-2ᴘʟ

                              ‘You ought not to come.’

But this probably isn’t the most sophisticated picture of how sentences get put together, given that concepts like tense and modality are pretty different from each other. Moreover, if they really were members of the exact same category, you wouldn’t expect to find them showing up in the same place at the same time, like how a coin can’t land both heads and tails; and yet most modal verbs have both present tense (may, can, shall, will) and past tense (might, could, should, would) forms.

So, it probably makes more sense to think of modal verbs as appearing either somewhere just below, or somewhere just above, a dedicated tense phrase (TP).

It might even be both, given that modal verbs seem to interact with tense in subtly different ways, depending on their flavour:

    (3a)    Olivia could have used her powers, but she didn’t want to.

    (3b)    Olivia could have used her powers, but I haven’t found out yet.

After all, the sentence in (3a), which is saying something about Olivia’s abilities, seems to be about what was possible in the past (i.e., circumstances might be different now), whereas the sentence in (3b) says something about the speaker’s present state of knowledge. In other words, it looks as if the same modal verb either falls under the influence of the past tense, or else manages to escape it, depending on how it’s interpreted. This suggests there might be modal phrases both below and above the tense phrase.

Lastly, it’s important to say something about how modal phrases actually get their meanings, and what that says about their internal structure. In the episode, we gave the modal verb “must” a meaning that looked like this:

    (4)    “must” = λ B . λ p . B ⊆ p

Reading it from left to right, it says that a modal verb first combines with some contextually defined modal base B — whose job it is to give the flavour, like whether it’s about belief or ability — and then goes on to combine with the sentence p, saying of the two that the set of worlds described by the base is a subset of the set of worlds described by the sentence. The modal base, then, is kind of like a pronoun; it’s like “they” or “them,” since it picks up its meaning from the context of the conversation. Whether “must” says something about someone’s knowledge, or about some set of rules to be followed, depends on the content of B. And if all this is right, it means that the structures of modal phrases — at a minimum — look something like this:
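Before getting to the tree, the subset semantics in (4) is easy to sketch directly, treating possible worlds as plain labels and both the modal base and the sentence meaning as sets of worlds. The worlds and sets below are invented for illustration.

```python
# A minimal sketch of (4): "must" = λB . λp . B ⊆ p, with propositions
# and modal bases as sets of worlds. All world names are made up.

# Curried, just like the lambda term: first the modal base B, then p.
must = lambda B: lambda p: B <= p   # Python's <= on sets is ⊆

# Epistemic flavour: the worlds compatible with what the speaker knows.
B_knowledge = {"w1", "w2"}

# p = the set of worlds where the Observers report to their commander.
p = {"w1", "w2", "w3"}

print(must(B_knowledge)(p))   # True: p holds in every B-world
print(must({"w1", "w4"})(p))  # False: w4 is a B-world where p fails
```

Swapping in a different B — say, the worlds compatible with some set of rules — changes the flavour from epistemic to deontic without touching the definition of “must” at all, which is the point of leaving B to context.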

In reality, a bit more needs to be said about this, since a modal base by itself (as we’ve been thinking of it) can’t completely determine the meaning of the sentence. That’s because all the context can really provide is a general description of what’s being talked about — like whether what’s being discussed includes beliefs, rules, goals, abilities, et cetera. But, it can’t take the extra step of supplying those beliefs or rules or goals. To get a sense of why this makes a difference, take a look at the following sentence.

    (5)    I must report to the Colonel.

Imagine that this sentence is said by someone who mistakenly believes that the Colonel asked to see them. In this case, unbeknownst to the speaker, the sentence is false. And the reason it’s false is because of what the actual requirements are, and not simply what the speaker might suppose they are. So, whether or not this kind of sentence ends up true or false depends on the way the world actually is, and not the way the speaker thinks it is. In other words, speakers can be uninformed about the content of the modal base, in a way that can’t be handled by context alone; that is, they can be mistaken about what the relevant rules or beliefs really consist of. And all of this means that the modal base must really be more like a function that’s meant to take some world as its input (the world in which the sentence is spoken), and then produce a set of worlds out of that, for the modal verb to work with — one that accurately captures whatever’s being talked about.

So, a more complete picture of what the internal structure of a modal phrase looks like is this:
image

What we see here is a function (sometimes called an accessibility relation) combining with a special kind of world pronoun. This relation R is what’s actually provided by the context of the conversation; it’s what determines the overall flavour. That w* symbol is what does the job of filling in the content of whatever’s being talked about, by providing the full set of rules or abilities under discussion. And it’s together that they determine the modal base — the set of worlds that modals like “must” or “may” go on to compare with the set of worlds described by the rest of the sentence. This way, we fully account for the fact that the truth of modal sentences depends not just on the context of the conversation, but on the way the world is.
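Continuing the toy Python model from above (all names hypothetical), the refinement makes the modal base the output of an accessibility relation R applied to the evaluation world w*, so the truth of (5) tracks the actual rules rather than the rules the speaker imagines:

```python
# The context supplies the accessibility relation R (the flavour);
# the evaluation world w_star supplies the actual facts.

def must(R, w_star, p):
    """'must' is true at w* just in case R(w*) ⊆ p."""
    return R(w_star).issubset(p)

# Deontic flavour: R maps a world to the worlds obeying that world's rules.
rule_worlds = {
    "w_actual": {"w1", "w2"},    # the actual rules allow only w1 and w2
    "w_mistaken": {"w2", "w3"},  # the rules the speaker imagines allow w2, w3
}
R = lambda w: rule_worlds[w]

report_to_colonel = {"w2", "w3"}  # worlds where the speaker reports

# (5) is evaluated against the actual world's rules, not imagined ones:
print(must(R, "w_actual", report_to_colonel))    # False
print(must(R, "w_mistaken", report_to_colonel))  # True
```

Evaluated at the actual world, “I must report to the Colonel” comes out false, exactly as the scenario around (5) requires, even though it would be true in the world the speaker mistakenly takes themselves to be in.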

As a final point, it’s worth mentioning that no matter which kind of structure we choose to represent modal sentences, none of them ever quite match up with how the sentence is actually pronounced. That is, the Logical Form of the sentence (how it’s interpreted), at least in these cases, is reliably different from its Phonetic Form (how it’s said). In fact, as it’s turned out, the idea that there are mismatches between how sentences are spoken and what they mean has played a big role in modern syntactic theory — one we’ll continue to talk about in the future!

In our episode on relative clauses, we focused on restrictive relative clauses, which we argued behave a lot like adjectives — specifically, intersective adjectives like “broken” and “curly.” In other words, if we think of both relative clauses and the nouns they combine with as sets, then the meanings we get out of connecting them together are whatever those sets share in common, or their intersection.

    (1)    “the broken step” ≈ “the step that’s broken”

And it makes sense to suppose that these kinds of relative clauses typically sit inside noun phrases, somewhere that’s neither too far up nor too far down the tree. After all, we can freely separate a relative clause from its host noun with a prepositional phrase, as in “the step in the middle of the staircase that’s broken,” suggesting the relative clause isn’t all that close to it to begin with. At the same time, the definite article seems to have a significant influence over everything that follows it, since out of all the steps and all the things that are broken, “the” picks out the one thing that’s both; this suggests that the relative clause is somewhere below the determiner.

image
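The intersective treatment in (1) can be sketched the same way, with nouns and relative clauses denoting sets of individuals; the individuals here are made up for illustration:

```python
# Nouns and restrictive relative clauses both denote sets of individuals;
# combining them is just set intersection.
steps = {"step1", "step2", "step3"}
broken_things = {"step2", "vase"}     # everything that's broken

broken_step = steps & broken_things   # "step that's broken"
print(broken_step)                    # {'step2'}

def the(s):
    """Definite article: picks out the unique member of the set,
    and is only defined when there is exactly one."""
    assert len(s) == 1, "presupposition failure"
    return next(iter(s))

print(the(broken_step))               # step2
```

This also mirrors the point about “the” sitting above the relative clause: the definite article applies to the intersection, so it can succeed even when there are several steps and several broken things, as long as only one thing is both.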

But this isn’t the only kind of relative clause we find. Consider the following sentence:

    (2)    Luke’s dad, who used to be a cheerleader, is also quite the prankster.

It doesn’t look like the relative clause “who used to be a cheerleader” is really restricting the meaning of “Luke’s dad.” In fact, if anything, it seems to be adding to it! In this case, we have what’s called a non-restrictive relative clause, which is often fenced off from the rest of the sentence with a slight pause (represented with commas).* And while restrictive relative clauses are generally thought of as combining meaningfully with the words around them, many linguists view non-restrictive relative clauses as being semantically separate from the rest of the sentence, as a kind of afterthought. The meaning of the sentence above, then, could be paraphrased like this:

    (3)    Luke’s dad is quite the prankster. Note that he used to be a cheerleader.

And non-restrictive relative clauses seem to be free of the influence of determiners. A sentence like “The President of France, who was recently elected to office, is relatively young” doesn’t seem to imply that there are a whole bunch of French Presidents, and that only one of them was recently elected; instead, the relative clause applies to everything that came before it — the whole of “the President of France,” including the definite article. So, we get a different picture of what these sorts of structures probably look like, with the non-restrictive relative clause attaching to some node sitting above the determiner phrase.

image

Finally, we see another brand of embedded clause that, on the surface, looks an awful lot like what we’ve seen already. Consider the following noun phrase:

    (4)    Phil’s worry that Luke won’t get into college

Notice, though, that unlike the relative clauses we’ve been seeing up until now, this one doesn’t have any gap in it; that is, it can stand on its own as a complete sentence.

    (5)    Luke won’t get into college.

These kinds of embedded clauses, which provide content to the nouns they modify, are known as complement clauses, because they look so much like the complements of verbs like “worry” and “claim” — found in full sentences like the one below.

    (6)    Phil’s worried that Luke won’t get into college.

And given their name, it seems reasonable to suppose they occupy a spot inside noun phrases that’s similar to the complements of these kinds of verbs, cozying up right next to the head of the phrase.

image

However, it turns out that this can’t quite be true. Not only are complement clauses able to follow restrictive relative clauses, it actually sounds kind of weird when they don’t! (And this is a pattern that holds across languages.)

    (7a)    I don’t believe the rumour that I heard this morning that he’ll move.

    (7b)    ??I don’t believe the rumour that he’ll move that I heard this morning.

Complement clauses even seem to work well after non-restrictive relative clauses.

    (8)    There’s a rumour, which I heard this morning, that he’ll be moving.

So complement clauses end up looking more like some species of non-restrictive relative clause than anything else (including real complements), meaning they probably connect up to a part of the tree that’s higher up than your average relative clause.

It really does take all kinds!

*Some insist on another difference between restrictive and non-restrictive relative clauses: that while either one may use relative pronouns like “who” or “where,” only restrictive relative clauses may use “that,” while only non-restrictive relative clauses may use “which.” But there are plenty of counter-examples to this supposed rule, as in Nietzsche’s famous quote “That which does not kill us makes us stronger.” And though non-restrictive relative clauses using “that” are very much an endangered species, they can still be spotted in the wild from time to time.

The subject of an English sentence sure looks like it starts off right at the top of the sentence. “Dan should play the lead” definitely seems to have Dan first. And yet, we’ve argued in past episodes that the participants in the activity that the verb represents start off inside the verb phrase. This helps explain why some languages seem to have both their subjects and objects showing up so low inside any given sentence. For example, in Welsh, the subject shows up between the verb and the object, suggesting it could be somewhere in the VP.

    (1)    Gwelodd        Siôn  ddraig

            saw-3SGPST John dragon

            ‘John saw a dragon’

In English, the subject doesn’t stay down there; instead, it moves up into a higher part of the structure. Nevertheless, we can see that it sometimes leaves a bit of itself behind.

    (2a)    All the patients will [VP get better]

    (2b)    The patients will [VP all get better]

Now we’ve claimed in our most recent episode that, for both syntactic and semantic reasons, we shouldn’t visualize verb phrases as the monolith shown below, on the left. Instead, it’s better to think of the verb phrase as being split, into an upper part and a lower part. The lower half — the meat of the VP — contains the verb itself, and any objects that it needs to feel complete. The upper half consists of a kind of second verb phrase; it ends up encompassing the main one, not unlike how auxiliary and modal VPs sometimes do (e.g., “might have been switched”). And this new verbal region includes both the subject and, in languages with more well-rounded morphology, a special kind of affix that ties everything together.
image

But if you want to add in this structure, then you should probably expect to find some sentences without it, too. And we do!

So what does a sentence look like without this subject-introducing layer? Well, we should look for sentences that are short an agent — a doer to set the events described by the VP in motion. And when we search through our speech, it isn’t too long before we find just that.

One of the most familiar kinds of agentless sentences is the passive. With its familiar form of “be” and optional “by-phrase,” its most notable feature is that it puts its object in subject position.

    (3a)    They took David’s sister.

    (3b)    David’s sister was taken.

And in our recent discussion on the relationship between active sentences and passive ones, we mentioned Burzio’s Generalization, which makes the observation that there’s a one-to-one connection between sentences that have no agent, and sentences which have no phrases marked with accusative case (e.g., the object pronoun “them” versus the subject “they”). We suggested this had something to do with the passive “be” and its accompanying affix “-en.” But when we get to chopping the verb phrase in two, we can develop a much more satisfying picture. To see how, let’s focus in on what the split really does.

In the episode, we said that one thing this new structure did was give ditransitive verbs like “put” some breathing room inside the lower VP, so that all objects — direct and indirect alike — could live in harmony.

Something else this does, though, is put the main object — which is something that normally shows up with accusative case (e.g., “put himin there”) — right beside our new, little ‘v’ and its agent-adding powers. And this puts us just a step away from understanding what’s going on: if we imagine for a moment that our little ‘v’ is in charge of adding agents and handing out accusative case to whatever’s right next to it, then we can ask what happens when it disappears.

Well, without little ‘v’, the agent for the sentence goes missing, and so does the ability to slap accusative case on anything! Two entirely separate things that shouldn’t really have anything to do with each other unexpectedly sync up, and we can finally make sense of Burzio’s Generalization. With that little ‘v’ pulling double duty, agents and accusatives actually go hand-in-hand.

Of course, passives aren’t the only sentences prone to pairing up with more object-like subjects. On top of alternating verbs like “roll” and “melt,” which we first used to introduce the idea of dividing things up, we find that some verbs always arrive agent-free. In “He deteriorated,” “he” plays no real role in what’s happening, and if you try to use an active subject, it just sounds weird.

    (4a)    He deteriorated.

    (4b)    *They deteriorated him.

And raising verbs like “seem” and “appear” work along those same lines; they don’t come with subjects of their own, and sometimes even resort to stealing one from the following clause. Splitting the verb phrase now just means that these verbs come without an upper vP.

    (5a)    It seems he’s psychic.

    (5b)    He seems to be psychic.

Even straightforwardly transitive verbs like “eat” and “sell” can drop their agents under the right circumstances, showing off what’s sometimes called the ‘middle voice.’

    (6)    This idea won’t sell easily.

Expressions like these have even found their way into the slogans of pop culture!

So passive verbs, certain intransitives, and even ordinary transitives can, and sometimes must, appear without this outer layer, fitting snugly into our new paradigm. And some languages even flag when this new structure of ours shows up, in spite of its often stealthy character. Italian uses one auxiliary verb in the presence of a vP (“ha,” from “have”) and another in its absence (“è,” from “be”). And a similar verbal scheme can be seen at work in German.

    (7a)    Maria ha telefonato

              ‘Maria has telephoned’

    (7b)    Maria è arrivata

              ‘Maria is arrived’

    (8a)    Die Maria hat telefoniert

              ‘Maria has telephoned’

    (8b)    Die Maria ist angekommen

              ‘Maria is arrived’

Thus, the structures underlying all languages emerge little by little, not only by way of English, but through the lens of languages arranged in ways that pull apart and shine light on these otherwise guarded secrets!
