Thursday, October 22, 2009

Shallow vs. structural analysis

Let's look at a very simple sentence, namely 'John has two sisters'. I'm now interested in its semantics, or, more precisely, its computer representation. The truth condition is actually very simple: the number of those who happen to be sisters of John equals 2:

|{ x | SISTER(x, JOHN) }|=2

(let the uppercase letters denote some semantic meaning here).
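This truth condition is easy to check mechanically. Here is a minimal sketch against a hand-made model; the domain and the extension of SISTER are invented purely for illustration:

```python
# Toy domain and an extensional SISTER relation:
# a pair (x, y) reads "x is a sister of y".
DOMAIN = {"Mary", "Ann", "Bob", "John"}
SISTER = {("Mary", "John"), ("Ann", "John")}

# |{ x | SISTER(x, JOHN) }| = 2
count = len({x for x in DOMAIN if (x, "John") in SISTER})
print(count == 2)  # -> True
```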

A question arises, how can we assemble this semantics from meanings of sentence components? The constituent structure for this sentence would be:

[S [NP John] [VP has [QP two sisters]]]

The dependency structure:

John <- has -> two -> sisters

The beloved one, applicative structure:

(has John (two sisters))

Lexical Functional Grammar-style:

[ PRED 'has'
  SUBJ [ PRED 'John' ]
  OBJ  [ PRED 'sisters'
         SPEC [ NUM 2 ] ] ]
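The same f-structure can be jotted down as a nested map; this is just a plain data sketch, nothing LFG-specific:

```python
# The LFG-style f-structure for 'John has two sisters' as nested dicts.
f_structure = {
    "PRED": "has",
    "SUBJ": {"PRED": "John"},
    "OBJ": {"PRED": "sisters", "SPEC": {"NUM": 2}},
}
print(f_structure["OBJ"]["SPEC"]["NUM"])  # -> 2
```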

In any of these variants 'has' takes two arguments: John and the combined 'two sisters'. So it appears that we should combine the word meanings in this order, getting something like f(HAS, JOHN, g(2, SISTER)). And this formula should somehow be equivalent to |{ x | SISTER(x, JOHN) }| = 2. The question is, what are f and g? I see no direct structural answer. The best variant I've come up with is to change the structure, replacing it with one that contains only a single predicate, which would translate to

|{ x | SISTER(x, Who) }| = N

This can be generalized a bit (take 'sibling' instead of 'sister'), but not much further. A similar sentence, 'John has two dogs', would have a different semantics, e.g. |{ x | DOG(x) & BELONGS(x, JOHN) }| = 2. A two-place, 'sister'-like 'dog' predicate would be funny.
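The contrast between the two translations can be made concrete in a toy model; all individuals and relations below are invented for illustration:

```python
# Invented model: Mary and Ann are John's sisters; Rex and Spot are dogs he owns.
DOMAIN = {"Mary", "Ann", "Rex", "Spot", "John"}
SISTER = {("Mary", "John"), ("Ann", "John")}   # (x, y): x is a sister of y
DOG = {"Rex", "Spot"}
BELONGS = {("Rex", "John"), ("Spot", "John")}  # (x, y): x belongs to y

# 'John has two sisters': |{ x | SISTER(x, JOHN) }| = 2
has_two_sisters = len({x for x in DOMAIN if (x, "John") in SISTER}) == 2

# 'John has two dogs': |{ x | DOG(x) & BELONGS(x, JOHN) }| = 2
has_two_dogs = len({x for x in DOMAIN
                    if x in DOG and (x, "John") in BELONGS}) == 2

print(has_two_sisters, has_two_dogs)  # -> True True
```

The point of the sketch: the two sentences share a surface form, yet their truth conditions quantify over different predicates with different arities.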

So it seems that all the structures I know of are of no use with this sentence. That's one of the reasons I prefer shallow parsing based on patterns with wildcards: it appears to map better onto semantics. And a probable sad consequence is that the applicative structure, beautiful as it is, will remain unapplied.
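The pattern-with-wildcards idea can be sketched very directly: a surface pattern like 'X has N sisters' maps straight to a semantic template, bypassing compositional structure. Everything below (the patterns, the number-word table, the output notation) is an illustrative assumption, not a real system:

```python
import re

# Tiny lexicon of number words (illustrative).
NUM_WORDS = {"one": 1, "two": 2, "three": 3}

def sisters_sem(m):
    # 'X has N sisters' -> |{ x | SISTER(x, X) }| = N
    return "|{ x | SISTER(x, %s) }| = %d" % (m.group(1).upper(), NUM_WORDS[m.group(2)])

def dogs_sem(m):
    # 'X has N dogs' -> |{ x | DOG(x) & BELONGS(x, X) }| = N
    return "|{ x | DOG(x) & BELONGS(x, %s) }| = %d" % (m.group(1).upper(), NUM_WORDS[m.group(2)])

# Each shallow pattern carries its own semantic template.
PATTERNS = [
    (re.compile(r"(\w+) has (\w+) sisters"), sisters_sem),
    (re.compile(r"(\w+) has (\w+) dogs"), dogs_sem),
]

def translate(sentence):
    for pattern, build in PATTERNS:
        m = pattern.fullmatch(sentence)
        if m:
            return build(m)
    return None

print(translate("John has two sisters"))
# -> |{ x | SISTER(x, JOHN) }| = 2
```

Note how the sister/dog asymmetry that defeated the compositional account is trivial here: each pattern simply owns its own template.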
