Many natural-language semantic formalisms divide meaning into predicates and their arguments. The names vary, but the representation of John loves Mary always comes out as something like LOVES(John, Mary). Almost everyone seems to agree that this captures the sentence's meaning quite well, even if the details of the representation differ significantly.
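For concreteness, here is a minimal sketch of such a predicate-argument representation; the Predicate class and its fields are my own illustration, not part of any particular formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Predicate:
    """A predicate applied to its arguments, e.g. LOVES(John, Mary)."""
    name: str
    args: tuple

    def __str__(self) -> str:
        return f"{self.name}({', '.join(self.args)})"

# The traditional representation of "John loves Mary":
print(Predicate("LOVES", ("John", "Mary")))  # LOVES(John, Mary)
```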
What's wrong with it? Imagine this idea being uttered in different contexts (bold indicates logical stress):
(1) Who loves Mary? **John** loves Mary.
(2) Whom does John love? John loves **Mary**.
(3) Does John like Mary? John **loves** Mary.
(4) Alice loves Bob, **John** loves **Mary**.
(5) Why is John so happy? John **loves Mary**.
(6) John loves Mary. //the very beginning of a text
(7) Do you know who loves Mary? It's **John**!
After hearing any of these 7 examples, the listener knows that LOVES(John, Mary). But does that mean each example contains a sentence with that meaning? In fact, only (6) has exactly that meaning; in the other examples it is split across the two clauses in various ways.
A natural definition of sentence semantics would be the difference between the listener's knowledge before and after hearing the sentence. Under this definition, the meanings of John loves Mary above are completely different, because in each case the clause is heard against different background knowledge (a toy sketch of this idea follows the list):
(1) We know LOVES(X, Mary). X := John.
(2) We know LOVES(John, X). X := Mary.
(3) We know X(John, Mary) and even wonder if X = LIKES. But X := LOVES.
(4) We know LOVES(X, Y). X := John, Y := Mary.
(5) We know X(John). X := λy LOVES(y, Mary).
(6) We know nothing. LOVES(John, Mary).
(7) We know LOVES(X, Mary). X := John. //same as (1)
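Here is that toy sketch of the background-plus-binding split; the interpret helper and the string encoding of formulas are my own invention for illustration:

```python
def interpret(background: str, binding: dict) -> str:
    """Fill the variables of the background pattern with the new information.

    The background is what the listener knew before the sentence;
    the result is what they know after it.
    """
    result = background
    for var, value in binding.items():
        result = result.replace(var, value)
    return result

# (1) "Who loves Mary? John loves Mary."
print(interpret("LOVES(X, Mary)", {"X": "John"}))            # LOVES(John, Mary)

# (4) "Alice loves Bob, John loves Mary."
print(interpret("LOVES(X, Y)", {"X": "John", "Y": "Mary"}))  # LOVES(John, Mary)

# (6) No background at all: the whole clause is new information.
print(interpret("Z", {"Z": "LOVES(John, Mary)"}))            # LOVES(John, Mary)
```

All seven examples converge on the same final knowledge; they differ only in how that knowledge is divided between background and assertion.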
We now see 6 very different semantics for a single sentence, pronounced with different intonation. And only (6) is the canonical case, where there is no background at all (although the listener presumably still knows who John and Mary are). So it appears that the traditional logical approach describes only the sentences that open a text or discourse. But those are very few compared to all the sentences in the language! What's the point of analyzing only a small fraction of the material?
So, to describe a sentence's meaning, you should always pay attention to what the reader or listener knew before encountering it; otherwise you simply can't call it the sentence's meaning. Isn't that obvious? Fortunately, there are modern dynamic semantics approaches that seem to understand the problem. It's just a pity that it went unappreciated for so long.
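In dynamic semantics this is built in from the start: a sentence denotes a context-change potential, a function from the listener's knowledge state to a new one, rather than a bare proposition. Schematically (my notation, not that of any particular system):

```latex
\llbracket S \rrbracket : \mathrm{Context} \to \mathrm{Context},
\qquad
c_{\mathrm{after}} = \llbracket S \rrbracket(c_{\mathrm{before}})
```

The meaning of S is then exactly the difference between c_after and c_before, which is the definition proposed above.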
2 comments:
The semantics given for the 5th sentence seems to be wider than the semantics of the original sentence. For example, if LOVES(Mike, Mary) is true, then X(Mike) is true just as X(John) is, but X(Mike) is not part of the meaning of the sentence.
Not exactly. The background part of the sentence's semantics describes what the listener must know before hearing it. And he doesn't know X(Mike), at least not in the proposed context. For him to know X(Mike), the background would have had to be something like:
- What's with John?
- The same as with Mike.
- Oh, really? But what is it?
- OK, I'll give you a hint. John loves Mary.
But even this is artificial. I'd say that here we know Y(Mike), where Y = λz LOVES(z, someone).
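To spell out the reply (my formalization, reading someone as an existential quantifier): applying the strong property to John recovers the original fact, while Mike only gets the weaker one:

```latex
(\lambda y.\ \mathrm{LOVES}(y, \mathrm{Mary}))(\mathrm{John}) \to_\beta \mathrm{LOVES}(\mathrm{John}, \mathrm{Mary})

(\lambda z.\ \exists w\ \mathrm{LOVES}(z, w))(\mathrm{Mike}) \to_\beta \exists w\ \mathrm{LOVES}(\mathrm{Mike}, w)
```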