From the parsing perspective, some words have far more far-reaching consequences than others. Take English articles, for example. They don't just mark definiteness or indefiniteness; they also give a strong hint that a noun is coming. So, having encountered an article, a smart parser may immediately create a placeholder node for a noun (and also specify its definiteness). When the noun comes, instead of creating its own structure it will simply fit into the existing placeholder.
Once a noun placeholder is constructed, a parser may immediately attach it where it is needed. If there is an ambiguous verb or preposition, attaching that noun as an argument can help choose the right alternative and thus reduce working memory consumption.
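To make this concrete, here is a toy sketch of the idea in Python. The class and tag names are my own invention, not any particular parser's API:

```python
class Placeholder:
    """A node created before its head word has been seen."""
    def __init__(self, category, **features):
        self.category = category      # e.g. 'NP'
        self.features = features      # e.g. {'definite': True}
        self.head = None              # filled in when the head word arrives

    def fill(self, word):
        self.head = word

def parse(tokens):
    pending = []                      # nodes still awaiting integration
    for word, tag in tokens:
        if tag == 'ART':
            # An article announces a noun: create an NP placeholder right away.
            pending.append(Placeholder('NP', definite=(word == 'the')))
        elif tag == 'NOUN' and pending and pending[-1].category == 'NP':
            # The noun fits into the existing placeholder instead of
            # building a structure of its own.
            pending[-1].fill(word)
        # A verb or preposition seen here could already take the NP
        # placeholder as an argument, shrinking `pending` even further.
    return pending

nodes = parse([('the', 'ART'), ('dog', 'NOUN')])
print(nodes[0].head, nodes[0].features)   # dog {'definite': True}
```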
Hawkins argues that such placeholders play an important role in syntax. He shows for many languages that their word order is often optimized so that such placeholders can be constructed and integrated as early as possible. This helps minimize the number of nodes awaiting integration at any moment during parsing.
In German and Russian, the NP placeholder creators (determiners and adjectives, respectively) also carry case markers. Therefore the case of a placeholder is known as well, which also aids faster verb and preposition disambiguation (in German, for example, in+Dative denotes location, while in+Accusative denotes direction).
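Again, a toy illustration of how the case on a placeholder lets the ambiguous preposition be resolved before the noun itself arrives. The lexicon is deliberately tiny and restricted to unambiguous determiners (the real German article system is far messier):

```python
GERMAN_DETERMINER_CASE = {
    'dem': 'Dative',        # e.g. "in dem Haus"   -- in the house (location)
    'einen': 'Accusative',  # e.g. "in einen Park" -- into a park (direction)
}

def resolve_in(determiner):
    """Disambiguate German 'in' as soon as the determiner is seen,
    before the noun itself has arrived."""
    case = GERMAN_DETERMINER_CASE.get(determiner)
    if case == 'Dative':
        return 'location'
    if case == 'Accusative':
        return 'direction'
    return 'unknown'

print(resolve_in('dem'))    # location
print(resolve_in('einen'))  # direction
```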
Verb placeholders also seem to exist. For example, in Japanese, where the verb comes at the very end of the sentence, there is no sign that a sequence of argument noun phrases causes any processing difficulties. I conclude that these noun phrases get integrated into some structure as they appear. When the verb comes, it just quickly iterates through this structure and assigns semantic roles to the accumulated noun phrases. The rich case marking of Japanese helps the arguments to be stored efficiently and not interfere with each other while awaiting the verb. I think the structure I'm talking about can safely be called a verbal (or clausal) placeholder.
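Here is roughly how I imagine such a clausal placeholder working, again as a Python sketch with a simplified particle-to-role mapping of my own:

```python
CASE_PARTICLES = {'ga': 'nominative', 'o': 'accusative', 'ni': 'dative'}

class ClausePlaceholder:
    def __init__(self):
        self.arguments = {}           # case -> noun phrase

    def add_np(self, np, particle):
        # The case particle keeps the arguments from interfering
        # with each other while they wait for the verb.
        self.arguments[CASE_PARTICLES[particle]] = np

    def integrate_verb(self, verb, role_frame):
        # role_frame maps case to semantic role, e.g. {'nominative': 'agent'}.
        return {role: self.arguments[case]
                for case, role in role_frame.items()
                if case in self.arguments}

clause = ClausePlaceholder()
clause.add_np('Taroo', 'ga')      # Taroo-NOM
clause.add_np('hon', 'o')         # book-ACC
roles = clause.integrate_verb('yomu',   # 'read'
                              {'nominative': 'agent', 'accusative': 'theme'})
print(roles)   # {'agent': 'Taroo', 'theme': 'hon'}
```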
It gets more interesting in other languages. On the surface, German is a verb-second language. A nice processing explanation would be that there is no verb placeholder, and the first constituent hangs active in memory until a verb actually appears, so the verb should come as early as possible to reduce memory usage. In most relative clauses, on the other hand, the verb comes at the end, and no processing problems arise even if there are lots of NP/PP dependents before it. Incidentally, all these clauses have to be introduced by some complementizer. When there is no complementizer, the verb comes second just as if it were a main clause. All this makes me think that the complementizer just creates a verb placeholder, which enables the verb to be postponed.
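In code, my conjecture could look something like this; the names and the flat tokenization are illustrative only:

```python
class VerbPlaceholder:
    def __init__(self):
        self.dependents = []   # NPs/PPs parked here are cheap to hold

def parse_german_clause(tokens, complementizer=None):
    # A complementizer (e.g. 'dass') opens a clause with a verb placeholder.
    placeholder = VerbPlaceholder() if complementizer else None
    active = []
    for word, tag in tokens:
        if tag == 'VERB':
            if placeholder is not None:
                # Verb-final: integrate everything accumulated in the placeholder.
                return (word, placeholder.dependents)
            # Verb-second: only the first constituent was held active,
            # and the verb had better show up right after it.
            return (word, active)
        if placeholder is not None:
            placeholder.dependents.append(word)
        else:
            active.append(word)   # without a placeholder this must stay short
    raise ValueError('no verb found')

# Main clause, verb second: "Hans liest das Buch"
print(parse_german_clause([('Hans', 'NP'), ('liest', 'VERB'), ('das Buch', 'NP')]))
# Subordinate clause, verb last: "(dass) Hans das Buch liest"
print(parse_german_clause([('Hans', 'NP'), ('das Buch', 'NP'), ('liest', 'VERB')],
                          complementizer='dass'))
```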
In English, the verb never comes too late, so the need for a placeholder is doubtful. Russian is tricky in this respect: normally the verb is close to the beginning, but it can easily move to the end for focalization purposes. I'm not aware of any experimental data, but I personally don't feel particularly overburdened by multiple intermediate NPs and PPs in this case. So I have no idea when a verbal placeholder is created in Russian: always, never, only when there are too many subsequent NPs and PPs, or only in some specific context (e.g. one that forces focalization).