Saturday, July 16, 2011

Ellipsis

One of the great things about constructions is that you can do cool things with them which are not so easy with other parsing techniques. One example is ellipsis, where you may just omit any number of words if they are the same as in the previous clause:

По мнению одних дальше следовало 7, по мнению других — 8
PREP opinion some next went 7 PREP opinion others — 8
In the opinion of some, a 7 went next; but in the opinion of others, an 8 did

To translate this correctly, several things should be done. The parser should recognize that the dash "—" actually stands for went next. The semantic model should somehow incorporate this, as well as the meta-knowledge that this went next is actually not specified explicitly (otherwise the generator would produce but in the opinion of others an 8 went next). Finally, the generator should be able to replace the full verb phrase with did.

Parser


There's clearly some parallelism here. After processing in the opinion of others the parser should already know that this is similar to in the opinion of some. The same happens with 8, which is similar to 7. Now the parser should fill in the dash gap with the missing information taken from the previous clause. This means it should have access to all the actions it performed while processing went next, and it should repeat them.

So, there are two problems. One is how to find similar fragments given that they consist of different words. The other is how to be aware of the parser's earlier activity and be able to reproduce it in new contexts. The two problems are quite tightly connected: it's the earlier activity where you search for something similar. And it's no surprise that both are solved by the same means: keeping a change log.

Every change in the parsing state is kept in a log. Luckily, the construction architecture is very suitable for this. Remember, each word tells the parser which constructions it participates in and which of them are mutually incompatible. It also contributes arguments to the constructions: specifying the noun of a nom(inative) construction is completely different from contributing the head (verb). Based on this information, the parser resolves the possible conflicts, determines the winning constructions and executes the code attached to them, which makes the changes to the semantic model.
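
To get a feel for what such a log might contain, here is a rough Python sketch; all the names in it (Contribution, ChangeLog, record) are invented for illustration and are not the actual parser code:

# A toy model of the change log, not the real data structures.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Contribution:
    """What a single word told the parser."""
    word: str             # the surface word, e.g. "7"
    construction: str     # the construction it participates in, e.g. "nom"
    args: Dict[str, str]  # which kinds of arguments it supplied, e.g. {"noun": "D"}

@dataclass
class ChangeLog:
    entries: List[Contribution] = field(default_factory=list)

    def record(self, c: Contribution) -> None:
        # every contribution is appended in the order it was made, so the
        # parser can later look back at its own activity and replay parts of it
        self.entries.append(c)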

The log consists of the contributions each word has made. And by allowing constructions to have access to this log, the parser can tackle the two problems posed by ellipsis:

1. Similarity

7 and 8 are different words, but the contributions they make are quite similar: they invoke the same constructions (nom, gen, acc and all the other cases) but provide different arguments to them: one creates a frame whose type is 7, the other does the same with 8. So I infer that two contributions are similar if they contribute to the same construction and supply the same kinds of arguments (see the sketch after the next point).

2. Repetition

The parser knows it's processing ellipsis. It knows where the ellipsis starts (in the opinion of others) and ends (8). It has found the similar fragments in the previous clause (in the opinion of some and 7). Now it just has to look at the contribution history, find everything in between those two similar fragments and insert it between in the opinion of others and 8. The copying happens upon encountering 8 after "—", and 8's own contribution is nicely integrated into the environment set up by the copied history (e.g. it completes the noun of the nom construction, whose head was set by the copied went).
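
Continuing the toy ChangeLog sketch from above (still purely illustrative, not the real code), both steps could look roughly like this:

def similar(c1: Contribution, c2: Contribution) -> bool:
    # same construction and same kinds of arguments; the argument values
    # themselves may differ (a frame of type 7 vs. a frame of type 8)
    return c1.construction == c2.construction and set(c1.args) == set(c2.args)

def elided_span(log: ChangeLog, before_gap: Contribution, after_gap: Contribution) -> list:
    # before_gap is the contribution of the fragment preceding the dash
    # (in the opinion of others), after_gap the one following it (8).
    # Their counterparts are looked up in the history, and everything
    # recorded between them (went next) is returned for copying.
    start = next(i for i, c in enumerate(log.entries) if similar(c, before_gap))
    end = next(i for i, c in enumerate(log.entries) if similar(c, after_gap))
    return log.entries[start + 1:end]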

Intermediate representation


OK, now we have two went next situations, but only one of them was fully expressed in Russian. And we should do the same in translation. So we have to mark the copy in a special way. In the semantic model we have no contributions, we have only frames and assignments to their properties. Hence we should mark the effects of the copied contributions, i.e. the assignments they add.

For this I've introduced a notion of scope into the semantic model. A scope is just a frame that plays a special role during parsing. Which role exactly depends on the frame and its attributes. What's important is that it's activated at one moment and deactivated at another, with arbitrarily many assignments made in between. In the semantic log this is written as [X] ... [/X], where X is the scope frame and ... stands for the assignments.

I plan to use scopes for many more things, mostly of a meta-linguistic kind: separating paragraphs, reflecting the markup, etc. Using them for ellipsis is simple. You just create a frame, say its type is ellipsis, activate it, do the whole contribution-copying process (which surely adds some assignments) and deactivate it. Thus you get a nice clear separation of what's said and what's inferred:

A.type=OPINION
#1.opinion_of=A
B.type=SOME
A.arg1=B
#1.time=PAST
C.type=COME_SCALARLY
C.order=AFTER
C.arg1=D
D.type=7
D.number=true
#1.but=#2
--
A.type=OPINION
#2.opinion_of=A
B.type=OTHERS
A.arg1=B
C.type=ellipsis
[C]
D.type=COME_SCALARLY
D.order=AFTER
[/C]
D.arg1=E
E.type=8
E.number=true
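
The scope bookkeeping itself is simple; here is a hypothetical Python sketch (again with invented names, not the actual implementation) of how the [C] ... [/C] part of the log above could be produced:

class SemanticLog:
    # a toy model of the semantic log with scopes
    def __init__(self):
        self.lines = []

    def assign(self, frame, attr, value):
        self.lines.append(f"{frame}.{attr}={value}")

    def activate(self, scope):
        self.lines.append(f"[{scope}]")

    def deactivate(self, scope):
        self.lines.append(f"[/{scope}]")

log = SemanticLog()
log.assign("C", "type", "ellipsis")
log.activate("C")                          # everything until [/C] is inferred
log.assign("D", "type", "COME_SCALARLY")   # copied from the previous clause
log.assign("D", "order", "AFTER")
log.deactivate("C")
log.assign("D", "arg1", "E")               # contributed by the word 8 itself
log.assign("E", "type", "8")
log.assign("E", "number", "true")
print("\n".join(log.lines))                # reproduces the tail of the log above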

Generator


Getting the generator to work is not interesting at all. It's just a matter of adding another condition, given that I mock the generation throughout anyway.


After writing all this, I've suddenly thought that, well, this is great, but possibly unnecessary. Maybe the right way is not to replicate the history, but just to mark the whole second-clause situation as ellipsis, parse only what's in the text, attach it somewhere and assume that the (mocked) inference engine should be responsible for inferring the went next part. But I don't have enough data yet to make a choice.