Saturday, November 12, 2011

Test-driven parsing

I use Test-Driven Development for my natural language parser, in a very orthodox way. And now that I do it, I see that what I had before was just test-first development, not quite test-driven.

I try to do something new in my parser every day. But very often, in the evening, I see that I won't manage to get the tests to pass today. But if I insert a little hack here and there, then maybe... And if mocking really helps, I do it, though not without consequences: I add another test, one that doesn't pass precisely because of the mocking. Usually I just take the sentence and add a small variation to it and, of course, to its translation (e.g. I can change the tense, a noun's person, or the word order). After that I may go to sleep untroubled: the new test belongs to the next day, and I have plenty of time to think about how to parse in a more generic way. If this time is not sufficient, I can mock further and create one more test. After a series of those variation tests, I get a satisfactory algorithm and quite good coverage. That's how it goes.
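To make the variation trick concrete, here is roughly what such a pair of tests could look like in Python. The translate() helper is a hypothetical stand-in for the parser plus generator, and the second sentence is an invented tense variation, not one taken from the real suite:

def translate(sentence):
    raise NotImplementedError   # hypothetical stand-in for the real parser + generator

def test_tense_variation():
    # the sentence that passes today, hacks and all
    assert translate("Это отвлекло нас от нашего спора.") == \
        "That distracted us from our argument."
    # the variation added in the evening; making it pass is tomorrow's problem
    assert translate("Это отвлекает нас от нашего спора.") == \
        "That distracts us from our argument."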

This mocking-all-the-time approach seems to be quite a fruitful way to get things done. Now I really understand those agile guys saying "keep it simple and never change anything without a test". I can never think of all the possibilities, and there's no right way to do things yet. If I implement a nice beautiful logic based on just one sentence, then the next sentence will inevitably prove this code wrong, or too inflexible, or just plain inconvenient. But when I mock things several times, the mocking logic tends to get more and more generic and finally doesn't look like mocking at all, while still being flexible and convenient. Many mocks have already been transformed into serious logic; many others still await it.

By the way, it's not about unit testing at all. Of the 175 tests I have right now, only 4 are unit tests. They test the longest common subsequence algorithm, which I need for ellipsis and which I had to implement myself. There are also 17 parsing tests which check the internal semantic representation produced after parsing. They are useful for keeping an eye on the semantics, but their expected output also tends to change in insignificant ways quite often, and then it's just tedious to update the tests. So I prefer to write translation tests, because their expectations are a gold standard: I could completely change the architecture of the parser without ever touching them.
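For reference, the algorithm behind those four unit tests is the classic dynamic-programming one. A minimal Python sketch (not the parser's actual code, and the example strings are arbitrary) could look like this:

def longest_common_subsequence(xs, ys):
    # Classic dynamic programming: table[i][j] is the LCS length of xs[:i] and ys[:j].
    table = [[0] * (len(ys) + 1) for _ in range(len(xs) + 1)]
    for i in range(1, len(xs) + 1):
        for j in range(1, len(ys) + 1):
            if xs[i - 1] == ys[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    # Walk the table backwards to recover one actual common subsequence.
    result, i, j = [], len(xs), len(ys)
    while i and j:
        if xs[i - 1] == ys[j - 1]:
            result.append(xs[i - 1])
            i, j = i - 1, j - 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return result[::-1]

assert "".join(longest_common_subsequence("nominative", "accusative")) == "ative"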

Tuesday, November 8, 2011

NLP using graphs, actually


The reason for the complete rewrite was simple:

Это  отвлекло   нас от   нашего спора.
That distracted us  from our    argument.

Every word makes a contribution to the parsing state by saying that it participates in a number of constructions, and provides some attributes for each of them. So a contribution consists of pairs of constructions and attribute sets. I'll call such pairs mites.
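As a rough illustration (not the parser's real data structures, and the field names are mine), a mite could be modeled like this:

from dataclasses import dataclass

@dataclass(frozen=True)
class Mite:
    construction: str        # e.g. "nom", "acc"
    attributes: frozenset    # e.g. frozenset({("noun", "A")}) or frozenset({("head", "B")})

# The contribution of "это" is then two mites that contradict each other:
eto = [
    Mite("nom", frozenset({("noun", "A")})),   # the nominative reading
    Mite("acc", frozenset({("noun", "A")})),   # the accusative reading
]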

That (это) can add noun attributes to the nom(inative) and acc(usative) constructions, and these mites contradict each other. Distracted (отвлекло) is a transitive verb, so it defines the head for the nom and acc constructions. But these two mites don't contradict each other; they're both very welcome.

Earlier, I unified mites as soon as they came, and I only kept the results of the unification, not the mites themselves. After the word that, there were nom and acc constructions with noun defined, and nom was (randomly) chosen as active. Active constructions had higher priority, so the verb's nom mite was integrated first. The two nom mites were unified, and the nom construction, now with both head and noun defined, linked the values of these attributes semantically. That's all great. But what to do with acc?

You can't just unify it as well, because there's already a nom construction and it contradicts acc in its first mite. So you have to drop one of the mites completely and keep the other. In this sentence, it made sense to drop the acc.noun mite (the accusative alternative of that). The acc.head mite would survive and unify nicely with the real object that comes next, us. But that was a hack, and it didn't work when the first это was in fact accusative. A better way would be to preserve both acc mites but mark only the second one as active. That clearly required a per-mite active status, not a per-construction one.

So now I don't remove any mites automatically when processing word contributions; they all survive. The question is whether they are active or not. Active mites are guaranteed to be non-contradicting. Only the active mites of the same construction are unified and passed to that construction, which may then make some semantic changes based on that information.
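In code, the bookkeeping might look something like this sketch, reusing the Mite shape from above (the representation of the active status and the example frame names are mine):

from collections import defaultdict

def unify_active(mites):
    """mites: list of (Mite, active_flag); merge the attributes of active mites per construction."""
    merged = defaultdict(dict)
    for mite, active in mites:
        if active:
            merged[mite.construction].update(dict(mite.attributes))
    return dict(merged)

state = [
    (Mite("nom", frozenset({("noun", "A")})), True),    # from "это"
    (Mite("acc", frozenset({("noun", "A")})), False),   # kept, but inactive
    (Mite("nom", frozenset({("head", "B")})), True),    # from "отвлекло"
    (Mite("acc", frozenset({("head", "B")})), True),
]
print(unify_active(state))
# {'nom': {'noun': 'A', 'head': 'B'}, 'acc': {'head': 'B'}}
# The nom construction, seeing both noun and head, would now link A and B semantically.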

When another word comes, the parser solves a constraint-satisfaction task on graphs. Some of the new mites are chosen to be active, and the old mites may change their active status to accommodate that. The reanalysis tries to
  • minimize the number of active constructions,
  • maximize the number of mites in those constructions (more mites mean more information),
  • prefer more recent mites,
  • prefer semantically more plausible variants.
Surprisingly, it works, though twice as slowly as before. And the algorithm now seems so clean and logical that I can't find anything to remove! But I definitely should optimize, as I get bored waiting a long 10 seconds for the 171 tests to pass.
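For the curious, here is a deliberately naive sketch of that reanalysis, again on the Mite shape from above. The contradiction test and the recency bookkeeping are passed in, semantic plausibility is left out, and I assume the parser keeps as many mites active as it can; the real algorithm is certainly smarter than enumerating every subset:

from itertools import product

def choose_active(mites, contradicts, recency):
    """mites: list of Mite; contradicts(a, b) -> bool; recency[mite] -> word position."""
    best, best_key = None, None
    for flags in product([False, True], repeat=len(mites)):
        active = [m for m, on in zip(mites, flags) if on]
        # Active mites must be pairwise non-contradicting.
        if any(contradicts(a, b) for i, a in enumerate(active) for b in active[i + 1:]):
            continue
        # Consider only maximal sets: every left-out mite must clash with an active one.
        left_out = [m for m, on in zip(mites, flags) if not on]
        if any(all(not contradicts(m, a) for a in active) for m in left_out):
            continue
        key = (
            len({m.construction for m in active}),   # 1. fewer active constructions
            -len(active),                            # 2. more mites in those constructions
            -sum(recency[m] for m in active),        # 3. prefer more recent mites
        )                                            # 4. semantic plausibility would go here
        if best_key is None or key < best_key:
            best, best_key = active, key
    return best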

Monday, October 31, 2011

Great was their and my amazement!


Let's start with a simple noun phrase: my amazement. My parser analyzes it as follows. The word my creates a frame with type ME and contributes a possessor attribute to the possessive construction. The word amazement creates a frame with type AMAZE and contributes a head attribute to the same possessive construction. Now that both the head and possessor attributes are defined, the construction links them with an arg1 attribute:

A.type=ME
B.type=AMAZE
B.arg1=A
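A tiny sketch of how such a construction might fire, with invented class and function names (the real parser's API surely looks different):

class Frame:
    def __init__(self, name, type_):
        self.name, self.type = name, type_
        self.slots = {}

def possessive(args):
    """Fire once both contributors have arrived: link head and possessor via arg1."""
    if "head" in args and "possessor" in args:
        args["head"].slots["arg1"] = args["possessor"]

a = Frame("A", "ME")       # created by "my"
b = Frame("B", "AMAZE")    # created by "amazement"
args = {"possessor": a}
possessive(args)           # after "my": nothing to link yet
args["head"] = b
possessive(args)           # after "amazement": B.arg1 = A
assert b.slots["arg1"] is a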


It's quite similar with their amazement:



A.type=THEY
B.type=AMAZE
B.arg1=A




Now let's parse their and my amazement. The semantics I want to get is this:

A.type=THEY
B.type=ME
C.member1=A
C.conj=and
C.member2=B
D.type=AMAZE
D.arg1=C

I want their, my and amazement to preserve their functionality. This means that after their and my there should be a possessive construction whose possessor points to the conjunction frame C. So it's the word and that should do this; there are no other candidates.

Currently and puts the parser into a special state. In this state, every subsequent contribution is intercepted and analyzed to see whether it can be a conjunct. A contribution can be the right operand of coordination if there was a similar contribution to the left of it (similar in the same sense as with ellipsis). When a similar contribution is found, the parser takes both the left and right contributions and merges them. In this particular case, the left contribution had possessive.possessor=A, the right one had possessive.possessor=B. During the merge, a new frame C is introduced, a composite over A and B. Now the possessor attribute of the merged possessive construction points to C instead of A or B. The updated contribution is applied to the parsing state, so that when amazement comes, the result is precisely what we wanted. That's it.
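Here is a rough sketch of that merge, assuming contributions are represented as maps from construction names to attribute dictionaries (the representation and the composite-frame shape are mine):

def merge_contributions(left, right, conj):
    """Merge two similar contributions ({construction: {attribute: value}}).
    Attributes whose values differ are wrapped in a fresh composite frame."""
    merged = {}
    for construction in left.keys() & right.keys():
        attrs = {}
        for key in left[construction].keys() & right[construction].keys():
            l, r = left[construction][key], right[construction][key]
            attrs[key] = l if l == r else {"member1": l, "conj": conj, "member2": r}
        merged[construction] = attrs
    return merged

left  = {"possessive": {"possessor": "A"}}    # contribution of "their"
right = {"possessive": {"possessor": "B"}}    # contribution of "my"
print(merge_contributions(left, right, "and"))
# {'possessive': {'possessor': {'member1': 'A', 'conj': 'and', 'member2': 'B'}}}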

This is the core idea. In my parser, conjunctions don't connect constituents or any other static structure. They connect contributions, i.e. the changes made to the parsing state. Merging two changes results in one merged change. It's quite powerful and allows for some interesting things like non-constituent coordination. But that's another story.

Tuesday, October 25, 2011

NLP using graphs

Could natural language parsing be a task on graphs? Maybe, at least partly.

Now, during parsing, every word contributes to some constructions. Some of these contributions are mutually exclusive. For example, in Russian a noun typically can't be nominative and accusative at the same time. So this noun's contributions to the nom and acc constructions are incompatible.

Different words may also have contradictory contributions. Two nouns in the same clause can't both be accusative, even if their forms are compatible with such an alternative (i.e. they both contribute to the acc construction).

So here's the idea. Consider a graph with a vertex for each word's contribution to each construction, and an edge wherever two contributions contradict each other in any way. The task is then to find a maximal subset of vertices none of which are connected to each other, i.e. an independent set.
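A toy sketch of that selection, with a greedy stand-in for the real search (a genuine parser would weigh the candidates much more carefully, as the constraints below suggest; the vertex names are invented):

def pick_compatible(vertices, edges):
    """Greedily build a maximal set of pairwise non-adjacent vertices;
    edges is a set of frozenset pairs of contradicting contributions."""
    chosen = []
    for v in vertices:                                  # e.g. in order of preference
        if all(frozenset({v, c}) not in edges for c in chosen):
            chosen.append(v)
    return chosen

vertices = ["noun1.nom", "noun1.acc", "noun2.nom", "noun2.acc"]
edges = {
    frozenset({"noun1.nom", "noun1.acc"}),   # one word can't be both cases
    frozenset({"noun2.nom", "noun2.acc"}),
    frozenset({"noun1.acc", "noun2.acc"}),   # two accusatives in one clause
}
print(pick_compatible(vertices, edges))
# one maximal compatible set: ['noun1.nom', 'noun2.nom']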

There are, of course, other constraints. The resulting structure should make sense from the semantic viewpoint. The subset should be constructed incrementally and conservatively: if the parser can proceed without reanalyzing the already built structures, it should do so. Finally, this graph task has a very limited scope; for example, it doesn't span clause boundaries.

That's just an idea. Currently my parser doesn't track individual contributions per construction; it just unifies them eagerly. But the test data suggests that might change.

Saturday, July 16, 2011

Ellipsis

One of the great things about constructions is that you can do cool things with them that are not so easy with other parsing techniques. One example is ellipsis, where you may just omit any number of words if they are the same as in the previous clause:

По мнению одних дальше следовало 7, по мнению других — 8
PREP opinion some next went 7 PREP opinion others — 8
In the opinion of some, a 7 went next; but in opinion of others an 8 did

To translate this correctly, several things should be done. The parser should recognize that the dash actually stands for went next. The semantic model should somehow incorporate this, as well as the meta-knowledge that this went next is actually not specified explicitly (otherwise the generator would produce but in opinion of others an 8 went next). Finally, the generator should be able to replace the full verb phrase with did.

Parser


There's clearly some parallelism here. After processing in opinion of others, the parser should already know that it is similar to in the opinion of some. The same happens with 8, which is similar to 7. Now the parser should fill in the dash gap with the missing information taken from the previous clause. This means it should have access to all the actions it performed while processing went next, and it should be able to repeat them.

So, there are two problems. One is how to find similar fragments given that they consist of different words. The other is how to be aware of the parser's earlier activity and be able to reproduce it in new contexts. The problems are quite tightly connected: it's the earlier activity where you search for something similar. And it's no surprise that both problems are solved by the same means: keeping a change log.

Every change in the parsing state is kept in a log. Luckily, the construction architecture is very suitable for this. Remember, each word tells the parser which constructions it participates in and which of them are mutually incompatible. It also contributes arguments to the constructions: specifying the noun of a nom(inative) construction is completely different from contributing its head (a verb). Based on this information, the parser resolves the possible conflicts, determines the winning constructions and executes the code attached to them, which makes the changes to the semantic model.
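A hedged sketch of what such a log could look like, with a contribution represented as a map from construction names to attributes (the shape and the frame names are illustrative, loosely following the semantic log shown further down):

change_log = []

def contribute(word, contribution):
    """contribution: {construction: {attribute: value}}"""
    change_log.append((word, contribution))
    # ...this is where the real parser would resolve conflicts and run the
    # winning constructions' attached code against the semantic model...

contribute("следовало", {"nom": {"head": "C"}})   # "went": supplies the verb head
contribute("7",         {"nom": {"noun": "D"}})   # "7": completes the noun slot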

The log consists of the contributions each word has made. And by allowing constructions to have access to this log, the parser can tackle the two problems posed by ellipsis:

1. Similarity

7 and 8 are different words, but the contributions they make are quite similar: they invoke the same constructions (nom, gen, acc, ..., all the cases) but provide different arguments to them: one creates a frame whose type is 7, the other does the same with 8. So I infer that two contributions are similar if they contribute to the same constructions and supply the same kinds of arguments.
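As a sketch, with the same contribution shape as above (the test and the example values are mine, not the parser's code):

def similar(a, b):
    """Two contributions ({construction: {attribute: value}}) are similar when
    they touch the same constructions with the same kinds of attributes,
    regardless of the attribute values."""
    return {c: set(attrs) for c, attrs in a.items()} == \
           {c: set(attrs) for c, attrs in b.items()}

seven = {"nom": {"noun": "D"}, "acc": {"noun": "D"}}   # contribution of "7"
eight = {"nom": {"noun": "E"}, "acc": {"noun": "E"}}   # contribution of "8"
assert similar(seven, eight)
assert not similar(seven, {"nom": {"head": "C"}})      # "went" is not similar to "7"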

2. Repetition

The parser knows it's processing ellipsis. It knows where the ellipsis starts (in opinion of others) and ends (8). It has found the similar fragments in the previous clause (in the opinion of some and 7). Now it just has to look at the contribution history, find everything between those two similar fragments and insert it between in opinion of others and 8. The copying happens upon encountering 8 after the dash, and the contribution of 8 itself is nicely integrated into the environment set up by the copied history (e.g. it completes the noun of the nom construction, whose head was set by the copied went).
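The replay itself then boils down to something like this sketch (the indices and the apply callback are, again, illustrative):

def replay_gap(log, start, end, apply):
    """log: list of (word, contribution); start and end are the indices of the
    entries in the previous clause that are similar to the ellipsis boundaries.
    Everything recorded strictly between them is re-applied in the new clause."""
    for word, contribution in log[start + 1 : end]:
        apply(contribution)        # e.g. re-runs the "went next" contributions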

Intermediate representation


OK, now we have two went next situations, but only one of them was fully expressed in Russian, and we should do the same in the translation. So we have to mark the copy in a special way. In the semantic model we have no contributions; we have only frames and assignments to their properties. Hence we should mark the effects of the copied contributions, i.e. the assignments they add.

For this I've introduced a notion of scope into the semantic model. A scope is just a frame that plays a special role during parsing. Which role exactly depends on the frame and its attributes. What matters is that it's activated at one moment and deactivated at another, with arbitrarily many assignments made in between. In the semantic log this is written as [X] ... [/X], where X is the scope frame and ... stands for the assignments.

I plan to use scopes for many more things, mostly of a meta-linguistic kind: separating paragraphs, reflecting the markup, etc. Using them for ellipsis is simple. You just create a frame, say its type is ellipsis, activate it, do the whole contribution-copying process (which surely adds some assignments) and deactivate it. Thus you get a nice clear separation of what's said from what's inferred:

A.type=OPINION
#1.opinion_of=A
B.type=SOME
A.arg1=B
#1.time=PAST
C.type=COME_SCALARLY
C.order=AFTER
C.arg1=D
D.type=7
D.number=true
#1.but=#2
--
A.type=OPINION
#2.opinion_of=A
B.type=OTHERS
A.arg1=B
C.type=ellipsis
[C]
D.type=COME_SCALARLY
D.order=AFTER
[/C]
D.arg1=E
E.type=8
E.number=true

Generator


Getting the generator to work is not interesting at all. It's just a matter of adding another condition, given that I mock the generation throughout anyway.


After writing all this, it suddenly occurred to me that, well, this is great, but possibly unnecessary. Maybe the right way is not to replicate the history but just to mark the whole second-clause situation as ellipsis, parse only what's in the text, attach it somewhere and assume that the (mocked) inference engine is responsible for inferring the went next part. But I don't have enough data yet to make a choice.

Sunday, May 8, 2011

Lost in translation simulation

That I don't write about my parser doesn't mean I don't write it. I just can't be bothered to write both. Most of the time I prefer coding. And I can't say there's a lack of interesting problems. For example, here's a part of the 12th sentence:

по мнению одних дальше следовало 7, по мнению других - 8
In the opinion of some, a 7 went next; but in opinion of others an 8 did

There are many interesting things here. Just look at this nice little ellipsis (the dash before 8)! But what worries me the most is English. Or just my English. I've found that I don't know it well enough to understand why the translation looks the way it does and how I'm supposed to generate it.

Take the articles, for example: a 7 went next; an 8 did. Why on earth are the numbers indefinite? Are there many sevens and is it just one of them? I don't get it. But it's all over the translation. Only in the first sentence are 7 and 8 without articles; in every other place they're indefinite.

Then, compare In the opinion of some with in opinion of others. The main difference is the article again. Is the opinion of some more definite than the opinion of the others? Does it just sound better that way? Of course, I've hacked all that, but I'm still uncomfortable.

And, by the way, in case anyone cares, here it is: lots of hacks and some ideas.

Saturday, April 30, 2011

Disambiguation by constructions

A word may have many senses and a good parser must choose the right one. Which is the right one? This is often obvious from the context. When you see an ambiguous word alone (e.g. cooks), you can tell nothing about its meaning. But if you see it in context, you may start guessing its syntactic and/or semantic behavior (our cooks have prepared the dinner vs. Mary cooks fish). Surrounding words greatly help in determining the part of speech, and many disambiguation algorithms take advantage of that.

But who on earth cares about parts of speech? Well, many do: parsers, for example, both statistical and declarative, employ this information for building all kinds of structures. But anyway, that's an intermediate thing used for those parsers' own convenience. For the ultimate text-analysis tasks, parts of speech are not important at all; the meaning is what matters. So why bother at all? I'm currently trying to live without the intermediate part-of-speech level in my parser, and so far it works. How?

Consider cooks again. It can participate in the following constructions:
  1. she cooks
  2. cooks fish
  3. cooks when something happens
  4. cooks well
  5. the cooks
  6. sad cooks
  7. cooks that came from Germany
  8. and so on
Of those, 1-4 are "verbal": they refer to the same meaning of cooks, the process of cooking. And in fact they can all occur together in one sentence: She cooks fish very well when she's not tired. 5-7, on the other hand, are "nominal": they describe the people who cook for a living. They are also mutually compatible, and at the same time totally incompatible with 1-4.

Let's now say there are no nouns, verbs and so on; there are only constructions. Upon encountering cooks, the parser notes all the constructions possible with this word (at least 1-7 from above). It also marks some of them (1-4) as incompatible with others (5-7). Then another word comes, for example fish. It also generates tons of constructions, some of them also mutually incompatible (fish can be a noun or a verb as well). Importantly, one of them is the familiar Transitive (number 2 in the list). It's been suggested by both words, and it clearly wins over the others, which were suggested by only one of the two words.

Now the constructions which are incompatible with this Transitive can be pruned: both the "nominal" ones for cooks and the "verbal" ones for fish. The Transitive is promoted and may now contribute to the meaning of the entire text (e.g. Cook.patient=Fish). Disambiguation complete.
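A toy sketch of that pruning, where each word proposes its constructions grouped into mutually incompatible alternatives. Apart from Transitive, the construction names are invented, and a real implementation would of course be less crude:

def disambiguate(words):
    """words: {word: {alternative: set of construction names}};
    the alternatives of one word are mutually incompatible."""
    proposers = {}
    for word, alternatives in words.items():
        for constructions in alternatives.values():
            for cx in constructions:
                proposers.setdefault(cx, set()).add(word)
    # A construction suggested by more than one word wins...
    promoted = {cx for cx, ws in proposers.items() if len(ws) > 1}
    # ...and alternatives incompatible with it are pruned. (A word proposing no
    # promoted construction at all would keep its alternatives in a fuller version.)
    pruned = {word: {alt: cxs for alt, cxs in alternatives.items() if cxs & promoted}
              for word, alternatives in words.items()}
    return promoted, pruned

words = {
    "cooks": {"verbal":  {"Transitive", "Adverbial"},   # readings 1-4 above
              "nominal": {"DetNoun", "AdjNoun"}},       # readings 5-7
    "fish":  {"nominal": {"Transitive"},                # fish as the object of cooks
              "verbal":  {"WhenClause"}},
}
promoted, pruned = disambiguate(words)
# promoted == {'Transitive'}; only "verbal" survives for cooks, only "nominal" for fish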

Positive side: it's very simple, and I don't have to create boring data structures for different parts of speech with all those cases, declensions, numbers, genders, etc. Negative side: every word now has to know all the contexts it can occur in. Both adjectives and nouns have to specify that they participate in the Adjective+Noun construction. That's quite unusual in rule-based parsing, where people try to hand-code as little redundancy as possible. Anyway, unusual doesn't mean bad; I'm not very much against redundancy, and I really like the overall simplicity.