Saturday, November 12, 2011

Test-driven parsing

I use Test-Driven Development for my natural language parser, in a very orthodox way. And now that I do, I see that what I had before was just test-first development, not quite test-driven.

I try to do something new in my parser every day. But very often in the evening I see that I won't manage to get the tests to pass today. But if I insert a little hack here and there, then maybe... And if mocking really helps, I do it, though not without consequences: I add another test that doesn't pass because of the mocking. Usually I just take the sentence and add a small variation to it and, of course, to its translation (e.g. I change a tense, a noun's person, or the word order). After that I may go to sleep untroubled: the new test belongs to the next day, and I have plenty of time to think about how to parse in a more generic way. If that time is not sufficient, I can mock further and create one more test. After a series of those variation tests, I end up with a satisfactory algorithm and quite good coverage. That's how it goes.
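For example, such a pair might look like this, using the sentence analyzed in the post below; the assert_translation helper and the particular tense variation are my invention for illustration, not actual tests from the suite:

    # A sketch of the "variation test" idea; assert_translation is a hypothetical
    # helper that runs the parser and compares its output with the expectation.
    def test_distract_past():
        assert_translation("Это отвлекло нас от нашего спора.",
                           "That distracted us from our argument.")

    # The evening's variation: the same sentence with only the tense changed.
    # It fails for now; making it pass is the next day's job.
    def test_distract_future():
        assert_translation("Это отвлечёт нас от нашего спора.",
                           "That will distract us from our argument.")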

This mocking-all-the-time approach seems to be quite a fruitful way to get things done. Now I really understand those agile guys saying "keep it simple and never change anything without a test". I can never think of all the possibilities, and there's no right way to do things yet. If I implement a nice, beautiful logic based on just one sentence, the next sentence will inevitably prove this code wrong, or too inflexible, or just plain inconvenient. But when I mock things several times, the mocking logic tends to get more and more generic, until it finally doesn't look like mocking at all, while still being flexible and convenient. Many mocks have already been transformed into serious logic; many others are still waiting for it.

By the way, it's not about unit testing at all. Of the 175 tests I have right now, only 4 are unit tests. They test the longest-common-subsequence algorithm, which I need for ellipsis and had to implement myself. There are also 17 parsing tests which check the internal semantic representation produced after parsing. They are useful for keeping an eye on the semantics, but their expected output also tends to change in insignificant ways quite often, and then it's just tedious to update the tests. So I prefer to write translation tests, because their expectations are a gold standard: I could completely change the architecture of the parser without ever touching them.
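For reference, the textbook dynamic-programming version of that algorithm looks like the sketch below; the post doesn't show my actual implementation, so this is just the standard formulation:

    def lcs(a, b):
        """Return one longest common subsequence of sequences a and b."""
        m, n = len(a), len(b)
        # table[i][j] holds the LCS length of a[:i] and b[:j].
        table = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if a[i - 1] == b[j - 1]:
                    table[i][j] = table[i - 1][j - 1] + 1
                else:
                    table[i][j] = max(table[i - 1][j], table[i][j - 1])
        # Walk back through the table to recover one actual subsequence.
        result, i, j = [], m, n
        while i > 0 and j > 0:
            if a[i - 1] == b[j - 1]:
                result.append(a[i - 1])
                i, j = i - 1, j - 1
            elif table[i - 1][j] >= table[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return result[::-1]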

Tuesday, November 8, 2011

NLP using graphs, actually


The reason for the complete rewrite was simple:

Это  отвлекло   нас от   нашего спора.
That distracted us  from our    argument.

Every word makes a contribution to the parsing state by saying that it participates in a number of constructions, and provides some attributes for each of them. So a contribution consists of pairs of constructions and attribute sets. I'll call such pairs mites.
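One way to picture a mite in code is as a construction name plus an attribute map. This is an illustrative reconstruction; the post doesn't show the parser's actual data structures:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Mite:
        construction: str   # e.g. "nom" or "acc"
        attributes: tuple   # sorted (name, value) pairs; frozen, so mites are hashable

    def mite(construction, **attrs):
        return Mite(construction, tuple(sorted(attrs.items())))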

That (это) is capable of adding noun attributes to the nom(inative) and acc(usative) constructions, and these two mites contradict each other. Distracted (отвлекло) is a transitive verb, so it defines head for the nom and acc constructions. But those two mites don't contradict each other; they're both very welcome.
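In the notation sketched above, the two contributions look roughly as follows. The post doesn't say how contradictions are represented internally, so here each contribution simply declares them as explicit pairs:

    # это contributes a noun to either case construction, but not to both at once.
    eto_mites = [mite("nom", noun="это"), mite("acc", noun="это")]
    eto_contradictions = {(eto_mites[0], eto_mites[1])}

    # отвлекло contributes a head to both case constructions, with no contradiction.
    otvleklo_mites = [mite("nom", head="отвлекло"), mite("acc", head="отвлекло")]
    otvleklo_contradictions = set()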

Earlier, I unified mites as soon as they came, and I kept only the results of the unification, not the mites themselves. After the word that, there were nom and acc constructions with noun defined, and nom was (randomly) chosen as active. Active constructions had higher priority, so the verb's nom mite was integrated first. The two nom mites were unified, and the nom construction, now with both head and noun defined, linked the values of these attributes semantically. That's all great. But what to do with acc?

You can't just unify it as well, because there's already a nom construction, and it contradicts acc in its first mite. So you have to drop one of the mites completely and leave the other. In this sentence, it made sense to drop the acc.noun mite (the accusative alternative of that). The acc.head mite would survive and unify nicely with us, the real object that comes next. But that was a hack, and it didn't work when the first это was in fact accusative. A better way would be to preserve both acc mites and merely mark the second one as active. That clearly required a per-mite active status, not a per-construction one.
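For concreteness, unification in all of the above can be read as merging attribute maps and failing on a conflict; this continues the earlier sketch and is my reading, not the parser's code:

    def unify(m1, m2):
        # Merge two mites about the same construction; None means they contradict.
        if m1.construction != m2.construction:
            return None
        merged = dict(m1.attributes)
        for name, value in m2.attributes:
            if merged.get(name, value) != value:
                return None   # the same attribute with different values
            merged[name] = value
        return mite(m1.construction, **merged)

    # unify(mite("nom", noun="это"), mite("nom", head="отвлекло")) yields a nom
    # mite with both noun and head defined, ready for the semantic linking above.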

So now I don't remove any mites automatically when processing word contributions; they all survive. What matters is whether they are active or not. Active mites are guaranteed to be non-contradicting. Only the active mites about the same construction are unified and passed to that construction, which may then make some semantic changes based on that information.
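In the same illustrative sketch, the per-construction view is obtained by unifying only the active mites, grouped by construction:

    from functools import reduce
    from itertools import groupby

    def unified_view(mites, active):
        # Active mites are guaranteed non-contradicting, so unify never fails here.
        chosen = sorted((m for m in mites if m in active),
                        key=lambda m: m.construction)
        return {construction: reduce(unify, group)
                for construction, group in groupby(chosen, key=lambda m: m.construction)}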

When another word comes, the parser solves a constraint-satisfaction task on graphs. Some of the new mites are chosen to be active, and the old mites may change their active status to accommodate that. The reanalysis tries to (a brute-force sketch follows the list):
  • minimize the number of active constructions,
  • maximize the number of mites in those constructions (more mites mean more information),
  • prefer more recent mites, and
  • prefer semantically more plausible variants.
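Spelled out naively, that search could look like the sketch below. The recency and plausibility inputs are hypothetical stand-ins (the post doesn't say how they are computed), the restriction to maximal consistent subsets is my assumption to keep the empty analysis from trivially winning the first criterion, and the real parser surely avoids enumerating all subsets:

    from itertools import combinations

    def consistent(mites, contradictions):
        return not any((a, b) in contradictions or (b, a) in contradictions
                       for a, b in combinations(mites, 2))

    def reanalyze(mites, contradictions, recency, plausibility):
        # Candidates: maximal consistent subsets of all mites seen so far.
        candidates = [set(s)
                      for r in range(len(mites) + 1)
                      for s in combinations(mites, r)
                      if consistent(s, contradictions)
                      and all(not consistent(set(s) | {m}, contradictions)
                              for m in mites if m not in s)]

        def score(active):
            return (len({m.construction for m in active}),  # 1. fewer active constructions
                    -len(active),                           # 2. more mites in them
                    -sum(recency[m] for m in active),       # 3. newer mites score higher
                    -plausibility(active))                  # 4. more plausible semantics

        return min(candidates, key=score)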
Surprisingly, it works, though twice as slowly as before. And the algorithm now seems so clean and logical that I can't find anything to remove! But I definitely should optimize, as I get bored waiting ten long seconds for the 171 tests to pass.