Tuesday, July 31, 2012

The comma dilemma

There's a sentence to translate, and a comma is missing from it. This doesn't prevent me from understanding the sentence, but my parser fails. In another version of the same text found online the comma is present. It's only missing in the version I found long ago, the one I'm trying to translate literally.

So here's the dilemma:

  • Add the comma according to the Russian rules and parse the correct text, or
  • Remember that I'm trying to simulate humans, for whom a missing comma is not a big deal, and dive right away into the wonderful world of parsing noisy input without a proper understanding of how to deal with correct input.


No idea.

UPD. In the end I corrected the commas in the original sentence, and added failing tests, with all kinds of comma omissions, to fix later.

Sunday, July 22, 2012

Immutable data structures in OOP (Groovy)

In my parser, everything except some minor auxiliary things is immutable, and I cannot express how valuable that is. It helps with debugging, testing, and backtracking in the algorithm. But there are certain issues with immutability in Groovy (which I use for its syntactic sugar) and other object-oriented languages.

Update

One very often has to copy an object with a subset of its field values replaced, leaving the others as they are. Functional languages normally have special syntax for that; mainstream object-oriented languages, Groovy included, normally don't. So I had to resort to an updating function like the copyWith variant proposed for Scala, written manually and without any reflection so far. When adding a field, I also have to update the default constructor, the all-fields constructor and the cloning function. Luckily that doesn't happen very often; otherwise I'd probably create an AST-transforming annotation that would do this automatically for me.
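In Groovy it looks something like this (a sketch; Token and its fields are invented for illustration, not the parser's real classes):

class Token {
    final String text
    final int position

    Token(String text, int position) {
        this.text = text
        this.position = position
    }

    // copy with a subset of fields replaced; the rest keep their values
    Token copyWith(Map overrides = [:]) {
        new Token(overrides.containsKey('text') ? overrides.text : text,
                  overrides.containsKey('position') ? overrides.position : position)
    }
}

def t = new Token('word', 0)
assert t.copyWith(position: 5).position == 5 && t.position == 0   // t is unchanged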

Collections

Groovy has no built-in persistent collections: immutable, but cheap to create an updated copy of. I've found that the Java ecosystem is not rich in these kinds of libraries at all, which is a pity. Scala and Clojure have them, but they're tuned to those languages, and I couldn't be bothered to use them so far. There are also the pcollections and Functional Java libraries, which don't have everything I need (e.g. a persistent version of LinkedHashMap). So I either have to create something myself, or use the standard Java collections very carefully, or just give up and rewrite everything in another language. So far I'm combining the first two variants.
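For the second variant, the discipline looks roughly like this (a sketch; the helper name is invented):

static Map updated(Map base, key, value) {
    def copy = new LinkedHashMap(base)     // keeps a predictable iteration order
    copy[key] = value
    Collections.unmodifiableMap(copy)      // freeze the copy; never mutate maps in place
}

def m1 = updated([:], 'nom', 'A')
def m2 = updated(m1, 'acc', 'B')           // m1 is left untouched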

State flow

What I've found is that writing complex logic in instance methods may be quite error-prone. I have a ParsingState class with an apply method which is invoked when processing each word. It takes some possibly contradicting language constructions that a word may be part of, and decides which of them to activate, i.e. which parsing alternative to choose. This stage involves trying all the alternatives and evaluating their semantic plausibility. Something like this:
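ParsingState apply(def contributions) {
    // helper names below are guesses; the shape is what matters
    def state = addContributions(contributions)      // implicit this — the only unqualified line
    state = state.findContradictors(contributions)   // every later call
    state = state.chooseActive()                     //   must go through state.
    state = state.applySemantics()
    state
}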


Whenever you change anything (i.e. obtain a changed copy of ParsingState), you must reassign it to the state variable. Not a big deal, but a bit tiresome. Worse is this: everything except the first line has to be prefixed with state, because you need to operate on the latest version of the parsing state. And it's so damn easy to forget this qualifier! The IDE will autocomplete the call, the compiler will eat it, and most probably the problem won't be noticed for some time.

After several such bugs I'm very much inclined to wrap all complex logic in static methods, where you can't have unqualified references. There's normally only one possible state qualifier then, and if you have some special case, you add another variable to keep an earlier version of the state; mixing them up is not so easy. At the same time, I like calling instance methods. It's very convenient: there's a namespace clearly defined by the qualifier's type, and it contains only what you need. As a result, I create an instance method which immediately calls some private static one where all the logic lies.
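So the resulting shape is this (helper names are again guesses):

ParsingState apply(def contributions) {
    doApply(this, contributions)           // the instance method is just a facade
}

private static ParsingState doApply(ParsingState state, def contributions) {
    // here an unqualified call has no instance to resolve against, so a
    // forgotten 'state.' fails fast instead of silently using stale data
    state = state.findContradictors(contributions)
    state = state.chooseActive()
    state.applySemantics()
}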

Conclusion

So, here's my wishlist for an ideal language:
  1. Support for creating immutable structures easily. I find Scala's way to be the most concise. Groovy offers the @TupleConstructor annotation, which is nice but not part of the language (see the sketch after this list).
  2. Support for updating immutable structures so that adding a new field requires changing just one place.
  3. Persistent collections with a predictable iteration order. An ability to create them using concise list/map literals would be a plus.
  4. A way of expressing complex state-transition logic concisely, clearly and in a less error-prone manner. Like the State monad, only simpler, something for human beings. Maybe just syntactic sugar for var = f(var), with a way of computing other values alongside the state change as well: var, sideResult = f(var).
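To illustrate item 1, here's roughly what the Groovy side looks like today (a sketch; Span is an invented example class):

import groovy.transform.TupleConstructor

@TupleConstructor
class Span {
    final int start
    final int end
}

assert new Span(2, 5).end == 5   // the all-args constructor is generated for us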

Friday, July 20, 2012

Visibility ambiguity

I'm now dealing with a quite interesting kind of ambiguity which I call visibility ambiguity. Consider the following three sentences (I might have put too many commas; imagine it's Russian where they're obligatory):
  1. But there, thinking about her words, we got sad.
  2. But there, thinking about her words, which were so cruel, we got sad.
  3. But there, thinking about her words, intonation and gestures, we got sad.
The sentences are the same until the second comma, after which they diverge dramatically. To process the next word, the parser should determine which previous state it can attach it to. In 1, we is attached to there and the top-level sentence is continued. In 2, which relates to words plus the comma, and a relative clause begins. In 3, there's a conjunction: words gets merged with intonation and the two get re-attached together to about and her.

Obviously, all three attachment sites should be visible to the parser: there, words and comma. And all three continuation possibilities should be explored.

Which is where it gets complicated, because these attachment sites are also incompatible with each other. Once you've attached a word following route 1, you cannot attach the next one to route 2. The parser should regard such attachments as mutually incompatible. There are at least three diverging routes after every comma, each defined by its own set of constructions. And all constructions from different sets should be marked as contradictory, pair-wise. Which is quite a few pairs.
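Just to get a feeling for the bookkeeping, here's a sketch of that pair-wise marking (the route sets are invented):

def markContradictory(List<Set<String>> routes) {
    def pairs = [] as Set
    routes.eachWithIndex { a, i ->
        routes.drop(i + 1).each { b ->
            // every construction of one route contradicts every construction of another
            [a, b].combinations().each { x, y -> pairs << ([x, y] as Set) }
        }
    }
    pairs
}

// three routes after a comma, one construction each → 3 contradictory pairs;
// with bigger construction sets the pair count grows multiplicatively
assert markContradictory([['clause'] as Set, ['relative'] as Set, ['conj'] as Set]).size() == 3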

So it's not only that the number of possible variants will be big. It's also that the number of incompatibility relations to track will be much bigger, something pretty exponential. Given that I normally log all the visible attachment sites after each word during parsing, the log will be hard to analyze.

And here's what troubles me: people don't do this, they don't track all the possibilities. Instead, they quite quickly choose the variant which suits best, purge everything else from memory and pursue only this parsing route. If it eventually goes wrong, they perform some kind of reanalysis. But the heuristics are tuned well enough that normally such a need never arises.

Right now my parser maintains all the structures needed for all possible analyses, and switching between them is very simple. But that just doesn't scale well enough to support visibility ambiguities. So it seems the time has come to teach my parser the human strategy: choosing one route and then reparsing when something doesn't fit well.

Monday, June 18, 2012

7 stages of exploratory programming


  1. One has an idea about natural language understanding and starts implementing it. Cool, it works!
  2. One encounters some sentences that are not easy to deal with. It's not clear how to do it the right way, beautifully, so one does it somehow, by dirty hacking. Hopefully it'll become clearer later, when new data arrives.
  3. As more and more sentences are encountered, some hacks are corrected but others are added. Hacks proliferate and get tightly intertwined.
  4. It becomes harder to parse new sentences. Hacks start getting in the way. Fortunately, one starts seeing how to avoid them.
  5. Correcting the design turns out to be non-trivial because the hacks depend on each other. To fix one you should first correct another, which requires a third one, and so on. There's a whole dependency graph of hacks!
  6. The test suite doesn't seem like a friend anymore. It keeps failing, revealing more and more ways the hacks depend on each other. No new sentences get parsed anymore.
  7. After half a year of pure refactoring, the morale is low and still no light at the end of the tunnel. Yet another cyclic hack dependency is found. Tired, one decides to apply all the desired design changes at once and fix the tests one by one, almost as if writing the parser afresh. Goto 1.

Saturday, November 12, 2011

Test-driven parsing

I use Test-Driven Development for my natural language parser, in a very orthodox way. And now that I do, I see that what I had before was just test-first development, not quite test-driven.

I try to do something new in my parser every day. But very often in the evening I see that I won't manage to get the tests to pass today. But if I insert a little hack here and there, then maybe... And if mocking really helps, I do it, though not without consequences: I add another test that doesn't pass because of the mocking. Usually I just take the sentence and add a small variation to it and, of course, to its translation (e.g. I can change a tense, a noun's person, the word order). After that I may go to sleep untroubled: the new test belongs to the next day, and I have plenty of time to think about how to parse in a more generic way. If that time is not sufficient, I can mock further and create one more test. After a series of those variation tests, I get a satisfactory algorithm and quite good coverage. That's how it goes.

This mocking-all-the-time approach seems to be quite a fruitful way to get things done. Now I really understand those agile guys saying "keep it simple and never change anything without a test". I can never think of all the possibilities, and there's no right way to do things yet. If I implement some nice beautiful logic based on just one sentence, the next sentence will inevitably prove this code wrong, or too inflexible, or just plain inconvenient. But when I mock things several times, the mocking logic tends to get more and more generic and finally doesn't look like mocking at all, while still being flexible and convenient. Many mocks have already been transformed into serious logic; many others still wait for it.

By the way, it's not about unit testing at all. Of the 175 tests I have right now, only 4 are unit tests. They test the longest common subsequence algorithm, which I need for ellipsis and had to implement myself. There are also 17 parsing tests which check the internal semantic representation produced after parsing. They are useful for keeping an eye on the semantics, but their expected output also tends to change insignificantly quite often, and then it's just tedious to update the tests. So I prefer to write translation tests, because their expectations are a golden standard: I could completely change the architecture of the parser without ever touching them.
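For reference, the longest common subsequence part is the classic dynamic-programming algorithm, roughly like this (a sketch, not the exact code from the parser):

static List lcs(List a, List b) {
    // len[i][j] = length of the LCS of a[i..] and b[j..]
    int[][] len = new int[a.size() + 1][b.size() + 1]
    for (int i = a.size() - 1; i >= 0; i--) {
        for (int j = b.size() - 1; j >= 0; j--) {
            len[i][j] = (a[i] == b[j]) ?
                    len[i + 1][j + 1] + 1 :
                    Math.max(len[i + 1][j], len[i][j + 1])
        }
    }
    // walk the table to reconstruct one subsequence
    def result = []
    int i = 0, j = 0
    while (i < a.size() && j < b.size()) {
        if (a[i] == b[j]) { result << a[i]; i++; j++ }
        else if (len[i + 1][j] >= len[i][j + 1]) i++
        else j++
    }
    result
}

assert lcs(['we', 'got', 'sad'], ['they', 'got', 'very', 'sad']) == ['got', 'sad']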

Tuesday, November 8, 2011

NLP using graphs, actually


The reason for the complete rewrite was simple:

Это  отвлекло   нас от   нашего спора.
That distracted us  from our    argument.

Every word makes a contribution to the parsing state by saying that it participates in a number of constructions, providing some attributes for each of them. So a contribution consists of pairs of constructions and attribute sets. I'll call such pairs mites.
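In code, a mite is little more than such a pair (a sketch; the field shapes are simplified):

@groovy.transform.Immutable
class Mite {
    String construction   // e.g. 'nom' or 'acc'
    Map attributes        // e.g. [noun: 'это'] or [head: 'отвлекло']
}

def contribution = [new Mite('nom', [noun: 'это']),   // the word 'это' contributes
                    new Mite('acc', [noun: 'это'])]   // two contradicting mites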

That (это) can add noun attributes to the nom(inative) and acc(usative) constructions, and these mites contradict each other. Distracted (отвлекло) is a transitive verb, so it defines head for both nom and acc constructions. But these two mites don't contradict each other; they're both very welcome.

Earlier, I unified mites as soon as they came, and I only kept the results of the unification, not the mites themselves. After the word that, there were nom and acc constructions with noun defined, and nom was (randomly) chosen as active. Active constructions had higher priority, and therefore the verb's nom mite was integrated first. Two nom mites were unified, and the nom construction with both head and noun defined linked the values of these attributes semantically. That's all great. But what to do with acc?

You can't just unify it as well, because there's already a nom construction, and it contradicts acc in its first mite. So you have to drop one of the mites completely and leave the other. In this sentence, it made sense to drop the acc.noun mite — the accusative alternative of that. The acc.head mite would survive and unify nicely with the real object that comes next — us. But that was a hack, and it didn't work when the first это was in fact accusative. A more correct way would be to preserve both acc mites but mark only the second one as active. That clearly required a per-mite active status, not per-construction.

So now I don't remove any mites automatically when processing word contributions; they all survive. The question is whether they are active or not. Active mites are guaranteed to be non-contradicting. Only active mites about the same construction are unified and passed to that construction, which then may make some semantic changes based on that information.
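Schematically, the unification of active mites looks like this (a sketch, reusing the Mite shape from above):

static Map unifyActive(List<Mite> activeMites) {
    // mites of the same construction merge their attribute maps
    activeMites.groupBy { it.construction }.collectEntries { cxt, mites ->
        [cxt, mites*.attributes.inject([:]) { acc, attrs -> acc + attrs }]
    }
}

def active = [new Mite('nom', [noun: 'это']), new Mite('nom', [head: 'отвлекло'])]
assert unifyActive(active).nom == [noun: 'это', head: 'отвлекло']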

When another word comes, the parser solves a constraint satisfaction task on graphs. Some of the new mites are chosen to be active, and the old mites may change their active status to accommodate that. The reanalysis tries to:
  • minimize the number of active constructions,
  • maximize the number of mites in those constructions (more mites mean more information),
  • prefer more recent mites,
  • prefer semantically more plausible variants.
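Roughly, if those preferences were folded into a single comparator, it would look like this (a sketch; the real reanalysis is a proper constraint-satisfaction search, and the field names are invented):

def pickBest(List candidates) {
    candidates.min { a, b ->
        a.activeConstructions <=> b.activeConstructions ?:   // fewer is better
        b.miteCount           <=> a.miteCount           ?:   // more is better
        b.recency             <=> a.recency             ?:   // newer is better
        b.plausibility        <=> a.plausibility             // likelier is better
    }
}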
Surprisingly, it works, though twice as slow as before. And the algorithm now seems so clean and logical that I can't find anything to remove! But I definitely should optimize, as I get bored waiting the long 10 seconds until all 171 tests pass.

Monday, October 31, 2011

Great was their and my amazement!


Let's start with a simple noun phrase: my amazement. My parser analyzes it as follows. The word my creates a frame with type ME and contributes a possessor attribute to the possessive construction. The word amazement creates a frame with type AMAZE and contributes a head attribute to the same possessive construction. Now that both head and possessor attributes are defined, the construction links them with an arg1 attribute:

A.type=ME
B.type=AMAZE
B.arg1=A


It's quite similar with their amazement:

A.type=THEY
B.type=AMAZE
B.arg1=A

Now let's parse their and my amazement. The semantics I want to get is this:

A.type=THEY
B.type=ME
C.member1=A
C.conj=and
C.member2=B
D.type=AMAZE
D.arg1=C

I want their, my and amazement to preserve their functionality. This means that after their and my there should be a possessive construction whose possessor points to the conjunction frame C. So it's the word and that should do this; there are no other candidates.

Currently and puts the parser into a special state. In this state, every subsequent contribution is intercepted and analyzed as to whether it can be a conjunct. A contribution can be the right operand of coordination if there was a similar contribution to the left of it (similar in the same sense as with ellipsis). When a similar contribution is found, the parser takes both the left and the right contributions and merges them. In this particular case, the left contribution had possessive.possessor=A, the right one possessive.possessor=B. While merging, a new frame C is introduced, a composite over A and B. Now the possessor attribute of the merged possessive construction points to C instead of A or B. The updated contribution is applied to the parsing state, so that later, when amazement comes, the result will be precisely as wanted. That's it.
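Schematically, the merge does something like this (shapes invented for illustration):

def mergeConjuncts(Map left, Map right, String conj) {
    // build the composite frame C over both possessors
    def composite = [member1: left.possessor, conj: conj, member2: right.possessor]
    [possessor: composite]   // the merged contribution now points to C
}

assert mergeConjuncts([possessor: 'A'], [possessor: 'B'], 'and') ==
        [possessor: [member1: 'A', conj: 'and', member2: 'B']]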

This is the core idea. In my parser, conjunctions don't connect constituents or any other static structure. They connect contributions, i.e. the changes made to the parsing state. Merging two changes results in one merged change. It's quite powerful and allows for some interesting things like non-constituent coordination. But that's another story.