Thursday, August 30, 2012

Recursion, caching and Groovy highlighting

It's hard to write anything (e.g. my parser) when it takes IntelliJ IDEA 20 seconds to highlight a not-so-large Groovy file. And after you rename anything, you wait the same amount of time while it optimizes imports. Today I finally decided to investigate. Most of the highlighting time is spent in reference resolution: an IDE must know what every identifier refers to.

Recursion

As it happens, nowadays most performance problems in reference resolution are caused by cyclic dependencies: resolving one reference requires resolving another one, which in turn requires the first one. Since we use data flow to determine variable types in Groovy, this is easy to run into when you have a loop:

def a = new A()
def b = new B()
while (...) {
  a = b.nextA()
  ...
  b = a.nextB()
}

In a = b.nextA() one should first determine the type of b, then find a nextA method on that type and finally use its return type for a. But we should take into account that this might not be the first iteration of the loop, so b might have been assigned at the very bottom of the loop body and its type should be taken from there. And there it depends on the type of a defined in the very assignment we are looking at.

That's just impossible to figure out. So if we discover such a cyclic dependency, we give up on inferring this particular expression's type and use the other available assignments. In this case a would just have the type A even if b.nextA() actually returns some AImpl.

Technically, it's just stack overflow prevention. We maintain a list of the expressions whose types we are currently calculating. If we're asked for the type of such an expression again, we just return null. The caller is prepared for that.
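
Schematically, the guard looks something like this (the names are made up for illustration; IDEA's real code is more involved):

class TypeGuard {
  // expressions whose type is currently being computed on this thread
  private static final ThreadLocal<Set> IN_PROGRESS = new ThreadLocal<Set>() {
    protected Set initialValue() { [] as Set }
  }

  static Object guardedType(Object expression, Closure compute) {
    Set beingComputed = IN_PROGRESS.get()
    if (!beingComputed.add(expression)) {
      return null                  // already computing this type: cycle detected, give up
    }
    try {
      return compute()             // may recursively ask for the types of sub-expressions
    } finally {
      beingComputed.remove(expression)
    }
  }
}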

Caching

Things get more complicated because we also cache the type of each expression. The problem is, we shouldn't cache the null returned by the stack overflow prevention. Moreover, we shouldn't cache anything that depends on that null, because in the end the type of a will not be null, it'll be A, and anything that depends on it will have a normal type based on A. If we cached an incomplete type, another highlighting thread would come along, use it and highlight a spurious error.

That's why endless-recursion prevention and caching should know about each other. In IDEA, we have the RecursionManager class, which does precisely that.

As a result, if we have lots of cyclic dependencies, we don't cache lots of things. In fact, RecursionManager tries to memoize some calculation results that don't directly lie on a cyclic dependency. And this memoization is what makes this class really complex. It speeds up things quite a bit, but still, the best solution is not to create cyclic dependencies unless one really needs to.
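
To give a feel for how the two interact, here's a self-contained, single-threaded sketch with invented names (not the actual RecursionManager): a result gets cached only if no cycle was hit anywhere inside its computation.

class GuardedTypeCache {
  private final Set inProgress = [] as Set   // expressions whose type is being computed
  private final Map cache = [:]              // expression -> type
  private int cyclesHit = 0                  // bumped every time a cycle is detected

  Object typeOf(Object expr, Closure compute) {
    def cached = cache[expr]
    if (cached != null) return cached

    if (!inProgress.add(expr)) {             // cyclic dependency: give up on this expression
      cyclesHit++
      return null
    }
    int cyclesBefore = cyclesHit
    try {
      def result = compute()                 // may call typeOf recursively
      // cache only results that didn't depend on a prevention-induced null
      if (result != null && cyclesHit == cyclesBefore) {
        cache[expr] = result
      }
      return result
    } finally {
      inProgress.remove(expr)
    }
  }
}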

Back to Groovy

So I went about commenting out various parts of my Groovy code and putting a breakpoint on the return null in RecursionManager. And here's what I found.

There were a couple of plain bugs which also led to cyclic dependencies: incorrect control flow for for-in loops, and too wide a search scope when resolving a superclass name.

Some Groovy AST transformations (e.g. @TupleConstructor) add constructors whose parameters are based on the class properties. Retrieving the property list requires traversing all the class's methods and fields. Constructors are also methods in IDEA's terminology, so the transformation handler ends up depending on itself. I fixed this in the handler by carefully querying only the physical fields and methods, without running any extenders.

Finally, a new feature in IDEA 12. Each time a variable is passed to a method, IDEA checks the corresponding formal parameter type and narrows the inferred variable type accordingly if needed. Example:

def s = ...
if ("abc".contains(s)) ...
// now IDEA knows that s is a String

Unfortunately, to resolve a method one should know the argument types. Moreover, after resolving one should also map arguments to parameters, which is quite non-trivial in Groovy given all those default parameter values and named arguments. One should choose the correct overload. If a method is parameterized, one argument type may be inferred from another. And we are right in the middle of calculating and refining the type of one of those arguments. Another cyclic dependency, quite a nasty one.

The best solution I have invented so far is to restrict this feature: only allow the cases where we can unambiguously resolve the method and map arguments to parameters without knowing the argument types. This rules out methods with overloads, with default arguments and with generic parameters. Most of the methods I've ever encountered don't have any of that anyway.
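
In code, the restriction boils down to a check roughly like the following; the method descriptor here is a hypothetical map, not a real IDEA or Groovy API.

boolean safeToNarrowArgumentTypes(Map method) {
  !method.hasOverloads &&              // overload selection could itself need the argument types
  !method.hasDefaultArgumentValues &&  // default values make argument-to-parameter mapping ambiguous
  !method.hasTypeParameters            // with generics, one argument's type may depend on another's
}

assert safeToNarrowArgumentTypes(hasOverloads: false,
                                 hasDefaultArgumentValues: false,
                                 hasTypeParameters: false)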

That's how I spent about 7 hours today. With all those changes (mostly with the last one) it now takes 3 seconds to highlight a file (instead of 20). Still not ideal, but bearable. And I can finally relax, learn some new things about Model Thinking and sleep.

UPD. Continued.

Monday, August 27, 2012

Suddenly filler-gap dependencies

Theoreticians say that in sentences like I know who you saw the deep structure is in fact I know [you saw who], and in the surface structure there's an invisible gap after saw which is filled by who. Many psycholinguistic studies seem to confirm that: upon seeing who, people start waiting for the right place for it and only settle down after finding it.

Due to Russian's free word order, I've had the luxury of ignoring this complication for a while and treating wh-words as normal verbal arguments, like pronouns. But then came two surprises.

One surprise was that implementing filler-gap dependencies was the easiest way to resolve a nasty ambiguity. Russian has the word что, which can be either a complementizer (я знаю, что ты видел его; I know that you saw him) or a wh-word (я знаю, что ты видел; I know what you saw). The first one sits higher in the structure than the verb, the second one lower. This made my parser suffer; it still doesn't handle visibility ambiguities very well. Now что is no longer a verbal argument directly: it's a filler and also sits higher in the hierarchy, just like in many syntactic theories.

Another surprise was that all this was actually very easy to add to the current parser architecture (given that there's no pied-piping yet). The filler is just a special construction which listens to what the incoming words contribute. If a contribution looks like a suitable head for the filler's grammatical functions, the contribution is enriched accordingly.

Example: the Russian wh-word что can be in the nominative or accusative case. For normal nouns that would mean generating nom and acc construction mites with the noun attribute defined, pointing to a frame with a special wh semantic type. In the filler-gap approach что generates a filler construction instead, which then sits and waits until it sees a contribution containing nom or acc mites with the head attribute defined. E.g. saw, as a verb, can be a head to both nominative and accusative arguments. The filler construction then adds a nom/acc mite having both head and noun attributes, where the noun points to a frame with the wh type, and the head comes from the verb.
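
Very roughly, with made-up classes instead of the parser's real machinery, the filler behaves like this:

class Mite {
  String construction          // e.g. 'nom' or 'acc'
  Map attributes = [:]         // e.g. [head: someVerbFrame] or [noun: someFrame]
}

class WhFiller {
  Map whFrame = [type: 'wh']             // the frame contributed by the wh-word itself
  List<String> cases = ['nom', 'acc']    // что can fill nominative or accusative slots

  // called on every new word's contribution; returns the enriched mites, if any
  List<Mite> reactTo(List<Mite> contribution) {
    def gaps = contribution.findAll { it.construction in cases && it.attributes.head != null }
    gaps.collect { gap ->
      new Mite(construction: gap.construction,
               attributes: [head: gap.attributes.head, noun: whFrame])
    }
  }
}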

So in this respect my parser now works quite similarly to human sentence processing: a wh-word creates an active filler that finds a gap when a verb with suitable argument requirements comes along.

Tuesday, July 31, 2012

The comma dilemma

There's a sentence to translate, and a comma is missing from it. This doesn't prevent me from understanding the sentence, but my parser fails. In another version of the same text found online the comma is present; it's only missing in the version I found long ago, the one I'm trying to translate literally.

So here's the dilemma:

  • Add the comma according to the Russian rules and parse the correct text, or
  • Remember that I'm trying to simulate humans, for whom comma errors are not a big deal, and dive right away into the wonderful world of parsing noisy input without a proper understanding of how to deal with correct input.


No idea.

UPD. In the end I corrected the commas in the original sentence and added failing tests with all kinds of comma omissions, to be fixed later.

Sunday, July 22, 2012

Immutable data structures in OOP (Groovy)

In my parser, everything except some minor auxiliary things is immutable, and I cannot express how valuable that is. It helps with debugging, testing and backtracking in the algorithm. But there are certain issues with immutability in Groovy (which I use for its syntactic sugar) and other object-oriented languages.

Update

One very often has to copy an object with a subset of its field values replaced, leaving the others as they are. Functional languages normally have special syntax for that; mainstream object-oriented languages, including Groovy, normally don't. So I had to resort to an updating function, a copyWith in the spirit of Scala's case-class copy, written manually and without any reflection so far. When adding a field, I also have to update the default constructor, the all-fields constructor and the cloning function. Luckily that doesn't happen very often, otherwise I'd probably create an AST-transforming annotation that would do this for me automatically.
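
For illustration, this is roughly what such a class looks like (the field names are invented); adding a field means touching the field list, the constructor and copyWith:

class Candidate {
  final int weight
  final List mites

  Candidate(int weight, List mites) {
    this.weight = weight
    this.mites = mites
  }

  // an updated copy: entries missing from 'changes' keep their current values
  Candidate copyWith(Map changes) {
    new Candidate(
        changes.containsKey('weight') ? changes.weight : weight,
        changes.containsKey('mites')  ? changes.mites  : mites)
  }
}

def c = new Candidate(1, [])
def heavier = c.copyWith(weight: 5)   // 'mites' is carried over; 'c' itself is untouched
assert c.weight == 1 && heavier.weight == 5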

Collections

Groovy has no built-in persistent collections: immutable, but cheap to create an updated copy of. I've found that the Java ecosystem is not rich in this kind of library at all, and that's a pity. Scala and Clojure have them, but they're tuned to those languages and I haven't been bothered to use them so far. There are also the pcollections and Functional Java libraries, which don't have everything I need (e.g. a persistent version of LinkedHashMap). So I either have to create something myself, or use the standard Java collections very carefully, or just give up and rewrite everything in another language. So far I'm combining the first two options.
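
The "very carefully" option looks roughly like this: never mutate a shared collection, always copy, and stick to LinkedHashMap so the iteration order stays predictable. O(n) per update, unlike a real persistent map, but it works:

Map updated(Map base, Object key, Object value) {
  def copy = new LinkedHashMap(base)     // copy instead of mutating the original
  copy[key] = value
  Collections.unmodifiableMap(copy)      // hand out a read-only view
}

def m1 = updated([:], 'a', 1)
def m2 = updated(m1, 'b', 2)
assert m1 == [a: 1]                         // the original map is unaffected
assert m2.keySet() as List == ['a', 'b']    // predictable iteration order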

State flow

I've found that writing complex logic in instance methods may be quite error-prone. I have a ParsingState class with an apply method which is invoked when processing each word. It takes some possibly contradicting language constructions that a word may be part of, and decides which of them to activate, i.e. which parsing alternative to choose. This stage involves trying all the alternatives and evaluating their semantic plausibility. Something like this:
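
(The method names below are invented, with stub bodies just to keep the sketch self-contained.)

class ParsingState {
  ParsingState contribute(List mites) { this }              // stub
  ParsingState activate(mite) { new ParsingState() }        // stub
  ParsingState discard(mite) { new ParsingState() }         // stub
  boolean plausible(mite) { true }                          // stub

  ParsingState apply(List mites) {
    ParsingState state = contribute(mites)    // the only line that may touch 'this'
    for (mite in mites) {
      state = state.activate(mite)            // must be 'state.activate', not just 'activate'
      if (!state.plausible(mite)) {
        state = state.discard(mite)
      }
    }
    state
  }
}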


Whenever you change anything (i.e. obtain a changed copy of ParsingState), you must reassign it to the state variable. Not a big deal, but a bit tiresome. Worse is this: everything except the first line has to be prefixed with state, because you need to operate on the latest version of the parsing state. And it's so damn easy to forget this qualifier! The IDE will happily autocomplete the unqualified call, the compiler will swallow it, and most probably the problem won't be noticed for some time.

After several such bugs I'm very inclined to wrap all complex logic in static methods, where you can't have unqualified references. There's normally only one possible state qualifier then, and if you have some special case, you add another variable to hold an earlier version of the state, and mixing them up is not so easy. At the same time, I like calling instance methods: it's very convenient to have a namespace clearly defined by the qualifier type, containing only what you need. As a result, I create an instance method which immediately calls some private static one where all the logic lies.
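
The resulting pattern, again with invented names:

class ParsingState {
  ParsingState activate(mite) { new ParsingState() }   // stub, for illustration only

  ParsingState apply(List mites) {
    doApply(this, mites)              // a thin instance method, convenient to call
  }

  private static ParsingState doApply(ParsingState state, List mites) {
    for (mite in mites) {
      state = state.activate(mite)    // no implicit 'this' to fall back on here, so a
    }                                 // dropped 'state.' qualifier fails fast
    state
  }
}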

Conclusion

So, here's my wishlist for an ideal language:
  1. Support for creating immutable structures easily. I find Scala's way the most concise. Groovy offers the @TupleConstructor annotation, which is nice but not a part of the language.
  2. Support for updating immutable structures so that adding a new field requires changing just one place.
  3. Persistent collections with a predictable iteration order. An ability to create them using concise list/map literals would be a plus.
  4. A way of expressing complex state transition logic concisely, clearly and less error-prone. Like the State monad, only simpler, something for human beings. Maybe just syntactic sugar for var=f(var), and for computing other values alongside the state change as well: var,sideResult=f(var).

Friday, July 20, 2012

Visibility ambiguity

I'm now dealing with a quite interesting kind of ambiguity which I call visibility ambiguity. Consider the following three sentences (I might have put too many commas; imagine it's Russian where they're obligatory):
  1. But there, thinking about her words, we got sad.
  2. But there, thinking about her words, which were so cruel, we got sad.
  3. But there, thinking about her words, intonation and gestures, we got sad.
The sentences are the same up to the second comma, after which they diverge dramatically. To process the next word, the parser should determine which previous state it can attach it to. In 1, we is attached to there and the top-level sentence continues. In 2, which relates to words plus the comma, and a relative clause begins. In 3, there's a conjunction: words gets merged with intonation, and the two are re-attached together to about and her.

Obviously, all three attachment sites should be visible to the parser: there, words and comma. And all three continuation possibilities should be explored.

Which is where it gets complicated, because these attachment sites are also incompatible with each other. Once you've attached a word following route 1, you cannot attach the next one to route 2. The parser should regard such attachments as mutually incompatible. There are at least three diverging routes after every comma, each defined by its own set of constructions. And all constructions from different sets should be marked as contradictory, pair-wise. Which is quite a few pairs.

So it's not only that the number of possible variants will be big. It's also that the number of incompatibility relations to track will be much bigger, something pretty exponential. Given that I normally log all the visible attachment sites after each word during parsing, the log will be hard to analyze.

And what troubles me: people don't do this, they don't track all the possibilities. Instead, they quite quickly choose a variant which suits best, purge everything else from the memory and pursue only this parsing route. If it goes wrong eventually, they perform some kind of reanalysis. But the heuristics are tuned well enough and normally such a need never arises.

Right now my parser maintains all the structures needed for all possible analyses, and switching between them is very simple. But that just doesn't scale well enough to support visibility ambiguities. So it seems that the time has come to teach my parser the human strategy: choosing one route and then reparsing when anything doesn't fit well.

Monday, June 18, 2012

7 stages of exploratory programming


  1. One has an idea on natural language understanding, starts implementing it. Cool, it works!
  2. One encounters some sentences that are not easy to deal with. It's not clear how to handle them the right way, beautifully, so one does it somehow, by dirty hacking. Hopefully it'll become clearer later, when new data arrives.
  3. As more and more sentences are encountered, some hacks are corrected but others are added. Hacks proliferate and get tightly intertwined.
  4. It becomes harder to parse new sentences. Hacks start getting in the way. Fortunately, one starts seeing how to avoid them.
  5. Correcting the design turns out to be non-trivial because the hacks depend on each other. To fix one you should first correct another, which requires a third one, and so on. There's a whole dependency graph of hacks!
  6. The test suite doesn't seem like a friend anymore. It keeps failing, revealing more and more ways the hacks depend on each other. No new sentences get parsed anymore.
  7. After half a year of pure refactoring, morale is low and there's still no light at the end of the tunnel. Yet another cyclic hack dependency is found. Tired, one decides to apply all the desired design changes at once and fix the tests one by one, almost as if writing the parser afresh. Goto 1.

Saturday, November 12, 2011

Test-driven parsing

I use Test-Driven Development for my natural language parser, in a very orthodox way. And now that I do it, I see that what I had before is just test-first development, not quite test-driven.

I try to do something new in my parser every day. But very often in the evening I see that I won't manage to get the tests to pass today. But if I insert a little hack here and there, then maybe... And if such mocking really helps, I do it, but not without consequences: I add another test that doesn't pass because of the mocking. Usually I just take the sentence and add a small variation to it and, of course, to its translation (e.g. I can change a tense, a noun person, the word order). After that I may go to sleep untroubled: the new test belongs to the next day, and I have plenty of time to think about how to parse in a more generic way. If that time is not sufficient, I can mock further and create one more test. After a series of such variation tests, I get a satisfactory algorithm and quite decent coverage. That's how it goes.

This mocking-all-the-time approach seems to be quite a fruitful way to get things done. Now I really understand those agile guys saying "keep it simple and never change anything without a test". I can never think of all the possibilities, and there's no right way to do things yet. If I implement some nice, beautiful logic based on just one sentence, the next sentence will inevitably prove this code wrong, or too inflexible, or just plain inconvenient. But when I mock things several times, the mocking logic tends to get more and more generic and finally doesn't look like mocking at all, while still being flexible and convenient. Many mocks have already been transformed into serious logic; many others still await it.

By the way, it's not about unit testing at all. Of the 175 tests I have right now, only 4 are unit tests. They cover the longest common subsequence algorithm, which I need for ellipsis and had to implement myself. There are also 17 parsing tests which check the internal semantic representation produced after parsing. They are useful for keeping an eye on the semantics, but their expected output also tends to change in insignificant ways quite often, and then it's just tedious to update the tests. So I prefer to write translation tests, because their expectations are a gold standard: I could completely change the architecture of the parser without ever touching them.