Monday, November 25, 2013

A golden rule of performance optimization

Of course, one shouldn't optimize what doesn't need optimization, so one should use a profiler to see what's slow. YourKit, at least, offers two kinds of performance profiling: sampling and tracing. The first reports which code takes the most time; the second also includes invocation counts for each method. And the rule is:

One should look at both sampling profiles and the ones with invocation counts.

Sampling profiles show you where a program is slow. Tracing profiles show why it's slow. Quite often I see some innocent thing in the snapshots, like getting a value from a hash table. That doesn't mean I should optimize the hash table; it means it's called zillions of times. So I should step back several stack frames and look carefully at the algorithm, trying to decrease its time complexity. Invocation counts help find the exact abstraction level I should pay attention to.

Sadly, one can't rely on tracing profiles alone, even though they also report the time each method takes. It's like quantum mechanics: the tracing overhead distorts the reported times too much, and one can't trust them anymore.

In some cases it's obvious where the algorithm is inefficient, and sampling snapshots are enough. But increasingly often I find that not to be the case.

Yes, this is all well known. It's been about 15 years since I first read (and agreed) that algorithmic optimization gives much better results than low-level tweaking. But learning something and truly realizing it, understanding it deeply, are completely different things.

Saturday, September 21, 2013

Data flow analysis

I do many things in IntelliJ IDEA. One of the most interesting of them is the data flow inspection, also known to the world under the somewhat strange name Constant Conditions & Exceptions. It analyzes your Java code, looks at how you assign variables and what you write about them in if conditions, and issues warnings like Condition is always true or This method call can produce NullPointerException. The analysis it performs is quite advanced; sometimes it takes some thinking to understand that a warning is actually correct. Users even complain it's too intelligent (e.g. 1, 2, 3)!

I haven't maintained this inspection's code from the very beginning, only for the last several years. During this time I've made several improvements, not counting numerous bug fixes. I like this inspection. I even like its bugs, because fixing them is fun. So if you see a bug, please report it; I'll be grateful for the new fun!

Closures

I've added support for anonymous classes, whose analysis has to be aware of what happens outside the class. For example, o cannot be null inside the Runnable here, so no warning should be reported:

o = getSomeObject();
if (o != null) {
    new Runnable() {
        public void run() {
            o.doSomething();
        }
    };
}

Repeated invocations

The inspection used to warn on the following code:

@Nullable File getFile() {...}
...
if (getFile() != null) return getFile().getName();

The warning is perfectly correct, but quite inconvenient. Correct because getFile is marked @Nullable, it's a method, and its result may change between invocations. Even if it were a mutable field named file, it could be reassigned from another thread. But it's inconvenient because most code out there isn't subject to these conditions, and I don't feel like making users change their perfectly valid code by extracting a variable out of getFile() just to make a tool happy.

So now IDEA produces no possible-NPE warning on such calls. The complicated part of this fix is that it also doesn't presuppose that getFile() is not-null. So if you write code targeted at a mutable environment and check the nullness of the file again in case it has changed, this second if (getFile() != null) check is not reported as an always-true condition. What happens inside is that after any method call (getFile in this case), IDEA marks the expression as having an unknown value and doesn't track any possible constant conditions involving it.
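
To illustrate the intended behavior, here's a made-up sketch (FileHolder and describe() are invented names; getFile() is the method from the example above, and the annotation comes from org.jetbrains.annotations):

import java.io.File;
import org.jetbrains.annotations.Nullable;

abstract class FileHolder {
    @Nullable
    abstract File getFile();

    String describe() {
        if (getFile() != null) {
            String name = getFile().getName(); // no "possible NPE" warning here anymore
            if (getFile() != null) {           // and this repeated check is not "always true",
                return name;                   // since the call above reset what is known about getFile()
            }
        }
        return "<no file>";
    }
}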

Tracking other objects

I've also added data flow support for final fields of other instances, not only this. And for non-final fields. And for getter method calls on other objects. And for expression chains composed of all of the above. The main catch is to wisely forget the facts you know about all those expressions when it's time (e.g. on qualifier reassignment).
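
A made-up example of what this means in practice (Holder and its file field are invented names):

import java.io.File;
import org.jetbrains.annotations.Nullable;

class Holder {
    @Nullable File file;
}

class Printer {
    static void printNames(Holder holder, Holder other) {
        if (holder.file == null) return;
        System.out.println(holder.file.getName()); // no warning: holder.file is known to be non-null here
        holder = other;                            // qualifier reassigned: that fact is forgotten
        if (holder.file != null) {                 // so this check is not reported as "always true"
            System.out.println(holder.file.getName());
        }
    }
}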

Contracts

More recently, I've added declarative method contract support to the inspection. That part was quite easy. What was hard was not generating excessive warnings because of a contract. If you call a method transform(o) where transform has a contract of null -> null, IDEA analyzes the call as o == null ? null : transform(o). And whenever IDEA sees you compare a variable to null, it unsurprisingly starts suspecting that it might actually be null! So here's the discrepancy: with such a contract, IDEA actually sees a comparison with null, but the user doesn't. It's just a normal method call, yet its argument would mysteriously be considered nullable after the call. As I've said, avoiding that was hard and took me more than a week of thinking, but I finally did it!
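
To make this concrete, here's a hedged example using the @Contract annotation from org.jetbrains.annotations (StringUtil.trimmed and demo are made-up names):

import org.jetbrains.annotations.Contract;
import org.jetbrains.annotations.Nullable;

class StringUtil {
    // declarative contract: a null argument produces a null result
    @Contract("null -> null")
    @Nullable
    static String trimmed(@Nullable String s) {
        return s == null ? null : s.trim();
    }

    static void demo(@Nullable String s) {
        // the call is analyzed roughly as: s == null ? null : trimmed(s),
        // but that internal comparison shouldn't make IDEA nag about s afterwards
        System.out.println(trimmed(s));
    }
}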

It remains to check that a contract actually corresponds to what the method does, and to infer contracts automatically. But that's quite non-trivial, and I'm still trying to understand how it could be implemented.

Value is always constant

if (var == null) { ... someMethod(var) ... }

Passing a variable that is known to be null to a method is quite probably a bug, and the user intended something else. Even if it's not, writing null instead of var may make the code clearer. So the inspection reports such uses of variables whose values are known to be null, true, or false, and suggests replacing them with the corresponding constants.

Too complex?

Finally, the thing I did yesterday and am really proud of. There's a problem: IDEA marks some methods as too complex to analyze. This warning irritates people, and not everyone understands that it means you don't get any NPE warnings inside that method. The warning appears in two cases:
  • when it just takes too long to analyze the method
  • when the number of possible variable states is too large
With a profiler at hand, the first cause is relatively easy to fix (and is actually quite rare). The second one is harder.

So, there are too many data flow states. Each state is basically a set of assertions about variables and their values. IDEA tracks these states for each point in your code, and each point may correspond to many states. For example, a nullable variable is represented by two states: in one of them it's null, in the other it's not-null. If you call a method on this variable, both states are considered, and since the variable is null in one of them, the NPE warning is issued.

For simplicity, let's assume that all assertions have the form variable == value or variable != value.

So, how do you get that many different states? It's very easy. Take, say, 10 variables and compare each of them with some constant value, and here you are:

// 1 state here
if (var1 == 0) { print("var1"); }
// 2 states here: var1==0 and var1!=0
if (var2 == 0) { ... }
// 4 states here:
// var1==0 && var2==0, var1==0 && var2!=0,
// var1!=0 && var2==0, var1!=0 && var2!=0
...
if (var10 == 0) { ... }
// 1024 states here, too complex :(

But it need not be so sad! This isn't too complex for people to understand, so why should it be for the IDE? The two states after the first if statement are actually complementary: they differ only in their assertions about var1. In fact, they can safely be replaced with a single state that contains no assertions about var1.

And that's the basic idea. At each point where different control flow branches join, all the states are gathered and checked for whether they can be merged. This has reduced the number of states dramatically, at least for the samples of "too complex" methods in our code base that I've looked at. As a result, many methods are not too complex anymore, and users get useful highlighting instead of an annoying warning. Profit!
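
Here's a toy sketch of just the merging step, under the simplifying assumption above that a state is nothing but a set of assertion strings like "var1==0" (this is not IDEA's actual data structure, and the real analysis merges states only where control flow branches join):

import java.util.*;

class StateMerger {

    static String negate(String assertion) {
        return assertion.contains("!=")
                ? assertion.replace("!=", "==")
                : assertion.replace("==", "!=");
    }

    // If two states differ only in one complementary assertion (x==v vs x!=v),
    // replace them with a single state saying nothing about x, and repeat.
    static Set<Set<String>> mergeComplementary(Set<Set<String>> states) {
        for (Set<String> a : states) {
            for (Set<String> b : states) {
                if (a == b || a.size() != b.size()) continue;
                Set<String> onlyInA = new HashSet<>(a);
                onlyInA.removeAll(b);
                if (onlyInA.size() == 1 && b.contains(negate(onlyInA.iterator().next()))) {
                    Set<String> merged = new HashSet<>(a);   // merged = a without the differing assertion
                    merged.removeAll(onlyInA);
                    Set<Set<String>> result = new HashSet<>(states);
                    result.remove(a);
                    result.remove(b);
                    result.add(merged);
                    return mergeComplementary(result);
                }
            }
        }
        return states;
    }
}

For the 10-variable example above, repeatedly merging complementary pairs eventually collapses the 1024 states back to a single state with no assertions at all.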

One issue is that determining which states can be merged is quite computationally expensive at the moment, and it doesn't merge all the states it could. But I'm working on it.

Thursday, March 28, 2013

Java JIT and code cache issues

If you have a lot of Java code, you may run out of the code cache, where the JIT stores the compiled versions of frequently used methods. When this happens, you can get an OutOfMemoryError or (if you're lucky) just a message in the console that the JIT has stopped and that newly loaded code will be interpreted, and therefore slow.

You can increase the code cache size by specifying -XX:ReservedCodeCacheSize=xxx, but you'll run into the same problem after you add more code.

You can enable -XX:+UseCodeCacheFlushing to deal with that. It's supposed to perform some cleanup when the code cache gets full, to free space for compiling new methods.
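
For example, both flags can be combined on the command line like this (the cache size and the jar name are purely illustrative):

java -XX:ReservedCodeCacheSize=256m -XX:+UseCodeCacheFlushing -jar yourapp.jar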

Today I learned that it's not as good as it sounds, by reading some 25M of JIT logs from an IntelliJ IDEA process (JDK 1.7, 64-bit). The code cache got full and was flushed, which really did free up a decent amount of space. That's OK. There was a problem, though: after this event, all JIT compilation stopped. So any compiled code that happened to be flushed away was now interpreted, no matter how hot it really was.

This is quite noticeable. For example, code completion and goto-class navigation suddenly become very slow after a day of work. They match lots of names against the prefix you've typed, and unfortunately some of that matching code gets deoptimized and becomes visible in CPU snapshots.

What to do? Wait for Oracle to fix this. For now, we've increased the code cache size for 64-bit JVMs.

Maybe we should also take a closer look at the logs and try to reduce the number of hot spots: look at the methods that get compiled at runtime and try to call them less frequently, so that they don't occupy the code cache at all. This could also improve overall performance.