Tuesday, 26 January 2010

Naive parallelism with HLVM

The latest OCaml Journal article High-performance parallel programming with HLVM (23rd January 2010) described a simple but effective parallelization of the HLVM implementation of the ray tracer. Comparing with similarly naive parallelizations of the fastest serial implementations in C++ and Haskell, we obtained the following results:

These results exhibit several interesting characteristics:

  • Even the naive parallelization in C++ is significantly faster than HLVM and Haskell.
  • C++ and HLVM scale well, with performance improving as more cores are used.
  • Despite having serial performance competitive with HLVM, the naively-parallelized Haskell scales poorly. In particular, Haskell failed to obtain a competitive speedup with up to 5 cores and its performance even degraded significantly beyond 5 cores, running 4.4× slower than C++ on 7 cores.

The efficiency of the parallelization can be quantified as the speed of the parallel version on multiple cores relative to its speed on a single core:

These results show that HLVM obtained the largest speedup: 6.3× faster on 8 cores. C++ obtained the next largest speedup: 5.2× faster on 8 cores. Haskell obtained the worst speedup: only 2.9× faster on 8 cores.
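
Expressed as parallel efficiency (speedup divided by the number of cores), that is roughly 6.3/8 ≈ 79% for HLVM, 5.2/8 = 65% for C++ and 2.9/8 ≈ 36% for Haskell.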

More sophisticated parallelizations could obtain better performance still. Currently, the parallel for loop over the rows of the image causes each thread to compete for an iteration and results in poor locality: if a thread computes one row then it is unlikely to compute the next. Work-stealing queues and recursive subdivision of the problem space would greatly improve locality and, therefore, performance. Also, the initial construction of the scene tree has some opportunity for parallelization and further performance improvements.
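
To make that concrete, the following C++ sketch (an illustration under my own assumptions, not the code from the article) contrasts the naive shared-counter loop over rows with recursive subdivision of the row range into contiguous bands. A real work-stealing scheduler such as TBB's or Cilk's would go further still:

#include <atomic>
#include <future>
#include <thread>
#include <vector>

const int width = 512, height = 512;
std::vector<double> image(height * width);

// Placeholder for the real per-pixel ray-tracing work.
void render_row(int y) {
  for (int x = 0; x < width; ++x)
    image[y * width + x] = double((x * y) % 256);
}

// Naive version: every thread races for the next row via a shared counter,
// so a thread rarely renders two adjacent rows and locality suffers.
void render_naive(int nthreads) {
  std::atomic<int> next(0);
  std::vector<std::thread> pool;
  for (int t = 0; t < nthreads; ++t)
    pool.emplace_back([&next] {
      for (int y = next++; y < height; y = next++) render_row(y);
    });
  for (auto &th : pool) th.join();
}

// Recursive subdivision: split the row range in half until it is small,
// so each task renders a contiguous band of adjacent rows.
void render_band(int y0, int y1) {
  if (y1 - y0 <= 16) {
    for (int y = y0; y < y1; ++y) render_row(y);
  } else {
    int mid = y0 + (y1 - y0) / 2;
    auto upper = std::async(std::launch::async, render_band, mid, y1);
    render_band(y0, mid);
    upper.get();
  }
}

int main() {
  render_naive(4);
  render_band(0, height);
}

The second version is not yet work stealing, but it already keeps each task's rows contiguous in memory, which is exactly the locality that the shared-counter loop throws away.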

Furthermore, there is plenty of room to optimize HLVM itself, and some simple hacks quantify the potential. Disabling bounds checking provides a substantial 20% performance improvement. Disabling the shadow stack (and, therefore, disabling GC) provides another 25% performance improvement. With these two changes, HLVM is only 24% slower than C++. In particular, HLVM's current design makes heavy reuse of its own generic routines, even in performance-critical sections such as manipulating the shadow stack. Optimizing these routines by hand, for example by removing bounds checks from manipulations of the shadow stack, could realize some of this potential.
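
To make the shadow stack point concrete, here is a minimal C++ illustration (not HLVM's actual code) of the difference between a generic bounds-checked push and a hand-specialised unchecked push. Every function call pushes and pops its GC roots, so these few instructions sit on the hottest path in the program:

#include <cassert>
#include <cstddef>

// A fixed-size shadow stack holding pointers to the GC roots of the
// currently active stack frames.
struct ShadowStack {
  void *roots[4096];
  std::size_t depth = 0;

  // Generic, bounds-checked push: what reusing a checked array write costs.
  void push_checked(void *root) {
    assert(depth < sizeof roots / sizeof *roots);  // bounds check on the hot path
    roots[depth++] = root;
  }

  // Specialised, unchecked push: the hand-optimised alternative.
  void push_unchecked(void *root) { roots[depth++] = root; }

  // A function pops its roots again on exit.
  void pop(std::size_t n) { depth -= n; }
};

int main() {
  ShadowStack stack;
  int x = 42;
  stack.push_checked(&x);    // register a root with the GC
  stack.pop(1);
  stack.push_unchecked(&x);  // same effect, one fewer branch per call
  stack.pop(1);
}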

5 comments:

Alp Mestan said...

Regarding Haskell/GHC parallel performance, this may be of interest: Yielding more improvements in parallel performance

And of course: New GHC I/O Manager, first sets of benchmark numbers

David said...

The articles on your blog are frustrating.

Despite the fact that everything you say is accurate... it systematically looks like you are trying to make Haskell look bad... a newcomer coming to this blog without knowing the context will jump to the conclusion that Haskell sucks.

Perhaps the whole "naive" implementation/parallelization should be given a break and you should tune the C++/HLVM scenarios and pit them against Saynte's not-so-naive implementation.

Because, in the end, a company that goes through all the trouble of parallelizing an algorithm will do it for performance, and will do everything it can to get the best implementation, not the naive one.

I believe a simple way to lazily build the scene is one of the features Haskell can rightfully claim.

Perhaps you should compare the actual lines of code in each implementation instead.

Flying Frog Consultancy Ltd. said...

@David: These results prove that Haskell gives you a choice between naive parallelism that both scales and performs poorly, and difficult parallelism that scales well but still performs poorly. So is it not correct for a newcomer to jump to the conclusion that Haskell sucks?

Incidentally, I have repeated the experiment with a naive parallelization of Lennart's fourth version and the results are almost identical to those for the fifth: Haskell gives poor absolute performance and scales badly.

You say "go through all the trouble to parallelize an algorithm" but the C++ required only 8 different lines of code out of 143 to achieve good results. That is a comparatively tiny effort. Even Saynte's "one line change" was actually 3 lines different.

A future article will compare more extensive rewrites in other languages to Saynte's extensive rewrite in Haskell.

Finally, you say that a simple way to lazily build the scene is one of the features Haskell can rightfully claim, which is true, but is it useful? Outside Haskell, laziness is one of the least used functional features and, in fact, I believe laziness is precisely the reason Haskell does so badly on this benchmark.

David said...

Your article from Jan 16th seemed to show a parallelized version of the Haskell 1 algorithm that scaled rather aggressively.

I just find it all too common to read here and there that "Haskell does this, or F# yields that" when one would rather read "The Haskell -implementation- scales poorly."

Then again, one's got to admit that Haskell's (or should I say GHC's?) last core bug is rather aggravating. One's got to question the usefulness of that "feature" (is it one?) whereby it considers all the cores to be wholly dedicated to the process.

Also, I take issue with GHC's lack of support for a viable hashtable implementation. I find that one can stick hashtables into a great many naive algorithm implementations for excellent results.

For example, I came up with the following (rather elegant, I think) non-brute-force algorithm to solve Project Euler's 11th problem, which seems to build and run (in LINQPad) in under 50 ms (granted, brute-force C implementations solve the problem even faster).

Perhaps one could write it even better and faster in F#:

var dic = Enumerable.Repeat(0, 0).ToDictionary(o => new {i = 0, j = 0}, o => 0);

// Value stored at (i, j), or 0 if that cell has not been added to the dictionary yet.
Func<int, int, int> ContainsOrDefault = (i, j) =>
{
var val = 0;
dic.TryGetValue(new {i, j}, out val);
return val;
};

// First non-zero product of four cells in a line stepping by (iRatio, jRatio), or 0 if none.
Func<int, int, int> MaxMatch = (iRatio, jRatio) => dic.Select(o => o.Value
* ContainsOrDefault(o.Key.i + iRatio * 1, o.Key.j + jRatio * 1)
* ContainsOrDefault(o.Key.i + iRatio * 2, o.Key.j + jRatio * 2)
* ContainsOrDefault(o.Key.i + iRatio * 3, o.Key.j + jRatio * 3))
.FirstOrDefault(o => o != 0);

var source = @"08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48"
.Split(new [] {'\r', '\n'}, StringSplitOptions.RemoveEmptyEntries)
.SelectMany((o, i) => o.Split(new [] {' '}, StringSplitOptions.RemoveEmptyEntries).Select((d, j) => new {i, j, v = int.Parse(d)}))
.OrderByDescending(o => o.v)
.Select(o =>
{
dic.Add(new {o.i, o.j}, o.v);
return new [] {MaxMatch(0, 1), MaxMatch(1, 0), MaxMatch(-1, 1), MaxMatch(1, 1)}.Max();
})
.First(o => o != 0)
.Dump();

Flying Frog Consultancy Ltd. said...

@David: Excellent question. The compelling results for the first Haskell version that we published on 16th January were actually misleading. Further investigation revealed a subtlety we had missed in our first article: the superior scalability occurs only because the first version uses a different algorithm, specifically one with a symbolic bounding volume hard-coded. However, we have since ported the same algorithm to OCaml in a form that works with arbitrarily complicated scenes, and it produced similarly spectacular results in terms of scalability. However, if you crank up the resolution so that even the smallest spheres are visible (e.g. 9,512 or 10,1024 or 11,2048 or 12,4096) then that first Haskell version is not competitive when parallelized: it takes 81s on this benchmark whereas C++ takes only 12.6s.
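
For readers unfamiliar with the trick, the win comes from each group node in the scene tree carrying a bounding sphere: a ray that misses the bound skips the entire subtree. A minimal C++ sketch of that test (illustrative only, not the benchmark's actual code):

#include <cmath>

struct Vec { double x, y, z; };
Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray { Vec orig, dir; };           // dir assumed normalised
struct Sphere { Vec center; double r; };

// Distance along the ray to the first intersection, or infinity on a miss.
double ray_sphere(const Ray &ray, const Sphere &s) {
  Vec v = sub(s.center, ray.orig);
  double b = dot(v, ray.dir);
  double disc = b * b - dot(v, v) + s.r * s.r;
  if (disc < 0) return INFINITY;
  double d = std::sqrt(disc), t2 = b + d;
  if (t2 < 0) return INFINITY;
  double t1 = b - d;
  return t1 > 0 ? t1 : t2;
}

// A group node carries a bounding sphere; a ray that misses the bound
// never visits the node's children, pruning whole subtrees per ray.
struct Node { Sphere bound; /* children omitted in this sketch */ };

bool skip_subtree(const Ray &ray, const Node &n) {
  return ray_sphere(ray, n.bound) == INFINITY;
}

int main() {
  Ray r{{0, 0, -10}, {0, 0, 1}};
  Node n{{{0, 0, 0}, 1}};
  return skip_subtree(r, n) ? 1 : 0;
}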

Naively parallelized second to fifth Haskell versions using the parameters from this article all show the same result: Haskell stops scaling at 5 or 6 cores. That is an important result because it is contrary to all of the Haskell literature that portrays purely functional programming as a panacea for parallelism and the paradigm of the future in this multicore era.

The last core slowdown is certainly a silly problem but I think there is more to it than the explanations I have seen. HLVM uses exactly the same kind of naive spinlock and it never suffers a last core slowdown on Linux.
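
For reference, the kind of naive spinlock in question is just a busy-wait on an atomic flag, something like this C++ sketch (illustrative only; neither GHC's nor HLVM's actual lock):

#include <atomic>

// A naive spinlock: the waiting thread busy-waits on an atomic flag,
// occupying a whole core until the holder releases the lock.
struct SpinLock {
  std::atomic_flag flag = ATOMIC_FLAG_INIT;
  void lock()   { while (flag.test_and_set(std::memory_order_acquire)) { /* spin */ } }
  void unlock() { flag.clear(std::memory_order_release); }
};

int main() {
  SpinLock l;
  l.lock();    // a second thread calling lock() here would spin at 100% CPU
  l.unlock();
}

The usual explanation is that such spinning competes with the OS once no core is left idle, but as noted above that alone does not seem to account for the difference in behaviour.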

That's a very interesting Project Euler problem and a nice solution! I'll have to study it in detail...
