Sat, 13 Dec 2008
FP-Syd #10.
Last Thursday night was the 10th meeting of FP-Syd, the Sydney Functional Programming group, and just under 30 people showed up at the Sydney offices of Thoughtworks to hear two speakers.
First up, André Pang gave us a presentation about Apple's Objective-C language, looked at from a functional programming perspective. André had previously given this presentation at Galois, a company known for its use of Haskell and for employing at least one well-known Haskell hacker.
André started off his presentation by demonstrating how Objective-C's object system works, living as it does on top of C. Interestingly, the language has static type checking at compile time, but also a dynamically checked runtime that allows the usual dynamic typing trickery such as monkey patching. He then went on to look at higher-order programming using Objective-C methods, the garbage collection recently added to the language (both opt-in and opt-out), and the use of the very wonderful Low Level Virtual Machine (LLVM) in the compiler. There is a very nice set of slides for this presentation available here.
Next up we had Manuel M T Chakravarty speaking about Data Parallel Haskell. The really exciting thing about having Manuel speak on this topic is that he is not just a user of this new feature; he was one of the people involved in designing it and adding it to the compiler. While Manuel's talk was mainly about how to use the feature, he could just as easily have dropped into explaining how it works under the hood. There were a couple of times when he really had to fight the urge to do so.
The main idea behind DPH is nested data parallelism, an idea that came from a research language called NESL. DPH adds parallel arrays to Haskell, and these arrays are evaluated in parallel by the run time. As usual, it is Haskell's by-default purity (i.e. no side effects) that makes it possible to retrofit nested data parallelism onto the language without breaking anything else.
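To give a flavour of the notation, here is roughly what the classic dot-product looks like in DPH. The [: ... :] brackets denote parallel arrays and sumP is the parallel counterpart of the ordinary sum; the exact module names, language extensions and vectorisation flags varied between releases, so take this as an illustrative sketch rather than the precise code from the talk.

    import qualified Prelude
    import Data.Array.Parallel
    import Data.Array.Parallel.Prelude.Double

    -- Pair up the elements of the two parallel arrays (the second '|' gives
    -- zip semantics), multiply them, and reduce with a parallel sum.
    dotp :: [:Double:] -> [:Double:] -> Double
    dotp xs ys = sumP [: x * y | x <- xs | y <- ys :]

The nice thing is that dotp reads almost exactly like the list version; the vectoriser and the run time take care of distributing the work across cores.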
Manuel then went on to show the parallel versions of array/vector multiplication, quicksort and the Barnes-Hut algorithm for the n-body problem. The main differences between the parallel and the canonical sequential versions of these algorithms were the use of slightly different notation for the parallel arrays and a slight rearrangement of the algorithms themselves.
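I won't try to reproduce the Barnes-Hut code here, but the quicksort example shows where the "nested" in nested data parallelism comes in: the recursive calls are issued from inside a parallel array. Again, the function names below (lengthP, mapP, +:+ for append, !: for indexing) are my recollection of the DPH prelude, so this is a sketch rather than Manuel's exact code.

    qsort :: [:Double:] -> [:Double:]
    qsort xs
      | lengthP xs <= 1 = xs
      | otherwise =
          let p   = xs !: (lengthP xs `div` 2)    -- pivot
              ls  = [: x | x <- xs, x <  p :]     -- elements below the pivot
              es  = [: x | x <- xs, x == p :]     -- elements equal to the pivot
              gs  = [: x | x <- xs, x >  p :]     -- elements above the pivot
              -- the recursive calls live inside a parallel array, so they are
              -- evaluated in parallel with each other: nested data parallelism
              srt = mapP qsort [: ls, gs :]
          in (srt !: 0) +:+ es +:+ (srt !: 1)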
This new DPH stuff is in the latest 6.10.1 release of the GHC compiler. Manuel currently considers it a technology preview and not quite ready yet for serious use. The main problem is that the parallelism still doesn't scale the way the developers wish, probably due to bugs remaining in the scheduler and run time.
Personally, I found this an amazingly elegant approach to the problem of exploiting the soon-to-be widespread availability of machines with tens and even hundreds of CPU cores. Manuel's slides are available here and are a highly recommended read.
A big thanks to André and Manuel for engaging and thought-provoking presentations. Thanks also to Nick Carroll and Thoughtworks for making their facilities available for FP-Syd.