Thursday, April 5, 2012

Parallelism and the Limits of Languages

"While we make heavy use of the core of Clojure, we don't use its concurrency primitives (atoms, refs, STM, etc.) because a function like pmap doesn't offer enough fine-grained control for our needs. We opt instead to build our own concurrency abstractions in Clojure on top of the outstanding java.util.concurrent package." (Aria Haghighi)
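A minimal sketch of what such an abstraction might look like, assuming nothing about the team's actual code: a pmap-style helper built directly on java.util.concurrent, where the caller supplies and sizes the thread pool instead of accepting pmap's fixed heuristics. All names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// Illustrative only: a pmap-like helper where the thread pool is an
// explicit argument, so each computation can be given its own tuning.
public class ParallelMap {
    public static <A, B> List<B> pmap(ExecutorService pool,
                                      List<A> items,
                                      Function<A, B> f) throws Exception {
        List<Future<B>> futures = new ArrayList<>();
        for (A item : items) {
            futures.add(pool.submit(() -> f.apply(item))); // one task per element
        }
        List<B> results = new ArrayList<>();
        for (Future<B> fut : futures) {
            results.add(fut.get()); // collect in input order
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // explicitly sized
        List<Integer> squares = pmap(pool, List.of(1, 2, 3, 4, 5), x -> x * x);
        System.out.println(squares); // prints [1, 4, 9, 16, 25]
        pool.shutdown();
    }
}
```

Because the pool is passed in rather than created internally, two different computations can share one pool, or each get a pool sized for its workload.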

Once upon a time, in the days when RAM was not quite as cheap as it is now, I worked on a team that had a giant in-memory cache of data used for various financial calculations. With the growth of the financial markets it represented, this data had grown too big to fit into the memory of a standard server without having GC absolutely wreck the process. Having exhausted all minor efforts to tune the data, and not wanting to buy an actual supercomputer, the team turned to Azul, who at that time sold boxes with enough memory to fit our process and a fairly nifty no-pause GC. Great! Except there was one catch: instead of the super-fast CPUs we were used to, it had hundreds of tiny little slow cores (with rather crappy FPUs). And on those hundreds of tiny little CPUs our code was very, very slow. So we set out to parallelize.

When you're forced into parallelism at that level of detail, you quickly realize how difficult it is to have a generic way of doing it. The fact that languages like Scala and Clojure let you just say "apply this logic over the collection in parallel" is very cool, and a great way for developers to dip their toes into parallel processing of their collection data. But it breaks down quickly when you're talking about vastly different types of data running through vastly different types of computation. If you need to eke out every ounce of performance, you want every computation tuned to the right batch size and the right number of threads, and the size of the thread pool for task X may depend on whether it tends to run at the same time as task Y or task Z. Don't forget that task Z, if requested, must be prioritized above all else. The size of the batches themselves may depend on the type of object inside them, and how that affects the cache lines of the processor you're running on. Oh, and of course, you're not just tuning this for prod: it has to at least run to completion on your integration boxes, which are of course slightly different processors. And don't forget, your developers aren't writing code on Azul boxes; they're writing code on desktops with 2 cores each. Threading even the test data through a pool of 50 makes no sense there, but they still need to run things in parallel or they'll never see the stupid threading bugs they added when they forgot to use concurrent data structures.
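The tuning knobs described above can be made concrete in a small, hypothetical sketch: both the pool size and the batch size are explicit parameters, so each computation (and each environment, prod vs. integration vs. a 2-core desktop) can be tuned separately. The class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// Hypothetical illustration of per-task tuning: the caller chooses
// poolSize and batchSize instead of relying on a one-size-fits-all default.
public class BatchedMap {
    public static <A, B> List<B> mapInBatches(List<A> items,
                                              Function<A, B> f,
                                              int poolSize,
                                              int batchSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<List<B>>> futures = new ArrayList<>();
            for (int i = 0; i < items.size(); i += batchSize) {
                List<A> batch = items.subList(i, Math.min(i + batchSize, items.size()));
                futures.add(pool.submit(() -> {
                    List<B> out = new ArrayList<>();       // one task per batch,
                    for (A a : batch) out.add(f.apply(a)); // not per element
                    return out;
                }));
            }
            List<B> results = new ArrayList<>();
            for (Future<List<B>> fut : futures) results.addAll(fut.get());
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

In practice the two numbers would come from per-task, per-environment configuration rather than constants in the code, which is exactly the tuning burden the paragraph above describes.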

I've always thought that the next truly big leap in VMs and languages will come from those that can do smart things with threading without making developers sweat too much. If we're really entering a world of more and more data spread over more but slower cores, we need a language and a VM that revolutionize our management of threads the way Java and the JVM revolutionized our notion of memory management. The functional languages out there help, but you can't forget the underlying VM magic that will really make this possible. Look to the example of memory management: it's not Java the language that frees you from worrying about releasing memory only when it is truly no longer in use, but rather the VM, which watches all objects, knows how to find the live ones, and deletes those that are no longer referenced, all without the developer needing to indicate a thing. For us to conquer the thread problem, we need a VM that observes the behavior of the system beyond just the number of cores and helps us appropriately occupy CPUs over collections of varying size and composition. Languages can force developers to think about applying logic over their data (and thus help with parallelism as a mindset), but they will never solve the problem of automated parallelism without the VM to tune it in the background.

There will always be people who need to tune their code down to exactly which instructions run on which core, just as there are still people who need to do their own memory management, but it shouldn't be a requirement for getting good mileage out of those cores. I can't wait for the day when calling submit feels as archaic as calling malloc.

6 comments:

  1. The question in my mind is whether there is more benefit in commoditizing parallelism in the VM than in languages - recognizing that both will probably be inadequate for many problems - your last paragraph says as much. The quote from Aria Haghighi above doesn't invalidate the language approach as much as say it wasn't enough for his set of problems. Indeed, one could argue that the VM, being even further away from the problem space than the language, has less of a chance of being more successful.

    The success of the JVM's commoditization of memory management is an interesting case, but I think it indicates that memory management is a "simpler" problem (not to underestimate the achievement!) than the concurrency problem.

    As usual, thanks for the thought-provoking piece.

  2. Bravo, you've written this idea down very clearly. I've also thought about this and I agree that this is key. A sufficiently expressive language combined with a VM or JIT could slice and dice the code so that it uses exactly the right batch size, number of threads, and even internal data representation for the best performance and occupancy of the target CPU/GPU cores. It could even be tuned at run-time based on other system load. This would be incredibly nice to have.

  3. The idea is sound, but as far as I know there are already quite a few very smart people at e.g. Intel and IBM who have been working for quite a while on automatically detecting opportunities for parallelization in bytecode, with only some success.
    Unfortunately, this is a non-trivial problem: a lot of the scope for parallelization exists at the application level and is hard or impossible to detect at the bytecode level, so I guess we'll have to wait for the VM magic a little longer.

  4. Hey,
    very nice blog! I’m an instant fan, I have bookmarked you, and I’ll be checking back regularly. See u.

    Stain Release

  5. Since the title is Parallelism and the Limits of Languages, I have to ask - have you used Haskell?

    We "sacrifice" pointers in languages that provide garbage collection. In order for the language to provide better parallelism we need to sacrifice unrestrained side-effects.

  6. This comment has been removed by a blog administrator.
