I’ve had a little opportunity to dabble with it now and, I have to say, I’ve taken to it. The interface is really nice and lean. It’s “core standard” too – it looks like sockets, it behaves like sockets, and it plays nicely with real sockets. The OS can schedule around it like sockets – which is a huge boon on just about every OS running today.
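To show what I mean by “looks like sockets”, here’s a minimal request/reply sketch using the pyzmq Python binding. The socket types and calls (`zmq.REQ`, `zmq.REP`, `bind`, `connect`, `send`, `recv`) are the real libzmq API; the particular exchange is just an illustration.

```python
import zmq

ctx = zmq.Context()

server = ctx.socket(zmq.REP)                  # reply socket: binds, like a listening socket
port = server.bind_to_random_port("tcp://127.0.0.1")

client = ctx.socket(zmq.REQ)                  # request socket: connects, like a client socket
client.connect("tcp://127.0.0.1:%d" % port)

client.send(b"ping")                          # plain send/recv, just like sockets
msg = client_request = server.recv()          # b"ping"
server.send(b"pong")
reply = client.recv()                         # b"pong"

client.close(); server.close(); ctx.term()
```

Same create/bind/connect/send/recv rhythm as BSD sockets, which is exactly why it feels so familiar.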
And its frugality and minimalism help achieve impressive performance: one of my (-O0) unit tests manages to pump 65,000 messages from one thread and back to the original thread in under 1 millisecond, running on a virtual Ubuntu 10.04 on a physical Core 2 Duo.
Now, that’s probably not an entirely fair test: my code literally pumps 65,000 messages out and then receives back 65,000 replies. The second thread runs on a second CPU, and the method I’m using and the type of message probably get maximum CPU cache performance out of it, so … not massively surprising.
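For anyone curious about the shape of that test, here’s a cut-down sketch of the same idea in pyzmq: a second thread echoes messages back over an `inproc://` PAIR socket, the main thread pumps them all out and then drains the replies. The endpoint name and the message count here are mine, not the original test’s (I’ve shrunk the count well below 65,000 to stay clear of the default high-water marks when sending everything before receiving anything).

```python
import threading
import zmq

N = 500                    # the test described above used 65,000

ctx = zmq.Context()

def echo():
    s = ctx.socket(zmq.PAIR)
    s.connect("inproc://pump")
    for _ in range(N):
        s.send(s.recv())   # bounce each message straight back
    s.close()

main = ctx.socket(zmq.PAIR)
main.bind("inproc://pump")         # inproc: bind must happen before connect
t = threading.Thread(target=echo)
t.start()

for _ in range(N):                 # pump everything out...
    main.send(b"x")
replies = sum(1 for _ in range(N) if main.recv() == b"x")   # ...then drain the replies

t.join()
main.close()
ctx.term()
```

The `inproc://` transport never touches the kernel – messages are handed between threads in user space – which is a big part of why numbers like the above are even plausible.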
As a result, I haven’t actually made the effort to benchmark any more accurately than that, but I can believe some of the hype I’ve seen about its efficiency.
It makes a really nice alternative to more traditional IPC options such as signals, mutexes, etc.
The API also lends itself to multiple applications: be it in-process communications, inter-process communications or machine-to-machine (networking) communications.
Advantage? More performance. Using that one API for your network, process and thread communications is going to get it a lot of cache residency and bring down the overhead of your communications code.
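The “one API, three scopes” point is easy to demonstrate: in the sketch below the code is identical for every scope, and only the endpoint string changes. The endpoint names are made up, and the `ipc://` transport assumes a Unix-like system; the wildcard TCP port and `zmq.LAST_ENDPOINT` are real libzmq features I’m using to avoid hard-coding a port.

```python
import os
import tempfile
import zmq

endpoints = [
    "inproc://jobs",                                        # in-process: between threads, no I/O at all
    "ipc://" + os.path.join(tempfile.mkdtemp(), "jobs"),    # inter-process: Unix domain socket underneath
    "tcp://127.0.0.1:*",                                    # machine-to-machine: plain TCP, OS-picked port
]

ctx = zmq.Context()
results = []
for ep in endpoints:
    pull = ctx.socket(zmq.PULL)
    pull.bind(ep)
    push = ctx.socket(zmq.PUSH)
    # LAST_ENDPOINT gives the resolved address (e.g. the actual TCP port)
    push.connect(pull.getsockopt(zmq.LAST_ENDPOINT).decode())
    push.send(b"work item")        # same send/recv calls for every transport
    results.append(pull.recv())
    push.close(); pull.close()
ctx.term()
```

Swapping a thread-to-thread pipeline out for a networked one becomes a configuration change rather than a rewrite.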
I’ve put together a module for leveraging ZeroMQ for work-offloading, suitable for threading asynchronous work like real file/socket IO. I’ll give that its own post.