Towards an InfiniBand-connected render farm

Recently I’ve been investigating InfiniBand networking for use in a render farm.

Many rendering folks may never have heard of it, so let me briefly explain what it is.

InfiniBand is a low-latency, high-bandwidth network interconnect.

For example, InfiniBand QDR (40 Gbps, the fastest publicly available InfiniBand configuration as of June 2011) can achieve about 3.2 GB/s peak bandwidth. Transferring 10 GB of data in about 3 seconds, awesome! That is roughly 25x faster than 1 GbE Ethernet.
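Here is a quick back-of-the-envelope check of those numbers (just a sketch; the 3.2 GB/s figure is the peak from above, and 0.125 GB/s is the theoretical wire rate of 1 GbE):

```python
# Back-of-the-envelope check of the bandwidth numbers above.
ib_qdr_gb_s = 3.2        # QDR peak bandwidth in GB/s (from the text)
gbe_gb_s = 1.0 / 8.0     # 1 GbE theoretical maximum: 1 Gbit/s = 0.125 GB/s
data_gb = 10.0           # amount of data to transfer, in GB

print(f"10 GB over QDR  : {data_gb / ib_qdr_gb_s:.1f} s")   # ~3.1 s
print(f"10 GB over 1 GbE: {data_gb / gbe_gb_s:.0f} s")      # ~80 s
print(f"Speedup         : {ib_qdr_gb_s / gbe_gb_s:.0f}x")   # ~26x
```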

InfiniBand has been widely used in the HPC field, but it now seems to be moving into the enterprise market as well.

In the future, I can easily imagine network and disk I/O becoming the major bottleneck of large-scale rendering. That is why I am interested in InfiniBand for network I/O. A rough estimate of the effect is sketched below.
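To make the bottleneck argument concrete, here is a minimal sketch. The scene size and node count are made-up assumptions for illustration, not measurements; only the link speeds come from the discussion above.

```python
# Hypothetical numbers, purely to illustrate the network I/O bottleneck argument.
assets_per_frame_gb = 20.0   # textures + geometry pulled per frame (assumed)
nodes_sharing_link = 8       # render nodes hitting the same file server (assumed)

def fetch_time_s(link_gb_per_s: float) -> float:
    """Seconds each node waits for assets if the link is shared evenly."""
    per_node_bw = link_gb_per_s / nodes_sharing_link
    return assets_per_frame_gb / per_node_bw

print(f"1 GbE : {fetch_time_s(0.125):6.0f} s per frame")   # ~1280 s
print(f"IB QDR: {fetch_time_s(3.2):6.0f} s per frame")     # ~50 s
```

Even with these rough assumptions, the render nodes go from waiting over 20 minutes per frame on asset transfers to under a minute.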

Here are slides showing my InfiniBand experience.

I am quite confident in InfiniBand right now. It’s fast and cost-effective.

In the next phase, I am interested in ioDrive, the fastest SSS (Solid State Storage) device from Fusion-io, since it might improve the read performance of massive textures and geometry from disk.