Solaris InfiniBand SW stack short summary

by syoyo

I’ve been investigating InfiniBand (RDMA) on Solaris 10/11.

My ultimate goal is to realize fast and reliable InfiniBand + ZFS storage on top of (Oracle) Solaris 11 or OpenIndiana.

The following is a memo of my survey of the InfiniBand stack status on Solaris 10/11.

At this time, OpenIndiana and Nexenta are not based on the Solaris 11 kernel/kernel modules (and never will be), so many things fall back to the Solaris 10 case.

OFED

– OFED as ported to Solaris 11 is based on OFED 1.5.3.
– OFED as ported to OpenIndiana seems to be based on OFED 1.3. The OFED 1.5.3 binaries grabbed from Oracle Solaris 11 don’t work on OpenIndiana 151a.

Kernel/kernel module components (Solaris 10/11)

– IPoIB
– SDP
– SRP
– uDAPL?
– umad, uverbs, ucma

All of these are kernel components, so you don’t need to install the open-fabrics package (the OFED upper-layer libraries ported to Solaris).
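A quick way to check which of these kernel components are present is to list the loaded kernel modules. This is a sketch; the exact module names (ibd for IPoIB, ibtl for the transport layer, hermon/tavor for the HCA driver, etc.) vary by hardware and release.

```shell
# List loaded kernel modules and filter for InfiniBand-related ones
# (ibd = IPoIB, ibtl = IB transport layer, srp = SRP initiator, ...)
modinfo | grep -i ib

# On Solaris 11, dladm can also show the state of IB datalinks
dladm show-ib
```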

IPoIB performance on OpenIndiana 151a

Measured with netperf, on an AMD Athlon II Neo + IB SDR:

1 GbE : 110 MB/s
IB SDR : 620 MB/s

The theoretical peak of IB SDR is around 900 MB/s, so the IB SDR number should increase with a faster CPU.
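For reference, the measurement above can be reproduced roughly as follows. The IPoIB address is a made-up example; substitute the address assigned to your ibd interface.

```shell
# On the storage box: start the netperf server (listens on port 12865)
netserver

# On the client: run a 30-second TCP stream test over the IPoIB interface
# (192.168.10.1 is a hypothetical IPoIB address)
netperf -H 192.168.10.1 -t TCP_STREAM -l 30
```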

SRP performance on OpenIndiana 151a

Measured with hdparm against a file created on a /tmp filesystem (ramdisk), on an AMD Athlon II Neo + IB SDR:

IB SDR + SRP : 558.94 MB/s

IB SRP seems slower than IPoIB here, even though the measurement conditions are not the same.
This will need further investigation.
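For reference, the hdparm measurement was done on a Linux initiator against the SRP-attached disk. The device name below is an assumption; check dmesg or lsscsi for the actual device the SRP login creates.

```shell
# On the Linux initiator: discover SRP targets on the fabric
ibsrpdm -c

# Once logged in, run hdparm's buffered-read timing test
# (/dev/sdb is a hypothetical device name for the SRP disk)
hdparm -t /dev/sdb
```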

Solaris as an InfiniBand-ready storage

On the current OpenIndiana 151a, you can’t use the OFED upper-layer tools such as ibstat or ib_read_bw.
You also can’t do RDMA programming with RDMA-CM using the same programming API as on Linux.
But you can use IPoIB and SRP.
SDP might also work, but I haven’t confirmed it yet.

Thus, to use OpenIndiana as InfiniBand + ZFS storage, the current solution is to deploy a storage system with IPoIB or SRP.
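On the SRP side, deployment goes through COMSTAR: a ZFS volume is exported as a SCSI logical unit over the SRP target port provider. A minimal sketch, assuming a pool named tank (the pool, volume name, and size are all placeholders):

```shell
# Create a ZFS volume to export as a SCSI LU
zfs create -V 100G tank/srpvol

# Register the zvol as a COMSTAR logical unit
sbdadm create-lu /dev/zvol/rdsk/tank/srpvol

# Make the LU visible to all initiators
# (use the GUID printed by sbdadm create-lu)
stmfadm add-view <GUID>

# Enable the SRP target service and its dependencies
svcadm enable -r ibsrp/target
```

For IPoIB, no target setup is needed; you just serve NFS/iSCSI/etc. over the ibd interface as with any IP network.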

You might not be able to use IPoIB-CM (connected mode) to get better network performance.
