InfiniBand is an industry-standard specification that defines an input/output architecture used to interconnect servers, communications infrastructure equipment, storage and embedded systems.
A true fabric architecture, InfiniBand leverages switched, point-to-point channels with industry-leading data transfer rates, both across chassis backplanes and through external copper and optical fiber connections. Reliable messaging (send/receive) and memory-manipulation semantics (RDMA), with no software intervention in the data-movement path, deliver the lowest latency and highest application performance.
This low-latency, high-bandwidth interconnect requires only minimal processing overhead and is ideal for carrying multiple traffic types (clustering, communications, storage, management) over a single connection. As a mature and field-proven technology, InfiniBand is used in thousands of data centers, high-performance compute clusters and embedded applications that scale from two nodes up to clusters of thousands of nodes. With the availability of long-reach InfiniBand over metro and WAN technologies, InfiniBand can efficiently move large volumes of data between data centers, whether across a campus or around the globe.
HDR 200Gb/s InfiniBand is shipping today, and InfiniBand has a robust roadmap defining increasing speeds well into the future. The current roadmap anticipates demand for ever-higher bandwidth, with new NDR 1.2Tb/s InfiniBand products planned for 2020.