Specification FAQ

The InfiniBand™ specification defines an input/output architecture used to interconnect servers, communications infrastructure equipment, storage and embedded systems. By creating a centralized I/O fabric, InfiniBand enables greater server and storage performance and design density while creating data center solutions that offer greater reliability and performance scalability.

What is InfiniBand?
InfiniBand is an industry standard, channel-based, switched fabric interconnect architecture for server and storage connectivity.

Why is InfiniBand important?
High-performance applications – such as bioscience and drug research, data mining, digital rendering, electronic design automation, fluid dynamics and weather analysis – require high-performance message passing and I/O to accelerate computation and storage of large datasets.

In addition, enterprise applications – such as customer relationship management, fraud detection, database, virtualization and web services, as well as key vertical markets such as financial services, insurance services and retail – demand the highest possible performance from their computing systems.

The combination of 100Gb/s InfiniBand interconnect solutions and servers based on multi-core processors delivers optimum performance to meet these challenges.

What performance range is offered by InfiniBand?
InfiniBand offers multiple levels of link performance, currently reaching speeds as high as 100Gb/s. Each of these link speeds provides low-latency communication within the fabric, enabling higher aggregate throughput than other protocols. This uniquely positions InfiniBand as the ideal I/O interconnect for data centers.

What will it take to integrate InfiniBand Architecture into a virtualized data center?
Growth of servers based on multi-core CPUs and the use of multiple virtual machines are driving the need for more I/O connectivity per physical server. Typical VMware ESX server environments, for example, require multiple Gigabit Ethernet NICs and Fibre Channel HBAs, which increases I/O cost, cabling and management complexity.

InfiniBand I/O virtualization solves these problems by providing unified I/O across the compute server farm, enabling significantly higher LAN and SAN performance from virtual machines. It allows effective segregation of the compute, LAN and SAN domains so that each resource can scale independently. The result is a more change-ready virtual infrastructure.

Finally, in VMware ESX environments, the virtual machines, applications and vCenter-based infrastructure management operate on familiar NIC and HBA interfaces, making it easy for the IT manager to realize these benefits with minimal disruption and a short learning curve.

InfiniBand optimizes data center productivity in enterprise vertical applications such as customer relationship management, database, financial services, insurance services, retail, virtualization, cloud computing and web services. InfiniBand-based servers provide data center IT managers with a unique combination of performance and energy efficiency, resulting in a hardware platform that delivers peak productivity, flexibility, scalability and reliability to optimize TCO.

What is RDMA over Converged Ethernet (RoCE) and how does it relate to the IBTA?
RoCE is an industry standard transport that enables Remote Direct Memory Access (RDMA) to operate over ordinary Ethernet layer 2 and layer 3 networks. RDMA is a critical technology at the heart of the fastest supercomputers and many of the largest data centers in the world. RDMA enables server-to-server data movement directly between application memory spaces without CPU involvement, resulting in performance and efficiency gains while significantly reducing latency. RDMA first became widely adopted in the High Performance Computing (HPC) industry with InfiniBand, but is now being leveraged by enterprise Ethernet networks with RoCE (pronounced like “rocky”). Given its broad expertise in RDMA technology, the IBTA developed the RoCE standard and released its first specification in 2010. The RoCE standard is defined within the overarching InfiniBand Architecture specification.
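
To illustrate the one-sided data movement described above, the sketch below shows how an application might post an RDMA Write through the verbs API (libibverbs). It is a minimal sketch, not a complete program: it assumes a queue pair that is already connected and that the peer's buffer address and remote key (rkey) have been exchanged out of band, and the function name and parameters are illustrative rather than part of any specification.

    /* Minimal sketch: register a local buffer and post an RDMA Write with the
     * verbs API (libibverbs). The adapter moves the data directly into the
     * peer's registered memory with no CPU copy on either side. Queue-pair
     * connection setup and the out-of-band exchange of the peer's buffer
     * address and rkey are assumed to have happened already. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int post_rdma_write(struct ibv_pd *pd, struct ibv_qp *qp,
                        void *buf, size_t len,
                        uint64_t remote_addr, uint32_t rkey)
    {
        /* Register the local buffer so the adapter is allowed to DMA it. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
        if (!mr)
            return -1;

        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = (uint32_t)len,
            .lkey   = mr->lkey,
        };

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write */
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED;  /* ask for a completion */
        wr.wr.rdma.remote_addr = remote_addr;        /* peer's registered buffer */
        wr.wr.rdma.rkey        = rkey;               /* peer's remote access key */

        /* Hand the work request to the adapter; the kernel is not on the
         * data path from this point on. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }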

Does InfiniBand support Software Defined Networking (SDN)?
InfiniBand was one of the first industry standard specifications to be able to support SDN. An entity called the subnet manager provides traffic routing functionality and enables greater flexibility in how the fabric is architected. The fabric can be built with multiple paths between nodes and, based on the needs of the applications running on it, the subnet manager can determine optimal routes between nodes. Running the subnet manager on a single entity (as opposed to having network management on each switch throughout the fabric) enables the use of simple switches within the fabric with the associated cost savings.

In addition to flexible and efficient traffic routing, the subnet manager enables multiple levels of Quality of Service that guarantee different minimum shares of the available bandwidth. Applications can be configured to place traffic on virtual lanes appropriate to the priority of their data, as sketched below. This gives an InfiniBand-based Software Defined Infrastructure the ability to support a variety of communication needs.
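
As a rough illustration (a sketch under assumptions, not a procedure defined by the specification): with the verbs API, an application expresses its priority by choosing a Service Level (SL) when it connects a queue pair, and the subnet manager's SL-to-virtual-lane mapping then determines which lane, and therefore which bandwidth share, that traffic receives. The function name and the literal values below are illustrative.

    /* Sketch: choose a Service Level (SL) while moving a reliable-connected
     * queue pair to the ready-to-receive state. The SL selected here is mapped
     * by the subnet manager onto a virtual lane with a configured share of
     * link bandwidth. MTU, timer and atomic values are illustrative. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int connect_with_service_level(struct ibv_qp *qp, uint16_t dest_lid,
                                   uint32_t dest_qpn, uint8_t sl, uint8_t port)
    {
        struct ibv_qp_attr attr;
        memset(&attr, 0, sizeof(attr));

        attr.qp_state           = IBV_QPS_RTR;
        attr.path_mtu           = IBV_MTU_4096;
        attr.dest_qp_num        = dest_qpn;
        attr.rq_psn             = 0;
        attr.max_dest_rd_atomic = 1;
        attr.min_rnr_timer      = 12;
        attr.ah_attr.dlid       = dest_lid;
        attr.ah_attr.sl         = sl;   /* priority class chosen by the application */
        attr.ah_attr.port_num   = port;

        return ibv_modify_qp(qp, &attr,
                             IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                             IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                             IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER);
    }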

What is the expected demand for InfiniBand-enabled solutions?

According to IDC research, HPC, scale-out database environments, shared and virtualized I/O, and increasing demand from financial applications with HPC-like characteristics are driving and will continue to drive the rapid adoption of InfiniBand.

How are InfiniBand fabrics managed?
The InfiniBand specification standardizes the fabric management infrastructure. InfiniBand fabrics are managed through InfiniBand management consoles, and fabric management is designed to integrate with existing enterprise management solutions.

How big is the InfiniBand architecture development effort?
The InfiniBand Trade Association has grown from 7 companies to more than 40 since its launch in August 1999. Membership is open to any company, government department or academic institution interested in the development of the InfiniBand architecture. To see a list of current trade association members, please visit the member roster.

What is the relationship between InfiniBand and Fibre Channel or Gigabit Ethernet?

The InfiniBand architecture is complementary to Fibre Channel and Gigabit Ethernet but offers higher performance and better I/O efficiency than either of these technologies. InfiniBand is uniquely positioned to become the I/O interconnect of choice and is replacing Fibre Channel in many data centers. Ethernet connects seamlessly into the edge of the InfiniBand fabric and benefits from better access to InfiniBand architecture-enabled compute resources. This will enable IT managers to better balance I/O and processing resources within an InfiniBand fabric.

What type of cabling does InfiniBand support?
In addition to a board form factor connection, InfiniBand supports both active and passive copper cabling (up to 30 meters, depending on speed) and fiber-optic cabling (up to 10 km).

How many nodes does InfiniBand support?

The InfiniBand Architecture is capable of supporting tens of thousands of nodes in a single subnet. Scalability is further extended with InfiniBand routers, which support virtually unlimited cluster sizes.
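
As a rough sizing note (derived from the architecture's addressing, not stated above): each port within a subnet is addressed by a 16-bit Local Identifier (LID), so

    16-bit LID space:  2^16 = 65,536 identifiers
    unicast range:     0x0001 through 0xBFFF, roughly 48K addressable ports per subnet

which is why routers are used to join subnets into still larger clusters.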