IBTA Expands Interoperability and Virtualization Support with New InfiniBand Architecture Specification Updates
The IBTA has announced the public availability of the InfiniBand Architecture Specification Volume 2 Release 1.3.1 and a Virtualization Annex to Volume 1 Release 1.3. The new Volume 2 specification release expands interoperability and performance management functionality across the InfiniBand ecosystem for both high performance computing (HPC) and enterprise data center networks. Additionally, the Virtualization Annex extends support for multiple virtualized endpoints within InfiniBand hardware. Both of these updates are essential to expanding the performance and scalability of future data centers.
For more details on the new InfiniBand Architecture Specification updates, view the official IBTA press release.
InfiniBand Continues to Lead HPC and Petascale Systems on the TOP500 List; 100GbE RoCE-enabled System Makes Its First Appearance
The latest results of the TOP500 List have confirmed InfiniBand’s place as the leading interconnect family for the world’s most powerful HPC and Petascale systems. InfiniBand now accelerates 65 percent of the HPC platforms on the list and nearly half of the Petascale segment at 46 percent. Additionally, 100GbE RDMA over Converged Ethernet (RoCE) technology made its first appearance on the TOP500 List along with five 40GbE clusters. Through CPU offloads and higher utilization of compute resources, RoCE increases overall application performance for Ethernet-based systems.
View the list in its entirety on TOP500.org.
InfiniBand to Play a Role in Combat Cloud Computing
The term “combat cloud computing” refers to an operating model for military applications where information, data management, connectivity, and command and control are core mission priorities. To enable reliable functionality for secure battlefield applications, combat cloud computing will need to leverage both commercial data center technologies and high performance embedded computing. This can be achieved through rugged packaging, thermal management techniques and unrestricted bandwidth enabled by high-speed fabrics such as InfiniBand.
Read Military Embedded Systems’ article to learn more about combat cloud computing and how interconnects such as InfiniBand can be beneficial.
University of Birmingham Leverages InfiniBand in New HPC Systems
The University of Birmingham has announced the launch of the new BEAR (Birmingham Environment for Academic Research) Cloud. The BEAR Cloud, designed with researchers in mind, supplements the already wide-ranging set of IT services at the university. Now academics across all disciplines, including medicine, archaeology, physics and theology, can leverage the private cloud facility for their research. To meet the demanding computational workloads of the various research groups, the new system will utilize “best in class hardware offloads” from InfiniBand adapters and switches.
To learn more about the BEAR Cloud, check out the full article on HPCwire.
InfiniBand Interconnects Accelerate New DoD HPC Modernization Program
HPC continues to grow within a variety of industries, exemplified by recent US Army contracts with Cray and SGI. These contracts will result in new HPC systems, administration and maintenance for the Department of Defense’s (DoD) High Performance Computing Modernization Program (HPCMP). The HPCMP has already received its first supercomputing cluster, which leverages InfiniBand EDR interconnects and has been installed at the Army Research Laboratory’s DoD Supercomputing Resource Center in Aberdeen, Maryland.
For additional information on the DoD’s HPCMP program, read HPCwire’s coverage.
InfiniBand and Exascale Go Hand in Hand
One major goal of the annual Supercomputing Conference is to showcase why “HPC matters.” A recent insideHPC article takes that topic a step further and explores specifically why HPC “interconnects matter.” In the piece, the author discusses how, in a new era of in-network computing, offload network architectures like InfiniBand will lay the foundation for Exascale performance. He goes on to explain the advantages of offload network architectures, as well as the unprecedented scalability and efficiency they provide.
Read the full article to learn more about InfiniBand’s role in improving scalability and overall HPC system performance.
InfiniBand Becomes First Interconnect to Break Through 200 Gb/s Barrier
It can be difficult for some IT managers to let anything but the familiar Ethernet fabric connect their networks, but that could be changing quickly. New HDR 200 Gb/s InfiniBand switches and adapters, available in 2017, are changing the way HPC system designers, hyperscalers and cloud architects are viewing their network interconnect options. This breakthrough in bandwidth and plans for future speed increases are giving data center managers added confidence to consider switching to InfiniBand as higher bandwidths open up more opportunities for high performance applications.
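To put the headline number in perspective, the quick calculation below converts the nominal HDR and EDR link rates into per-direction byte throughput. This is an illustrative back-of-the-envelope sketch only: it uses the nominal signaling rates and ignores encoding and protocol overhead, which reduce delivered throughput somewhat.

```python
# Back-of-the-envelope throughput for InfiniBand links.
# Illustrative only: nominal data rates, no protocol/encoding overhead.

GBIT = 1e9  # bits per gigabit (decimal units, as used in networking)

def link_throughput_gbytes(nominal_gbps: float) -> float:
    """Convert a nominal link rate in Gb/s to GB/s (decimal gigabytes)."""
    return nominal_gbps * GBIT / 8 / 1e9

hdr = link_throughput_gbytes(200)  # HDR InfiniBand
edr = link_throughput_gbytes(100)  # EDR InfiniBand, for comparison

print(f"HDR: {hdr:.1f} GB/s per direction")  # 25.0 GB/s
print(f"EDR: {edr:.1f} GB/s per direction")  # 12.5 GB/s
```

In other words, a single HDR port can move roughly twice the data of an EDR port, which is why the jump matters for bandwidth-hungry HPC and cloud workloads.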
For a detailed overview of the new HDR 200 Gb/s InfiniBand solutions and their impact on the industry, read The Next Platform’s coverage.
How the Network Has Become the New Storage Bottleneck
The transition from Hard Disk Drives (HDDs) to Solid State Drives (SSDs) has become one of the most noteworthy storage trends occurring in the industry today. There are a variety of factors driving this transition, but the shift has resulted in a new storage bottleneck at the network level that can cause data transfer and latency complications. Advanced networking protocols like InfiniBand, RoCE and NVMe over Fabrics (NVMe-oF) are helping accelerate interconnect performance to match the faster speeds of solid state storage solutions.
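The arithmetic behind the bottleneck argument can be sketched as follows. The drive throughput figures are assumptions chosen for illustration (roughly 200 MB/s for a hard disk, 3 GB/s for an NVMe SSD), not measurements from the article.

```python
# Illustrative arithmetic for the "network as the new storage bottleneck"
# argument. Drive throughput figures below are assumed typical values.

HDD_GBPS = 0.2 * 8       # ~200 MB/s hard disk, expressed in Gb/s (1.6)
NVME_SSD_GBPS = 3 * 8    # ~3 GB/s NVMe SSD, expressed in Gb/s (24)

def drives_to_saturate(link_gbps: float, drive_gbps: float) -> float:
    """How many drives it takes to fill a link of the given speed."""
    return link_gbps / drive_gbps

# With HDDs, several drives are needed to fill even a 10 Gb/s link;
# a single NVMe SSD can saturate that same link on its own.
print(drives_to_saturate(10, HDD_GBPS))        # ~6 HDDs per 10 Gb/s link
print(drives_to_saturate(10, NVME_SSD_GBPS))   # <1 SSD per 10 Gb/s link
print(drives_to_saturate(100, NVME_SSD_GBPS))  # ~4 SSDs per 100 Gb/s link
```

With these assumed numbers, the network stops being comfortably overprovisioned once a handful of SSDs sit behind it, which is what drives the move to faster fabrics and protocols like NVMe-oF.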
Check out a recent Datanami article for more insight on the new storage bottleneck.
RoCE Provides Network Performance Benefits for Modern Mainframes
RoCE has become a critical technology that leverages RDMA to lower CPU overhead and increase performance of Ethernet-based data centers. In a new Brocade article, Dr. Steve Guendert analyzes the benefits and impacts of RDMA, RoCE and Shared Memory Communication over RDMA (SMC-R) for Ethernet networks. The article discusses the Ethernet fabric in detail and RoCE’s role in pushing the Ethernet-based data center into the modern era of computing.
Read Brocade’s article for a detailed overview of RoCE’s advantages for Ethernet network performance.
Mellanox Announces World’s First 200 Gb/s HDR InfiniBand Solutions
Mellanox recently announced the industry’s first 200 Gb/s HDR InfiniBand switches and adapters that will be available in 2017. These new high-speed InfiniBand solutions are aimed at advancing the next generation of HPC, machine learning, big data and storage platforms. As the fastest network interconnect in the world, InfiniBand is the clear choice to maximize application performance and system scalability while minimizing overall total cost of ownership for data centers.
View Mellanox’s official announcement to learn more.
IBTA Annual Members Meeting, December 8, 2016 at 8 a.m. PST
The 2016 IBTA Annual Members Meeting will be taking place Thursday, December 8, 2016, at 8 a.m. PST via web conference. Attendees can expect an overview of 2016 activities and an update on the organization’s plans for next year. Specifically, members will receive a State of the Association and an overview of 2017 goals and strategies from the steering committee as well as updates from the working group chairs. Additionally, the results of the annual election to determine the IBTA Steering Committee Directors for 2017 will be announced.
For registration information, contact firstname.lastname@example.org