New Gartner Report on Hyperconverged Integrated Systems Available from IBTA
We are pleased to announce access to an exciting new Gartner report, “Use Networking to Differentiate Your Hyperconverged System.” The recently published research outlines how hyperconverged integrated system (HCIS) vendors must tightly integrate networking to thrive in the face of increasing competition and more demanding workloads. Specifically, the report highlights how the growing scale of HCIS clusters magnifies the challenges of expanding workload coverage and erodes competitive product differentiation. Gartner explains why HCIS vendors should consider networking protocols such as InfiniBand™ and RDMA over Converged Ethernet (RoCE) when designing clusters to help address these challenges and add value for buyers.
To learn more about how HCIS professionals can focus on networking to gain competitive market advantage, download the full report from the InfiniBand Reports page.
IBTA Plugfest 30 Schedule and Deadlines Are Now Available
The IBTA is busy preparing for the 30th InfiniBand Compliance and Interoperability Plugfest, taking place October 17-28, 2016, at the University of New Hampshire Interoperability Lab. The twice-yearly event gives participants an opportunity to test their products for compliance with the InfiniBand Architecture Specification, as well as interoperability with other InfiniBand products.
The official Plugfest 30 schedule and important deadlines are now available on the Plugfest page.
InfiniBand Makes an Impression at IDF 2016
The industry consistently turns to consortiums and standards bodies to ensure steady advancement of information technology through trusted compliance and interoperability programs. At this year’s Intel Developer Forum (IDF) in San Francisco, PCMag caught up with the IBTA to receive an update on the progress we have made in support of the HPC and enterprise IT industries. The discussion covered the benefits of the InfiniBand and RoCE fabrics, as well as IBTA’s ongoing work on High Data Rate (HDR) InfiniBand, which will enable speeds of up to 200 Gb/s.
Read PCMag’s IDF event recap to learn more about what the IBTA has in store for the HPC and enterprise IT segments.
InfiniBand Labeled “Top High Performance Interconnect” in New Rankings
There are many competing technologies in the High Performance Interconnect (HPI) market, and as the adoption of scale-out architectures and cloud computing increases, the decision on which interconnect to use is becoming more and more critical. A new article from The Next Platform ranks the top HPIs to help IT pros make the right choice for their systems and applications. The piece gives InfiniBand high marks for its reliable inter-node communication, high bandwidth and low latency characteristics. Furthermore, InfiniBand is labeled as leading the way for all other HPI interconnects with “market support from a broad range of system and component vendors.”
To learn more about why InfiniBand tops the HPI rankings, check out the full article.
Big Data and InfiniBand Work Together for Fast Data Distribution
Big data is more than just extremely large sets of stored data. Making sense of all that data is fast becoming the next problem for high performance computing (HPC) to solve. An article from The IT Briefcase suggests that HPC and big data can join forces to advance data collection and analytics technologies. Not only is InfiniBand the most commonly used interconnect technology in supercomputers, it is also a foundational requirement for HPC. InfiniBand offers the fastest way to distribute and process data across computational nodes, allowing big data platforms to scale as needed without worrying about bottlenecks.
For additional information on how big data and HPC are joining forces, read The IT Briefcase’s analysis.
RoCE Interoperability List Features Higher Test Speeds, Additional Vendors
Having finalized the RoCE test results from Plugfest 29, the IBTA is excited to announce the availability of its new RoCE Interoperability List. Designed to support data center managers, CIOs and other IT decision makers with their planned RoCE deployments for enterprise and high performance computing, the latest edition features a growing number of cable and equipment vendors as well as additional Ethernet test speeds. The new list adds 50 and 100 GbE test scenarios, complementing the IBTA’s existing 10, 25 and 40 GbE interoperability testing. This expansion gives RoCE deployers confidence that as they integrate faster Ethernet speeds into their systems, their applications can still leverage the advantages of tested RDMA technology.
Check out the latest RoCE Initiative blog for more information on the results.
NVM Express over Fabrics Continues to Garner Popularity
Non-volatile Memory Express over Fabrics (NVMe-oF), a new specification that defines NVMe transports beyond PCI Express, enables the NVMe storage protocol to operate over InfiniBand, Ethernet and other network fabrics. The growing momentum for NVMe-oF includes new developments around RDMA mapping for RoCE and InfiniBand. Support for NVMe-oF continues to grow as compatible products begin to hit the market.
For a detailed overview of NVMe-oF and its supporting technologies, read SearchSolidState’s coverage.
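As a rough illustration of what reaching a fabric-attached NVMe device looks like in practice, the standard Linux nvme-cli utility supports an RDMA transport that runs over either RoCE or InfiniBand. The target address and subsystem NQN below are placeholders, and the commands require an actual NVMe-oF target on the network:

```shell
# Load the NVMe-oF RDMA host transport module.
modprobe nvme-rdma

# Discover NVMe subsystems exported by a target
# (address and port are placeholders for a real target).
nvme discover -t rdma -a 192.168.1.100 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder shown);
# its namespaces then appear as ordinary /dev/nvmeXnY block devices.
nvme connect -t rdma -n nqn.2016-06.io.example:subsystem1 -a 192.168.1.100 -s 4420
```

Because the commands depend on a live NVMe-oF target and RDMA-capable hardware, they are shown here only as a sketch of the workflow.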
Mellanox Simplifies RDMA Deployments with Enhanced RoCE Software
Mellanox announced the availability of new software drivers for RoCE. These new drivers are intended to streamline RDMA deployments on Ethernet networks while enabling high-end RoCE performance for lossless operations. The RoCE software drivers make it easier for customers to deploy RoCE quickly, which improves efficiency and reduces costs in the cloud, storage and enterprise market segments.
View Mellanox’s official announcement to learn more.
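After installing RoCE-capable drivers such as these, a common first sanity check is to confirm that the Ethernet NIC actually exposes an RDMA device. The standard libibverbs and iproute2 utilities shown below are generic tools, not Mellanox-specific; device names in the output will vary by system:

```shell
# List RDMA devices and their link layer; a RoCE-capable NIC
# reports "link_layer: Ethernet" rather than "InfiniBand".
ibv_devinfo | grep -E 'hca_id|link_layer'

# With the iproute2 "rdma" tool, show RDMA links and their state.
rdma link show
```

Both commands require RDMA-capable hardware and drivers to be present, so they are offered here as a deployment-verification sketch rather than a definitive procedure.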
The University of Tokyo Selects Mellanox EDR InfiniBand to Accelerate its Newest Supercomputer
Mellanox announced that the Information Technology Center at the University of Tokyo will leverage Mellanox’s EDR 100Gb/s InfiniBand switch systems and adapters for its new supercomputer. Integrating high performance EDR InfiniBand technology into the new system will enhance the university’s research in computational science and engineering, machine learning and data analysis.
Read Mellanox’s announcement for more details.
Mellanox Launches Integrated Networking Solutions That Accelerate NVMe Over Fabrics
Mellanox announced a family of end-to-end networking solutions and software for connecting NVMe solid-state storage to the fabric. These new Mellanox adapters and processors support offloads that efficiently connect solid state drives (SSDs) directly to the network, which simplifies system design and reduces both power and storage system costs. The new ConnectX-5 adapter includes hardware support for NVMe over Fabrics.
For more on the announcement, check out the official press release from Mellanox.
Supercomputing Conference 2016, November 13-18, 2016
The 29th iteration of the International Conference for High Performance Computing, Networking, Storage and Analysis will take place in Salt Lake City, UT. SC16 will feature a special technical program, exceptional industry and research exhibits, comprehensive education programs, and many networking opportunities. The IBTA will be attending this year’s show to promote our mission and support our exhibiting members. The following members will be attending SC16:
- Broadcom (#3677)
- Bull (#721)
- Cisco (#3450)
- Cray (#1731)
- Finisar (#706)
- Fujitsu Limited (#831)
- Hewlett Packard Enterprise (#1531)
- IBM (#1018 and #1042)
- Intel (#1819 and #2121)
- Mellanox (#2631)
- Microsoft (#1501)
- Molex (#1947)
- Oracle (#1231)
- Samtec (#2522)
For more information on the event, visit the website.
For questions or comments, please contact firstname.lastname@example.org