Today’s enterprise data center is challenged with managing growing data, hosting denser computing clusters, and meeting increasing performance demands. As IT architects work to design efficient solutions for Big Data processing, web-scale applications, elastic clouds, and the virtualized hosting of mission-critical applications, they are realizing that key infrastructure design “patterns” include scale-out compute and storage clusters, switched fabrics, and low-latency I/O.
This looks a lot like what the HPC community has been pioneering for years – leveraging scale-out compute and storage clusters with high-speed, low-latency interconnects like InfiniBand. In fact, InfiniBand has now become the most widely used interconnect among the top 500 supercomputers (according to www.TOP500.org). It has taken a lot of effort to challenge the entrenched ubiquity of Ethernet, but InfiniBand has not just survived for over a decade; it has consistently delivered on an aggressive roadmap – and it has an even more competitive future.
The adoption of InfiniBand in the data center core not only supercharges network communications but, by simplifying and converging cabling and switching, also reduces operational risk and can even lower overall cost. Bolstered by technologies that should ease migration concerns, such as RoCE and virtualized protocol adapters, we expect to see InfiniBand expand further into mainstream data center architectures – not only as a back-end interconnect in high-end storage systems, but also as the main interconnect across the core.
For more details, be sure to check out Taneja Group’s latest report, “InfiniBand’s Data Center March” – available here.