Greetings InfiniBand Community,

As many of you know, every fall before Supercomputing, over 100 volunteers – including scientists, engineers, and students – come together to build the world’s fastest network: SCinet. This year, over 168 miles of fiber will be used to form the data backbone. The network takes months to build and is only active during the SC10 conference. As a member of the SCinet team, I’d like to use this blog on the InfiniBand Trade Association’s website to give you an inside look at the network as it’s being built from the ground up.

This year, SCinet includes a 100 Gbps circuit alongside other infrastructure capable of delivering 260 gigabits per second of aggregate data bandwidth for conference attendees and exhibitors – enough to transfer the entire collection of books at the Library of Congress in well under a minute. However, my main focus will be on building out SCinet’s InfiniBand network in support of distributed HPC applications demonstrations.

For SC10, the InfiniBand fabric will consist of Quad Data Rate (QDR) 40, 80, and 120-gigabit-per-second (Gbps) circuits linking together various organizations and vendors, with high-speed 120 Gbps circuits providing backbone connectivity throughout the SCinet InfiniBand switching infrastructure.
Here are some of the InfiniBand network specifics that we have planned for SC10:

  • 12X InfiniBand QDR (120Gbps) connectivity throughout the entire backbone network
  • 12 SCinet InfiniBand Network Participants
  • Approximately 11 Equipment and Software Vendors working together to provide all the resources to build the IB network
  • Approximately 23 InfiniBand Switches will be used for all the connections to the IB network
  • Approximately 5.39 miles (8.67 km) worth of fiber cable will be used to build the IB network
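For readers curious where the 40, 80, and 120 Gbps figures above come from: an InfiniBand link aggregates 4, 8, or 12 lanes (the 4X, 8X, and 12X widths), and a QDR lane signals at 10 Gbps, with 8b/10b line coding leaving 8 Gbps of usable data per lane. A quick sketch of that arithmetic (the function and constant names here are just illustrative, not from any real tool):

```python
# Back-of-the-envelope InfiniBand QDR link rates.
# Per the QDR generation of the spec: 10 Gbps signaling per lane,
# 8b/10b encoding, so 8 Gbps of usable data per lane.
SIGNALING_PER_LANE_GBPS = 10
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line coding overhead

def qdr_rates(lanes):
    """Return (signaling rate, effective data rate) in Gbps for a QDR link."""
    signaling = lanes * SIGNALING_PER_LANE_GBPS
    return signaling, signaling * ENCODING_EFFICIENCY

for lanes in (4, 8, 12):  # 4X, 8X, and 12X link widths
    sig, data = qdr_rates(lanes)
    print(f"{lanes}X QDR: {sig} Gbps signaling, {data:.0f} Gbps data")
```

So the 12X QDR backbone circuits advertised at 120 Gbps carry roughly 96 Gbps of actual payload once encoding overhead is accounted for.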

The photos on this page show all the IB cabling that we need to sort through and label prior to installation, as well as the numerous SCinet Network Operations Center (NOC) systems and distributed/remote NOC (dNOC) equipment racks getting installed and configured.

In future blog posts, I’ll update you on the status of the SCinet installation and provide more details on the InfiniBand demonstrations that you’ll be able to see at the show, including a flight simulator in 3D and Remote Desktop over InfiniBand (RDI).

Stay tuned!
Eric Dube
SCinet/InfiniBand Co-Chair