For the last year, Internet2 and the Corporation for Education Network Initiatives in California (CENIC) have offered their members the option to connect at 100 gigabits per second. This is 10 times faster than the current 10 gigabits per second speed that has been in place for several years.
Making the jump to the higher speed is of great interest to Stanford researchers, especially as more of them have larger and larger data sets that they either want to bring to campus from remote locations or make available to their remote colleagues.
University IT’s Networking team, working with CENIC, started by planning how the physical 100 Gb link would reach campus. Equipment was ordered and deployed, followed by low-level testing to ensure there was connectivity from campus to the CENIC switch and on to the Internet2 network.
When initial testing was successful, additional testing was performed with real systems to put more realistic load over the connection. These tests also went well and clearly showed that the link could handle a much greater capacity than our traditional 10 Gb links.
An important part of the testing involved Stanford’s participation in the national SC14 conference held each year in November. Armed with what we had learned from our campus experience, the Research Computing team and our partners in Networking took off for New Orleans to demonstrate high-speed data transfer over 100 Gb networks. Once the conference started, the link was put to the test. We had three systems at each end (New Orleans and campus), each with dual 10 Gb network cards, making the maximum bandwidth we could achieve 6 x 10 Gb, or 60 Gb. Initial tests pushed right up to that theoretical limit — 58 Gb — with the three systems all sending traffic as fast as they could. Continued testing demonstrated it was commonplace to have 50 Gb flowing simultaneously in both directions.
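The arithmetic behind the demo's ceiling can be sketched in a few lines. This is just an illustrative calculation using the figures reported above (three hosts per end, dual 10 Gb cards, 58 Gb observed); the variable names are our own, not part of any test harness.

```python
# Aggregate-bandwidth arithmetic for the SC14 demo described above.
systems = 3          # hosts at each end of the link
nics_per_system = 2  # dual 10 Gb network cards per host
nic_speed_gb = 10    # Gb/s per card

# Theoretical ceiling: every card sending flat out.
theoretical_max = systems * nics_per_system * nic_speed_gb  # 60 Gb/s

observed = 58  # Gb/s reached in the initial tests
utilization = observed / theoretical_max

print(f"Theoretical aggregate: {theoretical_max} Gb/s")
print(f"Observed: {observed} Gb/s ({utilization:.1%} of theoretical)")
```

Reaching roughly 97% of the hosts' combined line rate shows the 100 Gb link itself was not the bottleneck; the end systems were.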
Now with the conference behind us, University IT is looking at more practical deployment options to make the 100 Gb Research and Education network available to the campus. The first location to get access to the 100 Gb network will be the Stanford Research Computing Facility building, where there are already a number of researchers with systems hungry for data connectivity. From there, more buildings will have their backbone connections upgraded to 10 Gb capability, allowing the researchers and labs within those buildings to access the research network at higher speed.
The 100 Gb connection is a welcome addition to the campus arsenal of networking capabilities.