EMC World 2013, day 1: SuperNAP and Cisco
The first day of EMC World started with an exclusive EMC Elect trip to one of the world’s largest data centers: the Switch SuperNAP in Las Vegas. To be precise, we visited the SuperNAP 7 facility, and although @MissyByte didn’t allow us to take pictures of live customer equipment, I can tell you it was very impressive. We actually walked inside an important piece of the internet itself, or as she put it: “You’re now inside the internet!” One of SuperNAP’s greatest assets is that they don’t advertise; customers are attracted exclusively by word of mouth, and they’re clearly doing a good job. Having 37 independent internet feeds available is worth mentioning too. In this data center you can fill every rack to full capacity, no matter how dense the equipment, so it all comes down to floor space: how much equipment can you fit in? Power and cooling are NOT a problem. The combined power supplies deliver approximately 100 MW, and SuperNAP 7 is the second most important facility in Nevada to receive military protection in case of an attack or natural disaster (the Hoover Dam being #1). Being located in the Nevada desert, it is sometimes more efficient to cool with outside air and sometimes with evaporators (chilled water probably being another option), but the most important feature is that each cooling machine is intelligent enough to decide for itself which cooling strategy to follow. No water is allowed inside the facility at all; cooling is handled efficiently by the facility itself.
Rob Roy, the owner of SuperNAP, designs all the cooling and structural systems himself, and they even guarantee 100% uptime. They haven’t had a single outage since they started in 2000, which is very impressive. Among the few pictures we were allowed to take were the N+2 power supplies, as you can see in the first picture, and the conference room with the impressive “Stargate” at the end of it. From idea to design to actually building it took only days. The personnel really do call the round circle at the end of the room the “Stargate”, and when asked when it would become operational, Rob Roy reportedly said “soon”. The military-grade guards followed us on every step we took, but it felt pretty safe having them around, and considering the kind of customers SuperNAP houses I can imagine why.
After heading back to the Venetian we moved on to more serious matters. My first session of the day was by Cisco, and I’m pretty excited about what they had to say! It seems they’re closing the gap with Brocade that has existed ever since Cisco entered the SAN stage.
Predictions say that by the year 2020 we will have 10 times as many physical and virtual servers, 16 times as much information, and 4 times as much flash memory. We have seen FCoE emerge and gain adoption, but predicting the rate at which companies replace storage protocols is difficult, as each company and each design requires its own pace and set of protocols.
Cisco announced two new switches in the MDS portfolio: the 9700 series of enterprise core switches and the 9250i multiprotocol router.
The MDS 9710 will have an impressive 1.5 Tbps (max) of bandwidth available per slot, and all 384 ports can operate at line rate. And by line rate I mean 16 Gbps! The largest module available today is the 48 x 16 Gb FC card; a 48 x 10 Gb FCoE card will come out in Q4 2013.
A very impressive feature is the modular design of the backplane. Fabric cards are scalable, and the loss of a single fabric card doesn’t have to mean a loss of bandwidth for the blades. Each fabric module adds 220 Gbps of bandwidth to each slot, so if you need more line-rate bandwidth for your front-end ports, simply add more fabric modules.
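To put those numbers together: a 48-port 16 Gbps card needs 768 Gbps per slot, so four fabric modules at 220 Gbps each already cover line rate. A minimal sketch of that sizing logic (the function name and the line-rate assumption are mine, not from the session):

```python
import math

# Figures from the MDS 9710 session; the sizing logic itself is my own sketch.
PER_FABRIC_MODULE_GBPS = 220   # bandwidth each fabric module adds per slot
MAX_FABRIC_MODULES = 6         # fabric module slots behind the fans

def fabric_modules_needed(ports: int, port_speed_gbps: int) -> int:
    """Smallest number of fabric modules that keeps every port on a card at line rate."""
    required = ports * port_speed_gbps  # e.g. 48 x 16 = 768 Gbps per slot
    needed = math.ceil(required / PER_FABRIC_MODULE_GBPS)
    if needed > MAX_FABRIC_MODULES:
        raise ValueError("per-slot demand exceeds chassis fabric capacity")
    return needed

print(fabric_modules_needed(48, 16))  # 48 x 16 Gb FC card -> 4 modules
```

With only three modules you would be at 660 Gbps per slot, slightly oversubscribed for a fully loaded 48 x 16 Gb card; the fourth module closes the gap, and the remaining two buy you redundancy.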
The airflow is front to back, which is great since most data centers operate with cold and hot aisles; the side-to-side airflow of the MDS 9500 series didn’t really help in achieving an efficient airflow.
The new 9710 has 8 so-called payload slots and eight redundant 3,000 Watt power supplies. And of course 2 supervisor engines for higher availability. The 6 fabric modules are located behind the fans in the back of this monster. The 48-port line card has 500 B2B credits at its disposal, but after activating the enterprise license this is upgraded to 4,096 credits, just like in the previous 92xx and 95xx models.
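Why do those B2B credits matter? They determine how far you can stretch a link while keeping it at line rate. A hedged back-of-the-envelope, using the common rule of thumb of roughly one credit per kilometer at 2 Gbps for full-size frames (the function names are mine, and this is an approximation, not a Cisco formula):

```python
# Rough buffer-to-buffer credit sizing for long-distance FC links.
# Rule of thumb (a common approximation, not from the session):
# a full ~2 KB frame consumes about one credit per km at 2 Gbps,
# and credit consumption scales linearly with link speed.

def credits_per_km(speed_gbps: float) -> float:
    """Approximate B2B credits consumed per km of link at a given speed."""
    return speed_gbps / 2.0

def max_distance_km(credits: int, speed_gbps: float) -> float:
    """Approximate distance a credit pool can sustain at full line rate."""
    return credits / credits_per_km(speed_gbps)

print(max_distance_km(500, 16))    # default credit pool at 16 Gbps
print(max_distance_km(4096, 16))   # enterprise-license credit pool at 16 Gbps
```

By this rough math the default 500 credits cover around 60 km at 16 Gbps, while the 4,096-credit enterprise pool stretches to roughly 500 km, which is why the license matters for long-haul replication links.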
The 9250i switch will be made available with 40 x 16 Gb FC ports and an additional 8 x 10 Gb FCoE ports. Additional options include an I/O Accelerator and the Data Mobility Manager card.
All things considered, with these two new switches in its portfolio I think Cisco has some serious power at hand to close the gap on Brocade.