Midrange Mega Launch 2013: #Speed2Lead in real life!
Although the CLARiiON was a great platform a few years ago, the constant growth of customers’ environments and their need for more performance means that storage vendors constantly need to improve their products as well. The EMC VNX served customers just fine over the last few years. With the introduction of flash storage in arrays, performance issues seemed gone. But now that flash devices can easily outperform any rotating disk by 20, 30, maybe even 50 times depending on the I/O pattern, the back-end of an array could become a serious bottleneck: it wasn’t originally designed for performance demands like that, and the old FLARE code running on the CPUs wasn’t sufficient for them either. So although FAST VP helps get hot data onto fast devices and cold data onto slower, cheaper devices, it’s obvious that the array technology itself needed an upgrade. And just like every three years or so, the time for new technology has come.
The new VNX family: #Speed2Lead
It’s probably the worst kept “secret” in recent history: EMC was about to present a new VNX. Partners had known this for weeks, if not months, and at EMC World in May you could even get your hands on a lab in which the new features were explained. But nevertheless I was pleasantly surprised when I got the call to represent the EMC Elect at the actual world premiere of the new VNX machines. And it’s in Milan, Italy! Why there, you might think? Well, think about it: speed, EMC’s sponsorship of the Lotus Formula 1 team, and guess where the F1 race is being held in this particular week in September? Right: at the Monza circuit, near Milan. For a few weeks now EMC has been buzzing this hashtag on Twitter and showing this movie on their website, and it’s all about #Speed2Lead. So that sure promises a lot! And so I traveled to Milan, which for me was “only” an hour and a half of flying, but for a few other Elect members it was slightly longer. Jon Owings (@Jon_2vcps) and Dave Henry (@davemhenry) were flying in from the US, and Preston de Guise (@backupbear) was doing a 28-hour trip from Australia to get to Milan! (And don’t forget it’s another 28 hours back as well.) I must say there was no sign of jetlag there (must be all that espresso he’s been drinking, hahaha). Led by the fearless Stephanie McBride (@stephmcbride), the four of us received a warm welcome in the heart of motorsport in Italy: Milan. And yes, I know there are a lot of motorsport centers in the world, but you simply cannot deny Milan is one of the hot places to be.
We had it coming
The industry gathered a lot of experience over the last few years, and innovations like Fully Automated Storage Tiering (FAST) were optimized to a level where you could actually calculate how much flash storage you would need compared to the much cheaper rotating disks. As a rule of thumb, most customers have roughly 5% of their data active and 95% not so active (also called “cold” data). That’s a very important outcome. So what if a new storage array could be designed to actually harness all the power needed for 5% flash and 95% rotating disks?

When the CLARiiON was built, processor cores had dedicated tasks: RAID calculations, I/O handling, DRAM cache handling, FAST Cache handling, data services and management, for example. But you can imagine that a large array with lots of random I/O could get the RAID core to 100% utilization while the other cores had idle time to spare. The new design is called MCx, which stands for Multi-Core; EMC calls MCx the “True Symmetric Multi-Core Design”. Each core can now handle all tasks: the code was fully rewritten so every task is multi-threaded, and the array as a whole now performs better as well as more efficiently. And besides that, since Intel has created faster multi-core CPUs, the VNX will use those! MCx has been designed from the ground up to use multiple cores in general, which means that if Intel delivers a faster CPU with more cores than today, MCx will be able to use those without the code needing to be rewritten.
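To make the scheduling argument concrete, here’s a toy Python sketch (my own illustration, not VNX code, with made-up workload percentages) of why pinning one task type per core creates a bottleneck while a symmetric design spreads the load:

```python
# Hypothetical workload mix: percentage of total work per task type.
workload = {"raid": 80, "io": 10, "cache": 5, "mgmt": 5}

def max_core_load_dedicated(workload):
    # Legacy model: one core pinned per task type, so the busiest
    # task type saturates "its" core while the others sit idle.
    return max(workload.values())

def max_core_load_symmetric(workload):
    # MCx-style model: any core can run any task, so the same total
    # work spreads evenly across all cores.
    cores = len(workload)
    return sum(workload.values()) / cores

print(max_core_load_dedicated(workload))   # the RAID core hits 80% while others idle
print(max_core_load_symmetric(workload))   # every core sits at an even 25%
```

Same total work, but the dedicated-core design is limited by its hottest task type; the symmetric design only by the sum of all work.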
The actual jump forward is that instead of the 200,000 IOPS the previous VNX series could do, the MCx-based VNX can now reach over 1 million IOPS! (Testing was done using an all-flash VNX8000 array with dual-socket 16-core CPUs.) And just look at the latency! Because of all the extra firepower the extra cores offer, latency is kept to a minimum!
Or in number of VMs per storage array:
And since I’m showing nice graphs anyway, here’s a little extra for you:
Or if you like, you can get more IOPS out of the system for OLTP environments. Or a combination of the two.
Major change for the new VNX models
All models can offer Unified now, which in my humble opinion is a good thing. It means you no longer have to buy a larger model just to get that one extra feature you needed when you don’t really need the bigger model’s extra firepower. There’s also a monster added to the end of the lineup: the VNX8000. Although it starts with a maximum of 1,000 drives, it will be capable of going up to 1,500 a little while later.
So has the VNX moved into VMAX territory then? That depends on how you look at it. Scalability: yes. Performance: yes. Resilience: no: you still need a VMAX for maximum high availability (and it has more cache and connectivity too, and a VMAX can virtualize other storage arrays, so there are still plenty of features a VNX doesn’t have).
Active/Active Storage Processors
What? Did I see this correctly? Well yes, it’s in the graphic, so it must be true then. Historically, almost all midrange storage arrays from most, if not all, vendors were active/passive. Some vendors had a truly passive storage processor, whereas others, like EMC, had an A/P configuration per LUN, which meant both storage processors were active, but only one storage processor served a given LUN at a time. For true A/A you needed a Symmetrix array. Until now: if you want performance, you can have A/A on a VNX as well! But pay attention here: it’s only valid for RAID Group based LUNs, not for pool based LUNs. For pools there’s still the ALUA functionality, which means a LUN can be accessed via the owning SP (optimized path) as well as the non-owning SP (non-optimized path). But to tell you the truth: since the VMAX does offer true A/A on all LUNs, I think it’s just a matter of time before those clever engineers think of a smart way to provide A/A on pool LUNs as well, without compromising performance (too much).
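The RAID-group-versus-pool distinction can be summed up in a small Python sketch (my own illustration of the path states described above; the function name and LUN-type labels are made up, not an EMC API):

```python
def paths_for_lun(lun_type, owner_sp):
    """Illustrative path states for a two-SP array ('A' and 'B').

    RAID Group LUNs on MCx: true active/active, both SPs optimized.
    Pool LUNs: classic ALUA, only the owning SP's path is optimized.
    """
    sps = ["A", "B"]
    if lun_type == "raid_group":
        # True A/A: every path is active-optimized.
        return {sp: "active-optimized" for sp in sps}
    # ALUA: the non-owning SP still works, just via the non-optimized path.
    return {sp: "active-optimized" if sp == owner_sp
            else "active-non-optimized" for sp in sps}

print(paths_for_lun("raid_group", "A"))  # both paths optimized
print(paths_for_lun("pool", "A"))        # only SP A's path optimized
```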
Better FAST Cache performance
Multi-Core FAST Cache is the next enhancement. Because devices are getting larger and larger, waiting for 3 accesses to promote a block into FAST Cache isn’t necessary anymore. Every accessed block goes into FAST Cache right away… at least, until the cache is 80% full. At that point it switches back to the 3-access mechanism.
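That promotion rule is simple enough to write down; here’s a sketch in Python (my own rendering of the rule as described above, not actual FLARE/MCx code):

```python
def should_promote(access_count, cache_fill_pct):
    """MCx FAST Cache promotion rule, as described above:
    promote a block on its first access while the cache is under 80% full,
    then fall back to the classic 3-access threshold once it fills up."""
    threshold = 1 if cache_fill_pct < 80 else 3
    return access_count >= threshold

print(should_promote(1, 50))   # cache half full: promoted immediately
print(should_promote(1, 85))   # cache 85% full: one access isn't enough
print(should_promote(3, 85))   # third access: promoted the classic way
```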
The table to the right shows the FAST Cache limits per model. Please note that there are two types of SSDs in VNX arrays nowadays: SLC, which can be used for FAST Cache, and eMLC, normally used for storing data (in FAST VP pools or RAID Groups, for example). SLC drives are used for FAST Cache because of their longer life and the heavier write activity that’s expected there. You’re not prohibited from using SLC drives for regular data, but they are slightly more expensive than the eMLC models. The lineup above shows the new series of VNXs, except for one: the 5200. At first it wasn’t going to be there at all, since its predecessor, the 5100, was block only, but EMC lifted that exception. So now all VNX models have the same feature set; the 5200 simply starts at a maximum of 125 drives, and each model up the range supports more drives and more performance, obviously!
Even though it seems as if most enhancements are in the newer features of the VNX, RAID Groups have enhancements as well! First of all, there’s no dedicated CPU core for RAID calculations anymore, but another change is the hot spare handling. As we’ve seen in the Symmetrix world for years already, the VNX now has Permanent Sparing. This means that once a hot spare is activated, it permanently takes the place of the drive it replaced. So for all of you physical people out there: beware!!! The configuration you once thought out just might end up looking a lot different from before!
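In other words, the spare swaps into the group for good instead of handing the data back after the failed drive is replaced. A toy Python sketch (slot labels and function name are my own invention, purely illustrative):

```python
def fail_drive(raid_group, failed_slot, spare_slot):
    """Permanent sparing, sketched: after the rebuild, the spare stays
    a permanent member of the RAID group. The replaced drive's slot
    simply becomes the new hot spare location."""
    idx = raid_group.index(failed_slot)
    raid_group[idx] = spare_slot  # spare joins the group permanently
    return raid_group

rg = ["0_0_0", "0_0_1", "0_0_2"]
print(fail_drive(rg, "0_0_1", "0_0_9"))  # the group now contains slot 0_0_9
```

Hence the warning: after a few failures, the physical layout no longer matches your original design.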
Smarter FAST VP
With FAST VP, data is broken up into “slices” and moved up or down the various performance tiers (or even within a tier). Each slice used to be 1 GiB in size, so for a 256 MiB hot spot the whole 1 GiB slice had to be moved. With MCx the slice size is now 256 MiB, making it more efficient. Since more CPU firepower is available, handling smaller and therefore more of these slices is not an issue. Personally I think these slices just might become even smaller over time. Time will tell.
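The efficiency gain is plain arithmetic: whole slices must move, so the hot span gets rounded up to the slice granularity. A quick sketch (my own illustration of the numbers above):

```python
import math

def data_moved(hot_span_mib, slice_mib):
    """MiB actually relocated when a hot span is promoted:
    the span is rounded up to whole slices."""
    return math.ceil(hot_span_mib / slice_mib) * slice_mib

print(data_moved(256, 1024))  # old 1 GiB slices: 1024 MiB moved for a 256 MiB hot spot
print(data_moved(256, 256))   # new 256 MiB slices: only the 256 MiB that's actually hot
```

So for that 256 MiB hot spot, MCx moves a quarter of the data the old slice size forced it to.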
Have you seen the #Speed2Lead video? What did you think? Projecting it on a competitor’s building was a bold move. But it’s actually about block-level deduplication. Every vendor wants it and only a few offer it, so guess whose market share EMC is targeting here? Right! #NotAppy, hahaha. It’s a joke, and I’m sure other vendors do the same the other way around; even though they’re competitors, this is what you can expect in the US. But where was I? Right! Block-level dedupe: it has finally arrived in the VNX! It’s even a standard feature for which you don’t need a license. It’s pool based dedupe and it runs as a post-process twice a day, so it’s not true inline dedupe, but then again, it’s just a matter of time before even that won’t be an issue anymore. Certain all-flash arrays already have this feature, and so does EMC’s own Data Domain, so let the engineers do their job and we’ll pick the fruits later.
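For those unfamiliar with the concept, the core idea of block-level dedupe is simple: fingerprint each fixed-size block and keep only one copy per fingerprint. A toy Python sketch of one post-process pass (my own illustration, using 8 KiB blocks and SHA-256 as assumptions; the real VNX implementation is of course far more involved):

```python
import hashlib

def dedupe_pass(blocks):
    """One post-process dedupe pass over a list of equal-sized blocks.
    Returns (unique_store, index), where index maps each logical block
    position to the fingerprint of the single stored copy."""
    store, index = {}, []
    for blk in blocks:
        h = hashlib.sha256(blk).hexdigest()  # fingerprint the block
        store.setdefault(h, blk)             # keep only the first copy
        index.append(h)                      # logical block -> fingerprint
    return store, index

blocks = [b"a" * 8192, b"b" * 8192, b"a" * 8192]
store, index = dedupe_pass(blocks)
print(len(blocks), "logical blocks,", len(store), "stored")  # 3 logical blocks, 2 stored
```

An inline implementation would run this fingerprinting in the write path instead of twice a day, which is exactly the harder engineering problem the text alludes to.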
The future of rotating disks
Finally it has arrived! The 2.5-inch 15K SAS drive. For starters it’s available as a 300 GB model. And on the NL-SAS side of the offering we now have the 2.5-inch 1 TB 7.2K drive. Speeding up the SAS drives means you’ll need fewer drives to get the same SAS performance, and with the larger NL-SAS drives density will increase even further.