A quick heads-up this time about building your own lab environment
Sometimes you just want to run a VNX, Avamar, PowerPath, Data Domain or Isilon as a virtual machine to see how things work, or to write work instructions. And EMC offers a lot of these virtual appliances for free!
Take a look at these:
It’s been a while since the VNX2 was born: September 2013, I remember it very well. Being a part of the EMC Elect, I was invited to be at the actual launch in Milan (Italy) and what a ride it was! The whole launch was wrapped around Formula 1 technology and it sure was “speed 2 lead“. That “old” VNX2, which I’m still perfectly happy with by the way, was a revolution in my humble opinion: multi-core everything, in short MCx. And yes, it was like everything just went faster, smoother and better.
But with new technologies popping up every few months now, it was time for a new mid-range storage array. Flash storage isn’t a novelty anymore, it’s a must! And the “old” hybrid arrays were fine, but needed some fine-tuning. With flash devices growing bigger and faster every quarter or half year, the whole back-end needed an upgrade: the old 6 Gb (x4) back-end had to go.
Read more »
Java, it’s a curse. And now you suddenly need to upload the spcollect files to EMC, but Java isn’t installed or is incompatible and Unisphere won’t start.
Make sure you have NAVISECCLI installed and just do it from the CLI!
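The whole collection and retrieval can be done from the CLI as well. A minimal sketch, assuming a placeholder SP address and credentials already stored in a security file; verify the exact switches against the Block CLI reference for your OE release:

```shell
# Kick off a fresh SPcollect on SP A (repeat for SP B);
# 10.0.0.1 is a placeholder SP address.
naviseccli -h 10.0.0.1 spcollect

# Give the SP a few minutes to finish, then list the files to find the new zip...
naviseccli -h 10.0.0.1 managefiles -list

# ...and retrieve it so you can upload it to EMC Online Support.
naviseccli -h 10.0.0.1 managefiles -retrieve
```

The retrieve step lets you pick which file(s) to pull to your local machine.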
Read more »
Smart zoning examples
In my smart zoning post from last February I presented how to get started with Cisco smart zoning. I initially planned to follow up with a more detailed calculation of how much time you can save using smart zoning compared to SIST (single initiator, single target) zoning.
I was talking to an EMC SAN instructor (Richard Butler) this week, and after a little whiteboarding and some hand-waving to picture how massive a traditional SIST zone environment would be, we agreed smart zoning is the way to go.
Read more »
VMware now has this great new feature to be more in control of where its data blocks actually land on the storage system: VVols. But up until now EMC didn’t have a system capable of actually providing the back end for that. Until now, I said. Starting with the VNXe 3200, the storage arrays are made VVol capable and you can play around with that yourself. FOR FREE!
The Software Defined VNX is now a reality!
Read more »
It’s just another short post on a single command again. This time I was looking for an easy way to get started on ESRS on the latest OE for Block code or the newer MCx code (33.071 or newer).
First of all you need to set up DNS in your VNX machine. In Unisphere, go to settings and click on “configure DNS”.
Also, if there’s a firewall blocking internet traffic, you need to make sure the storage processors can reach *.emc.com over tcp ports 443 and 8443.
After this you can use the following command on the CLI:
naviseccli -h [SPx ip#] esrsconfig -agentProvision -user [Online Support logon name] -password [Online Support super secret password]
Repeat this for the other SP as well.
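To avoid typing it twice, a quick sketch that loops over both SPs; the IP addresses and credentials are placeholders:

```shell
# Provision ESRS on both storage processors in one go.
for sp in 10.0.0.1 10.0.0.2; do        # placeholder SP A / SP B addresses
  naviseccli -h "$sp" esrsconfig -agentProvision \
    -user "support_login" -password "support_password"
done
```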
Read more »
It’s a very short post, but in case you’re looking for the command:
naviseccli -h [SPx ip#] mirror -sync -listsyncprogress -name [LUN name]
It’s that simple!
Oh, I’m assuming you already have the logon credentials in a security file, if not, you need to add these to the command:
naviseccli -h [SPx ip#] -user [username] -password [super secret password] -scope [0-1-2] mirror -sync -listsyncprogress -name [LUN name]
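And if you want to keep an eye on it until the sync completes, a simple polling loop works; the IP and LUN name below are placeholders:

```shell
# Print the sync progress every 60 seconds;
# Ctrl-C once the image reports as synchronized.
while true; do
  naviseccli -h 10.0.0.1 mirror -sync -listsyncprogress -name "LUN_42"
  sleep 60
done
```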
I recently had to manually invoke a hot spare in a VNX 5200, but in Unisphere the option was greyed out.
On the CLI the command wasn’t supported. Now what?
According to https://support.emc.com/kb/184890 the proper command is now
naviseccli -h [ip of one SP] copytodisk [source-disk] [hot spare]
Using the “getdisk” command will show you that the rebuild has actually started.
Bear in mind that disks are addressed in the format “Bus_Enclosure_Disk”, so for example 1_2_3 means disk 3 (the 4th disk) in enclosure 2 on bus 1.
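Putting it together, a sketch with placeholder addresses (disk 1_2_3 failing, 0_1_10 as the hot spare):

```shell
# Manually invoke the hot spare: rebuild failing disk 1_2_3
# (bus 1, enclosure 2, disk 3) onto hot spare 0_1_10.
naviseccli -h 10.0.0.1 copytodisk 1_2_3 0_1_10

# Check the state of the hot spare to confirm the rebuild is running.
naviseccli -h 10.0.0.1 getdisk 0_1_10
```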
In Unisphere you can actually see the progress of the rebuild:
Windows 2012 can cause higher response times on VNX
When Windows 2012 issues Trim or Unmap commands to thin LUNs on a VNX, Storage Processor response times can increase, or the SP may even initiate a bugcheck.
As part of disk operations to reclaim free space from thin LUNs, Windows 2012 Server can issue large numbers of the SCSI command 0x9E/0x12 (Service Action/Get LBA Status). This SCSI command results in what is called a “DESCRIBE_EXTENTS” I/O on the VNX Storage Processor (SP). These commands are used as part of the Trim/Unmap process to check whether each logical block address (LBA) that has been freed up in the host’s file system is still allocated on the VNX thin LUN. The host then issues Unmap SCSI commands to shrink the allocated space in the thin LUN, freeing up blocks that are no longer in use in the file system. RecoverPoint also issues these same SCSI commands when the Thin LUN Extender mechanism is enabled, which can cause similar performance issues. See knowledge base article KB174052 for more information about the RecoverPoint variation of this issue and how to prevent it.
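If the Trim/Unmap storm itself is the problem, one host-side workaround (weigh it against losing automatic space reclamation, and check the relevant KB articles first) is to turn off delete notifications on the Windows 2012 host. A sketch for an elevated command prompt:

```shell
rem Show whether Windows sends delete (Trim/Unmap) notifications:
rem 0 = notifications enabled, 1 = disabled.
fsutil behavior query DisableDeleteNotify

rem Stop Windows from issuing Trim/Unmap to the thin LUNs:
fsutil behavior set DisableDeleteNotify 1
```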
Read more »
Changing the time for the weekly heartbeat
People with Clariion or VNX systems installed on site know that these arrays will email “home” (that’s EMC/you) once a week on a seemingly random day and time. Ok, once the day of the week and the time are set, the “I’m still alive” email will go out at that same time each week. But what if you don’t want that email sent out on Thursday at 2:47AM and you want all of your arrays to send it on Saturday at noon sharp? You will need to adjust the parameters. I didn’t find a way to change the weekday directly, so I change the time less than a day before it needs to run. So if I want it to run on Saturday at noon, I can run the script on Friday after noon; the array will pick the next available day automatically.
Read more »
It’s that time of the year again: EMC World
Fifteen thousand nerds gathering in Las Vegas for the yearly week of EMC propaganda. That’s what a lot of people might think it is, anyway. It’s the 2014th edition… that doesn’t sound right. Ehm, oh well, you get my drift. Well, maybe it is nerd-week, but hey: every vendor who thinks they’re the best at something puts on this sort of event, and besides that, it’s a great event to meet people you haven’t seen in a year or so.
Social networking in real life
Social networking, gathering knowledge of things to come, looking for solutions to challenges you already have in your normal day job, looking for insights into things on your wish list. Bacon, unicorns, hardware and a loooot of “software”, since that’s been the trend for a few years now. No matter how you explain it:
IT is in Las Vegas, baby!
Read more »
A while ago I talked about Hot Spares and how they are picked when a rebuild is necessary. It was almost 2 years ago and you can read it here.
Since then the rebuild / equalize technology has changed! Well, not for existing systems, but the new VNX family aka VNX2 does things a bit differently.
In the old days, when a drive failed, a suitable Hot Spare would kick in and the LUNs that had lost their protection (the ones on the failed drive) would be rebuilt onto the Hot Spare. After a while, when the rebuild was done and the failed drive was replaced, the data on the Hot Spare would be copied to that new drive. This was called equalizing.
In the VNX2 (with MCx) this last step doesn’t exist anymore. That means the Hot Spare that received the rebuilt data is no longer a Hot Spare: it has become a regular drive! And the replacement drive now becomes a new Hot Spare. When configuring a new VNX2 you’ll see Hot Spare policy rules, but you don’t even need to configure Hot Spares anymore. Just make sure you have some unconfigured drives and you’re good; your VNX2 will use them as Hot Spares from then on.
If I remember correctly, the DMX4 had a similar feature back in 2008, but the approach has now flowed down to the midrange platform as well.
If you have a primary LUN which is replicated using MirrorView/S and you decide to run SnapView snapshots on the remote side, consider that writes to the secondary LUN may have to wait for the COFW activity to complete before an acknowledgement is sent back to the primary array.
So if you’re performing tests on the remote site by using SnapView snapshots, you may want to consider suspending the MirrorView session(s) first in order to guarantee performance on the production site.
A good scenario would be to create clones from the temporary fractured mirrors and as soon as the clones are fully in sync, split the clone from its primary – being the MirrorView secondary – and start the resync in MirrorView.
After the write from the primary array (1) a COFW (Copy On First Write) (2) must take place if the write (1) overwrites a block that hasn’t been written to yet in order to maintain the point in time of the snapshot. After the COFW (2) is complete the acknowledgement (ACK) (3) can be sent back to the primary array.
So even if the snapshot isn’t used by a host, there’s already an increased activity on the remote array.
If the snapshot is in use by a host that writes to it, an unchanged block on the secondary LUN needs to be copied to the RLP (Reserved LUN Pool) first before the overwrite can take place. This will also slow down any ACKs that need to be sent back to the primary array.
Be very careful when starting SnapView sessions on a secondary LUN and even more careful when using the secondary LUNs since it can have a severe impact on the response times of the primary LUN.
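If you do decide to suspend the mirrors for a test window, the fracture/resync can be scripted. A sketch with placeholder names; the image UID comes from the mirror’s image list, and the exact switches may differ per FLARE/OE release, so verify them against the MirrorView/S CLI reference:

```shell
# Admin-fracture the secondary image before the SnapView tests...
naviseccli -h 10.0.0.1 mirror -sync -fractureimage \
  -name "Mirror_LUN_42" -imageuid 50:06:01:60:00:00:00:01

# ...and start the resync once the clones are split off again.
naviseccli -h 10.0.0.1 mirror -sync -syncimage \
  -name "Mirror_LUN_42" -imageuid 50:06:01:60:00:00:00:01
```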
The need for weekly messages
EMC’s Symmetrix has had this feature for a decade or so (or even longer), but for the last few years EMC has been pushing customers to have every array email home once a week so they can keep track of its pulse. And they’re not joking about its importance either: once an array skips a beat, a severity 1 ticket is created to get it fixed as soon as possible. EMC truly seems to care that the arrays they have running all over the world are in good shape and actively monitored.
Read more »
Midrange Mega Launch 2013: #Speed2Lead in real life!
Although the Clariion platform was a great platform a couple of years ago, the constant growth of customers’ environments and their need for more performance means that storage vendors constantly need to improve their products as well. EMC VNX was able to serve customers just right for the last few years. With the introduction of flash storage in storage arrays, performance issues seemed gone. But know that flash devices can easily outperform any rotating device (disk) by 20, 30, maybe even 50 times, and depending on the I/O pattern the back-end of an array could become a serious bottleneck: it wasn’t originally designed for performance demands like that, and the old FLARE code running on the CPUs wasn’t sufficient for the performance demand either. So although FAST VP helps getting hot data onto performance-efficient devices and cold data onto slower, cheaper devices, it’s obvious that the array technology needed an upgrade. And just like every 3 years or so, the time for new technology had come.
Read more »
The countdown has started
Just 3 weeks and a few days to go and it’s EMC World again! Time to meet my old and new friends and finally get some rays. The one thing I’ve been missing the last few weeks is the sun, and I guess most Europeans agree with me. I don’t know what it’s like in other parts of the world, but I certainly need more heat than what we’re having now.
Read more »
I want to bring the discussion about “VNX data replication” to your attention. It lives on the EMC Community Network (ECN) at https://community.emc.com/thread/149825. If you want to ask about replication specifically, you can post your questions here or on ECN.
The author, Rupal Rajwar, is USPEED certified and has worked in EMC’s eServices Customer Support division since 2010.
Although the actual story is over on ECN, I’d like to invite you to join the discussion there:
If you’d like to discuss anything here, you’re welcome to do so.
Let’s join the discussion!
How does an EMC Clariion or VNX decide which Hot Spare will be used for any failed drive?
First of all, not the entire failed drive is rebuilt, only the LUNs that reside on it. Furthermore, all LUNs on the failed drive are rebuilt to the same Hot Spare, so a single failed drive is replaced by a single Hot Spare. If, for example, a 600GB drive fails with only 100GB worth of LUNs on it, in theory a 146GB drive could be invoked to rebuild the data. However, it’s the location of the last LUN block on the failed drive that determines how large the Hot Spare needs to be. If on a 600GB drive the last block of the last LUN sits at the 350GB mark, while the total disk space used by all LUNs on that drive is only 100GB, the 146GB and 300GB Hot Spares aren’t valid choices, since the last block address (350GB) is beyond the 300GB mark. Valid Hot Spares would be 400GB or larger.
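The selection rule boils down to comparing each candidate’s capacity to the highest block address in use, not to the used capacity. A small sketch with the example numbers from above:

```shell
#!/bin/sh
# Hot Spare selection sketch: the spare must cover the highest LUN
# block address on the failed drive (350GB here), not the 100GB of
# actual LUN capacity.
last_block_gb=350
for spare_gb in 146 300 400 600; do   # available Hot Spare sizes (GB)
  if [ "$spare_gb" -ge "$last_block_gb" ]; then
    echo "Smallest valid Hot Spare: ${spare_gb}GB"
    break
  fi
done
```

With the last block at the 350GB mark, this picks the 400GB spare, matching the example above.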
Read more »