
Checking VNX MirrorView replication progress from the CLI

It’s a very short post, but in case you’re looking for the command:

naviseccli -h [SPx ip#] mirror -sync -listsyncprogress -name [LUN name]


It’s that simple!

Oh, I’m assuming you already have the logon credentials in a security file. If not, you need to add them to the command:

naviseccli -h [SPx ip#] -user [username] -password [super secret password] -scope [0-1-2] mirror -sync -listsyncprogress -name [LUN name]
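If you don’t have a security file yet, naviseccli can create one for you with the -AddUserSecurity option, so later commands can omit the credentials. This is a sketch; double-check the exact syntax against the naviseccli documentation for your VNX OE release:

```shell
# Store credentials for the current OS user in a local security file
# (scope 0 = global, 1 = local, 2 = LDAP; pick the one that matches your account):
naviseccli -AddUserSecurity -user [username] -password [super secret password] -scope 0

# Verify by running any command without explicit credentials:
naviseccli -h [SPx ip#] getagent
```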

How to start copy to hotspare manually


I recently had to manually invoke a hot spare in a VNX 5200, but in Unisphere the option was greyed out.

[Screenshot: the Copy to Hot Spare option greyed out in Unisphere]

On the CLI the command wasn’t supported. Now what?

[Screenshot: the copytohotspare command failing on the CLI]

According to https://support.emc.com/kb/184890, the proper command is now:

naviseccli -h [ip of one SP] copytodisk [source-disk] [hot spare]

[Screenshot: the copytodisk command completing successfully]

Using the “getdisk” command will show you that the rebuild has actually started.

Bear in mind that disks are addressed in the format “Bus_Enclosure_Disk”; for example, 1_2_3 means disk 3 (the 4th disk) in enclosure 2 on bus 1.
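The addressing convention and the rebuild check can be sketched in plain shell. The “Prct Rebuilt” field name below is quoted from memory of typical getdisk output, so treat the sample as an assumption and compare it with what your array actually prints:

```shell
#!/bin/sh
# Split a Bus_Enclosure_Disk address into its components.
# e.g. 1_2_3 -> bus 1, enclosure 2, disk 3 (the 4th disk slot).
bed="1_2_3"
IFS=_ read -r bus enc disk <<EOF
$bed
EOF
echo "bus=$bus enclosure=$enc disk=$disk"

# Hypothetical snippet of "getdisk" output; on a live array you would
# pipe the real command, e.g. naviseccli -h [SPx ip#] getdisk 1_2_3,
# instead of this sample text.
sample="Bus 1 Enclosure 2  Disk 3
State:                   Rebuilding
Prct Rebuilt:            42"
echo "$sample" | grep -i "prct rebuilt"
```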

In Unisphere you can actually see the progress of the rebuild:

[Screenshot: disk rebuild progress in Unisphere]

Symmetrix offers a new kind of MAXimum Virtualisation (VMAX)

[Image: VMAX 100K, 200K and 400K]

The mother of all arrays has just been given an upgrade!

Well, OK, maybe EMC did not produce the mother, since it’s fair to say the IBM 3390 disk subsystem came first. But since the first Symmetrix came out in the early 90s with no more than a dozen or two disks, EMC has come a long way: they set the standard for enterprise storage arrays. And it wasn’t just size that mattered back then; performance was, and still is, the number one objective for the Symms. After the “dark ages” (roughly before the year 2000), things got serious with the DMX series in 2003. The number of disks went up, and loads of cache had to make sure that performance was guaranteed. The DMX1, DMX2 and DMX3/4 were quite a success.

And then there was VMAX

Read more »

Increased response times on VNX when using Windows 2012

Windows 2012 can cause higher response times on VNX

When Windows 2012 issues Trim or Unmap commands to thin LUNs on a VNX, Storage Processor response times can increase, or the SP may even initiate a bugcheck.

As part of disk operations to reclaim free space from thin LUNs, Windows 2012 Server can issue large numbers of the SCSI command 0x9E/0x12 (Service Action/Get LBA Status). This SCSI command results in what is called a “DESCRIBE_EXTENTS” I/O on the VNX Storage Processor (SP). These commands are used as part of the Trim/Unmap process to see if each logical block address (LBA) that has been freed up on the host’s file system is allocated on the VNX thin LUN. The host would then issue Unmap SCSI commands to shrink the allocated space in the thin LUN, thus freeing up blocks that were no longer in use in the file system.

RecoverPoint also issues these same SCSI commands when the Thin LUN Extender mechanism is enabled, which can cause similar performance issues. See knowledge base article KB174052 for more information about the RecoverPoint variation of this issue and how to prevent it.

Read more »
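If the Unmap traffic itself turns out to be the problem, one commonly used host-side workaround is to turn off Windows’ delete notifications, which stops Windows 2012 from sending Trim/Unmap to the array. Whether that trade-off is acceptable depends on your thin-provisioning strategy, so verify it against the knowledge base article before applying it:

```bat
:: Check whether delete notification (Trim/Unmap) is currently enabled:
:: a value of 0 means enabled, 1 means disabled.
fsutil behavior query DisableDeleteNotify

:: Disable delete notification so the host stops sending Trim/Unmap:
fsutil behavior set DisableDeleteNotify 1
```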

Optimizing performance using VAAI and the ESX MaxHWTransferSize setting


If you’re running an EMC VNX on a block OE version lower than 05.32.000.5.209, you might want to upgrade to the latest and greatest version (patch 209 or newer). Patch 209 contains EMC’s latest fixes and enhancements for VAAI performance, and many of the known performance issues have been fixed in the 209 code. However, in some environments sub-optimal performance has still been seen with xcopy operations, or in some cases with non-xcopy I/O issued during xcopy operations to the same pool.

Read more »
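The xcopy chunk size ESXi uses for VAAI offload is controlled by the advanced host setting /DataMover/MaxHWTransferSize (a value in KB, 4096 by default). A sketch of how you would inspect and change it on an ESXi host; the value that is right for your array model and code level should come from EMC’s own guidance, not from this example:

```shell
# Show the current maximum hardware-offloaded transfer size (in KB):
esxcfg-advcfg -g /DataMover/MaxHWTransferSize

# Change it, e.g. to 16384 KB (16 MB); pick the value your vendor recommends:
esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize
```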