Tag Archives: Clariion

Cisco Smart zoning – part II: examples

Smart zoning examples

In my smart zoning post from last February I already showed how to get started with Cisco smart zoning. I initially planned to add a more detailed calculation of how much time you can save by using smart zoning compared to SIST (single initiator, single target) zoning.

SAN fabric

I was talking to an EMC SAN instructor (Richard Butler) this week, and after a bit of whiteboarding and some hand-waving to picture how massive a traditional SIST zone environment would be, we agreed that smart zoning is the way to go.
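To give a feel for the numbers: with SIST zoning every initiator/target pair gets its own zone, while a smart zone can hold all members and lets the switch restrict traffic to initiator-target pairs. The sketch below is just a back-of-the-envelope comparison with hypothetical host and array port counts, not output from any real fabric.

```python
# Back-of-the-envelope zone count: traditional SIST zoning vs. smart zoning.
# The numbers (hosts, array ports) are made up for illustration.

def sist_zones(initiators: int, targets: int) -> int:
    """One zone per initiator/target pair (single initiator, single target)."""
    return initiators * targets

def smart_zones(initiators: int, targets: int) -> int:
    """One zone holding all members; the switch only permits
    initiator-to-target communication within it."""
    return 1

hosts, array_ports = 100, 4                                # hypothetical fabric
print(f"SIST zones : {sist_zones(hosts, array_ports)}")    # 400 zones to create and maintain
print(f"Smart zones: {smart_zones(hosts, array_ports)}")   # 1 zone to maintain
```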

Read more »

How to change the VNX weekly heartbeat date and time

Changing the time for the weekly heartbeat

People with Clariion or VNX systems installed on site know that these arrays email “home” (that’s EMC, or you) once a week on a seemingly random day and time. Once that day of the week and time are set, the “I’m still alive” email will go out at that moment every week. But what if you don’t want that email sent out on Thursday at 2:47AM, and instead want all of your arrays to send it on Saturday at noon sharp? You will need to adjust the parameters. I didn’t find a way to change the weekday directly, so I change the time less than a day before it needs to run. So if I want it to run on Saturday at noon, I can run this script on Friday after noon; the array will pick the next available day automatically.
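To illustrate the timing trick (this is not a replacement for the actual commands, which are in the full post): the sketch below assumes the array simply fires the heartbeat at the first upcoming occurrence of the configured time, so changing the time on Friday afternoon makes the next run land on Saturday at noon.

```python
# Minimal sketch of the "pick the next available day" behaviour described above.
from datetime import datetime, timedelta

def next_heartbeat(change_moment: datetime, heartbeat_time: str) -> datetime:
    """Return the first occurrence of heartbeat_time (HH:MM) after change_moment."""
    hh, mm = map(int, heartbeat_time.split(":"))
    candidate = change_moment.replace(hour=hh, minute=mm, second=0, microsecond=0)
    if candidate <= change_moment:
        candidate += timedelta(days=1)   # time already passed today, so it runs tomorrow
    return candidate

# Change the setting on a Friday at 14:00 and the next 12:00 falls on Saturday.
print(next_heartbeat(datetime(2013, 5, 3, 14, 0), "12:00"))  # 2013-05-04 12:00, a Saturday
```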

Read more »

CX or VNX MirrorView with SnapView active on the remote side

If you have a primary LUN which is replicated using MirrorView/S and you decide to run SnapView snapshots on the remote side, consider that writes to the secondary LUN may have to wait for the COFW activity to complete before an acknowledgement is sent back to the primary array.

So if you’re performing tests on the remote site by using SnapView snapshots, you may want to consider suspending the MirrorView session(s) first in order to guarantee performance on the production site.

A good approach would be to create clones of the temporarily fractured mirrors and, as soon as the clones are fully in sync, split each clone from its source (the MirrorView secondary) and start the resync in MirrorView.

MirrorView has to wait for SnapView

After the write from the primary array (1), a COFW (Copy On First Write) (2) must take place if the write (1) overwrites a block that hasn’t been modified since the snapshot session started, in order to preserve the point in time of the snapshot. Only after the COFW (2) is complete can the acknowledgement (ACK) (3) be sent back to the primary array.

So even if the snapshot isn’t used by a host, there’s already increased activity on the remote array.

If the snapshot is in use by a host that writes to it, an unchanged block on the secondary LUN needs to be copied to the RLP (Reserved LUN Pool) first before the overwrite can take place. This will also slow down any ACKs that need to be sent back to the primary array.
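As a rough illustration of why this matters for response times, here’s a minimal model with made-up latency numbers; it only shows that the ACK (3) has to wait for the COFW (2), nothing more.

```python
# Illustrative model only (hypothetical numbers): a COFW on the secondary LUN
# delays the ACK back to the primary array for a MirrorView/S write.
MIRROR_WRITE_MS = 1.0   # assumed time to commit the mirrored write on the secondary
COFW_COPY_MS    = 8.0   # assumed time to copy the original chunk to the RLP

def remote_ack_latency(first_write_to_chunk: bool) -> float:
    """The ACK can only go back to the primary after any required COFW completes."""
    latency = MIRROR_WRITE_MS
    if first_write_to_chunk:
        latency += COFW_COPY_MS   # step (2) has to finish before the ACK (3)
    return latency

print(remote_ack_latency(first_write_to_chunk=False))  # 1.0 ms: no snapshot overhead
print(remote_ack_latency(first_write_to_chunk=True))   # 9.0 ms: COFW delays the ACK
```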

Conclusion

Be very careful when starting SnapView sessions on a secondary LUN, and even more careful when presenting those snapshots to a host, since this can have a severe impact on the response times of the primary LUN.

EMC VNX Replication

I want to bring the discussion about “VNX data replication” to your attention. It’s on ECN at https://community.emc.com/thread/149825. If you have questions about replication specifically, you can post them here or on ECN.

The author, Rupal Rajwar, is USPEED certified and has been working in EMC’s eServices Customer Support division since 2010.

Which Hot Spare will be used for a failed drive? (EMC Clariion / VNX)

Hard Drive

How does an EMC Clariion or VNX decide which Hot Spare will be used for any failed drive?

First of all, not the entire failed drive will be rebuilt, but only the LUNs that reside on it. Furthermore, all LUNs on the failed drive will be rebuilt to the same Hot Spare, so a single failed drive is replaced by a single Hot Spare. This means that if, for example, a 600GB drive fails with only 100GB worth of LUNs on it, in theory a 146GB drive could be invoked to rebuild the data. However, it’s the location of the last LUN block on the failed drive that determines how large the Hot Spare needs to be. If on a 600GB drive the last block of the last LUN sits at the 350GB mark, but the amount of disk space used by all LUNs on that drive is only 100GB, the 146GB and 300GB Hot Spares aren’t valid choices, since the last block address (350GB) is beyond the 300GB mark. Valid Hot Spares would therefore be 400GB or larger.
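Here’s a small sketch of that selection rule (hypothetical sizes, not naviseccli output): the deciding factor is the highest LUN block address on the failed drive, not the total capacity in use.

```python
# Sketch of the Hot Spare eligibility rule described above. Sizes in GB.
def valid_hot_spares(last_lun_block_gb: float, hot_spares_gb: list) -> list:
    """Return the Hot Spare capacities large enough to hold the highest LUN block address."""
    return sorted(size for size in hot_spares_gb if size >= last_lun_block_gb)

# A 600GB drive fails: only 100GB of LUNs on it, but the last LUN block sits at 350GB.
print(valid_hot_spares(350, [146, 300, 400, 600]))  # [400, 600] - 146GB and 300GB are too small
```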

Read more »