Category Archives: Storage array

Stretching and unstretching ActiveCluster PODs on Pure arrays

Scenario

Suppose you have two (or more) datacenters and you’re running a true active/active setup: clusters are spread over both sites and each host has access to the local volume as well as to the identical writable copy on the second site. You’ve accomplished this by setting up ActiveCluster with a few PODs containing volumes, everything is working fine and the PODs are in sync, so volumes can be written to at both locations.

The OTA team is testing a new application on one of the sites, without the cluster being spread over both sites. When testing is done, it’s time to go to production and move the application to a production cluster. The formerly local volumes need to be added to a POD, because they need to be writable on the second location as well (or simply need to be replicated to the DR site to have the data in two locations).

Moving one or more volumes to a POD – steps

In order to move one or more volumes into PODs, you can either create a new unstretched POD or – and this can be tricky – unstretch an existing POD first. If a POD contains volumes that are actively being accessed on both sites, the hosts need to stop using the volumes on the array that is going to be removed from the POD for a short while. When hosts are configured correctly (having access to each volume on both arrays), you don’t need to take any action, as the I/O will automatically be redirected to the only surviving Pure array. Of course this only applies to the hosts that are actively accessing volumes in the remote POD that you want to unstretch; hosts on the “surviving” array will continue to access that array and will not be able to access the volumes on the second site for a short while. Stopping I/O is necessary because when unstretching a POD, one side loses all access to the volumes in that particular POD.

If hosts are not configured to access volumes on both sites, you will need to fail over the resources to the surviving site. Once no I/O is going to the array that doesn’t contain a copy of the OTA volumes, the POD that will hold these formerly OTA volumes can be unstretched.

If this POD contains a large number of volumes, there’s something interesting to be seen later on! Pay attention!

When you’re sure all I/O to volumes in the POD on the second array has stopped, or multi-site access is configured correctly, you can start unstretching the POD: select the POD and remove the array that is remote as seen from the volume(s) you need to add to that POD. As soon as the POD contains only one array, you can add the volumes that need to be present on both sites.
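On the Purity CLI, the steps above might look roughly like this. The pod, array and volume names are made up for illustration, and the exact syntax can differ between Purity releases, so check the `purepod` and `purevol` help on your array first:

```shell
# Unstretch: remove the remote array (here "array-b") from the pod,
# leaving the pod resident on the local array only.
purepod remove --array array-b pod1

# Move the formerly local test volumes into the now-unstretched pod.
# Moving a volume into a pod renames it, e.g. vol-ota1 becomes pod1::vol-ota1.
purevol move vol-ota1 pod1
purevol move vol-ota2 pod1
```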

Resyncing

When all OTA volumes in our example have been added to the unstretched POD, you can re-add the remote Pure array and the resync will start.
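In CLI terms, re-stretching is the reverse of the earlier removal (again, the pod and array names are illustrative):

```shell
# Stretch the pod again by re-adding the remote array; this starts the resync.
purepod add --array array-b pod1

# Check the pod status (and resync progress) from the pod listing.
purepod list
```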

Depending on the activity on the volumes in that POD, this can be a short transition or somewhat longer (sometimes even hours if it’s a really active POD), but there’s one interesting thing that I noticed during the numerous resyncs that I’ve witnessed so far: when the resync starts, it looks like it’s taking ages to get anywhere and it might even show disappointing progress, until the resync reaches 20%.

All of a sudden the resync seems to speed up, and as soon as the counter passes 20% it will reach 100% in a matter of seconds (or minutes). I cannot find any proof that this progress indicator means anything other than what it looks like (20% meaning only 20% is actually resynced), but my assumption is that 20% means the bulk copy is complete, and everything above 20% is catching up on the last few “dirty” blocks on the sides that are still being written to.

The good thing is that as soon as you realize that 20 is the new 100, waiting for the resync isn’t half as bad as it looks at first!

Happy resyncing, everybody!

How to test the alerting in a Pure Storage FlashArray

When configuring SMTP or syslog for alerting, the easiest way is through the GUI, simply because everything you need to configure is right there in plain sight.

For this example, we will assume the syslog servers and SMTP relay host and sender domain have been specified. If not, these can be set from the GUI under Settings > System > Syslog servers and Settings > System > Alert Routing.

But to test if it all works is a different story: there’s no test button!

So we need to log on to the CLI. Use your favorite SSH client and log on to the array.

Syslog

First, you can check whether syslog is configured to your liking by entering the command:

purelog list

You should now see the configured syslog servers and the ports that are used.

To test whether syslogging works, enter the command:

purelog test

SMTP

For emails the command to view the settings is:

purealert watcher list

You should now see the email addresses that will be used whenever the array needs to send an email. To test this, use the following command:

purealert watcher test [email protected]

If all goes well you will then receive an email similar to:

Hello,

This is a test message from your Pure Storage Array.

Controller Serial: PCTFL1953173B
-Pure Storage Array PUREARRAY-027-ct0

How to match a Windows (HyperV) disk to a SAN attached disk using the wwn


Where do I find the wwn of a disk in Windows / HyperV? That’s the question.

There are a number of identifiers to find out which LUN is which disk, but the only undeniably unique way to match them is the globally unique wwn number of a LUN.
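One way to read it, assuming a reasonably recent Windows Server with the Storage module available, is PowerShell’s Get-Disk: for SAN disks the UniqueId property typically contains the naa/wwn. This is a sketch, not taken from the original post:

```powershell
# List each disk with its wwn (UniqueId), so it can be matched
# against the wwn of the LUN as shown on the storage array.
Get-Disk |
    Select-Object Number, FriendlyName, SerialNumber, UniqueId |
    Format-Table -AutoSize
```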

The LUN number, as assigned by the storage array, can be found using diskpart:

Read more »

How to list the naa-numbers of LUNs and VMware VMFSs on a Dell EMC Unity system


In the Unity the naa numbers (wwn) are listed in the “Block” section, but not in the VMware section. If you view the LUNs from the host perspective the naa numbers are visible, but having them in the list of LUNs would have been easier. You can list all details of LUNs and datastores on the CLI by using the uemcli commands:

uemcli -d 10.11.12.13 -u Local\admin -p [password] /stor/prov/luns/lun show -detail > unity.txt
uemcli -d 10.11.12.13 -u Local\admin -p [password] /stor/prov/vmware/vmfs show -detail >> unity.txt

Now simply open the unity.txt file and voila: there they are!
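If you only want the naa numbers rather than the full detail dump, a simple grep over the output file will do. The field names here are from memory of the uemcli detail output, so verify them against your own unity.txt:

```shell
# Pull just the LUN/datastore names and their WWNs from the detail output.
grep -E "Name|WWN" unity.txt
```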

New Dell EMC Unity lineup: Unity XT 380F, 480F, 680F and 880F?

While looking for something totally different, I stumbled upon a few new Unity XT (?) model numbers.

On this dellemc.com website I spotted a Chinese publication on these new models.

[edit] The website is now working: www.dellemc.com.

Read more »