I recently had the “pleasure” to figure out what was wrong with a Brocade-based SAN environment. Servers were losing connectivity on one of their HBAs, but all links showed online, so further investigation was necessary.
Going through all the error counters on each of the long-wave SFPs finally revealed that one SFP’s health was marginal (hence it was still online, but very buggy indeed). The Web Tools GUI showed this particular SFP as orange instead of green. Disabling and re-enabling the port didn’t help, so I decided to shut this SFP down for good. And guess what: all my troubles went away. The trunk this SFP was part of went back to a non-redundant but healthy state, and all servers returned to normal operations and got their redundant paths back.
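For reference, these are the kind of Fabric OS CLI commands I used for this check (port 4 is just an example number, and the exact output varies a bit between Fabric OS versions):
porterrshow (shows error counters for all ports; look for climbing enc out, crc err and link fail counters)
sfpshow 4 (shows diagnostics for the SFP in port 4, such as temperature, voltage and TX/RX power)
portdisable 4 (takes the suspect port offline; portenable 4 brings it back online)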
So to summarize the story: look for marginal or even faulted SFPs when vague connectivity issues arise. If links are redundant, shutting down the faulty one might help.
Previously I wrote about configuring NTP, time and timezone settings on a Cisco switch, and now it’s time to do the same on a Brocade switch.
It’s in fact not that hard to do. Log in to the CLI and use the following commands:
tsclockserver ntp.domain.ext (make sure DNS is set up properly first, since a hostname is used here instead of an IP address)
This will set the NTP server address on this switch to ntp.domain.ext. Set this only on the principal switch, as this switch will propagate the time to the other switches in the fabric.
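You can verify the setting by running tsclockserver without any arguments, which prints the currently configured clock server:
tsclockserver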
To set the timezone, use the interactive timezone command (tstimezone --interactive on recent Fabric OS versions):
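tstimezone --interactive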
This will ask for the region and country the switch is located in.
Choose 8 for Europe and 34 for the Netherlands, and after verifying the setting, choose 1 (yes) to set the timezone.
Use the “date” command to verify the current time, date and timezone:
Wed May 13 01:08:32 CEST 2015
This makes life a lot easier when troubleshooting!
If you have multiple datacenters or a multi-tenant Fibre Channel environment and you’re using Cisco FC switches, it’s a best practice to use VSANs to separate the configuration of each location / tenant. To allow storage arrays and / or hosts in different VSANs to communicate with each other, Inter-VSAN Routing (IVR) needs to be used.
If you need two EMC VNX storage arrays to “talk” to each other across two or more datacenters (for MirrorView data replication, for example), or hosts in one DC to talk to storage in another DC, using transit VSANs (and therefore IVR) keeps the VSANs with your equipment indoors and only the slightly more vulnerable VSAN outdoors. If some farmer with his tractor rips up your single-mode fiber, only the outdoor transit VSAN is fractured; the indoor VSANs remain unharmed. Communication between the remote sites is interrupted, of course, but the indoor VSANs / fabrics remain unchanged. A rough configuration sketch follows below.
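As a rough sketch of what the IVR side of this can look like on an MDS switch (the VSAN numbers, zone names and pWWNs below are made up for illustration; say VSAN 10 and 20 are the indoor VSANs and the transit VSAN sits on the inter-DC ISLs):
feature ivr
ivr nat
ivr distribute
ivr vsan-topology auto
ivr zone name MIRRORVIEW_VNX1_VNX2
  member pwwn 50:06:01:60:08:60:11:22 vsan 10
  member pwwn 50:06:01:61:08:64:33:44 vsan 20
ivr zoneset name IVR_ZONESET
  member MIRRORVIEW_VNX1_VNX2
ivr zoneset activate name IVR_ZONESET
With “ivr vsan-topology auto” the switches discover the VSAN topology (including the transit VSAN) themselves, and “ivr distribute” keeps the IVR configuration in sync across the fabric via CFS.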
VMware now has this great new feature to be more in control of where its data blocks actually land on the storage system: Virtual Volumes (VVols). But up until now, EMC didn’t have a system capable of actually providing the back end for that. Until now, I said: starting with the VNXe 3200, these storage arrays are VVol-capable and you can play around with that yourself. FOR FREE!
The Software Defined VNX is now a reality!