Sun Microsystems, Inc.  Sun System Handbook - ISO 3.4 June 2011 Internal/Partner Edition

Asset ID: 1-71-1001837.1
Update Date: 2011-02-11

Solution Type: Technical Instruction

Solution 1001837.1: How to Replace a Disk in a Sun Storage[TM] 3510 FC JBOD Array


Related Items
  • Sun Storage 3510 FC Array
Related Categories
  • GCS>Sun Microsystems>Storage - Disk>Modular Disk - 3xxx Arrays

Previously Published As: 202514


Applies to:

Sun Storage 3510 FC Array
All Platforms

Goal


This document explains how to replace a failed disk drive within a Sun Storage[TM] 3510 FC JBOD (Just a Bunch of Disks) array.

Solution



Steps to Follow

1. For a full list of 3510 FC array supported configurations, including updates to
   the support levels below, check the latest versions of:
   - Sun StorEdge[TM] 3510 FC and 3511 SATA Array Release Notes (817-6597)
   - Sun StorEdge[TM] 3000 Family Installation, Operation, and Service Manual,
     Appendix B: Using a Standalone JBOD Array (3510 FC Array Only)

   The Sun StorEdge 3510 JBOD expansion unit is supported in a single (standalone)
   array configuration with the following support conditions:
   - Support for volume servers only: 220R, 250, 420R, 450, V120, V280, and V880
   - Support for Solaris[TM] 8, 9 and 10 Operating Systems only
   - Support for Veritas Volume Manager (VxVM) 3.5, 4.0, 4.1, 5.0 or later, and
     Sun Solaris[TM] Volume Manager (SVM)/Solstice DiskSuite[TM] (SDS)
   - Multi-pathing and/or load balancing between a server and a single array via
     VxVM DMP only (no MPxIO support)
   - No daisy-chaining of JBODs; a single JBOD connected to single or dual 2Gb FC
     HBAs only
   - No hub or switch between server and JBOD
   - Data only; no booting from an FC JBOD
   - No cluster support, neither VCS nor Sun[TM] Cluster

2. Ensure all packages and patches are installed according to the release notes
   section "Installing Sun StorEdge SAN Foundation Software". Alternatively, the
   correct SAN Foundation 4.x patches to download can be found in the Sun StorEdge
   SAN Foundation Software 4.4 Installation Guide (817-3671). In most cases, only
   the patches for qlc, luxadm and cfgadm are needed; a quick check is sketched
   below.
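As a quick sanity check that the pieces from Step 2 are in place, something along
these lines can be used. This is a sketch only: SUNWsan as the SAN Foundation
package name is an assumption, and the patch ID must be substituted from the
release notes for your SAN Foundation version.

# pkginfo -l SUNWsan              <--- SAN Foundation kit installed? (package name assumed)
# showrev -p | grep <patch-id>    <--- substitute each patch ID from the release notes
# modinfo | grep qlc              <--- qlc HBA driver loaded?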

3. Remove the disk configuration from the system. Important: never use the
   luxadm remove_device command.
   Take the failed disk offline using luxadm(1M):
   # /usr/sbin/luxadm -e offline /dev/rdsk/cxtyd0s2
   Remove its entries from /dev using devfsadm(1M):
   # /usr/sbin/devfsadm -Cv
   Note: With the latest luxadm patch, this procedure should no longer fail with
   the following message when the 3510 JBOD disk is under SDS or SVM control:
   # /usr/sbin/luxadm -e offline /dev/rdsk/cxtyd0s2
   devctl: I/O error

4. Now replace the failed disk. For the physical replacement, there is no way to
   locate the failed disk in the box, so keep the chart below in mind for an
   array with a boxid set to 0:

   targets
   0   3   6    9
   1   4   7   10
   2   5   8   11

   If the boxid has been modified, just subtract (16 * boxid) from the target to
   get the position in the chart:
   boxid = 1   targets from 16 to 27
   boxid = 2   targets from 32 to 43
   boxid = 3   targets from 48 to 59
   boxid = 4   targets from 64 to 75
Note: Remember that the boxid can be set by the hidden switch under the front
left plastic cover. A worked example of the target-to-slot arithmetic follows.
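For example (the numbers here are illustrative only): if the failed disk reports
target 22 on an array with boxid set to 1, then 22 - (16 * 1) = 6, which the
chart places in the top row, third column. The same arithmetic from the shell:

# expr 22 - 16 \* 1
6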

After disk replacement, the devfsadmd(1M) daemon should automatically recognize
the new disk and create the device nodes and links so that the disk is visible
to format(1M). If not, the following commands should be used to diagnose the
issue:
# /usr/sbin/luxadm -e port
/devices/pci@1f,0/pci@1/SUNW,qlc@2/fp@0,0:devctl CONNECTED
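If the port shows CONNECTED but the new disk is still not visible, a re-scan is a
reasonable next check before digging deeper. A minimal sketch, assuming the
replacement disk came up on the same loop:

# /usr/sbin/devfsadm          <--- (re)create device nodes and /dev links
# echo | format               <--- the new disk should now appear in the disk list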

Note:
If you get a "NOT CONNECTED" error on the path used by the 3510, check the fiber
connection on the box and the server using cfgadm(1M):

# /usr/sbin/cfgadm -al
Ap_Id                    Type         Receptacle   Occupant       Condition
c1                       scsi-bus     connected    configured     unknown
c1::/dev/lus             unknown      connected    configured     unknown
c1::rmt/0                tape         connected    configured     unknown
c2                       scsi-bus     connected    configured     unknown
c2::lus1                 unknown      connected    configured     unknown
c3                       fc-private   connected    configured     unknown
c3::2100000c5020555d     disk         connected    configured     unknown
c3::2100000c50205653     disk         connected    configured     unknown
c3::2100000c50205a3f     disk         connected    configured     unknown
c3::2100000c50205aad     disk         connected    configured     unknown
c3::2100000c50205d18     disk         connected    configured     unknown
c3::215000c0ff002f2a     ESI          connected    configured     unknown
c4                       fc           connected    unconfigured   unknown
If cfgadm(1M) doesn't show a result like the above, more work will be required.
When the controller isn't seen, or is seen as unconfigured, run the following:
# /usr/sbin/cfgadm -c configure cx
When drives appear with the Condition column set to "unusable", issue the following:
# /usr/sbin/luxadm -e forcelip /devices/pci@1f,0/pci@1/SUNW,qlc@2/fp@0,0:devctl
# /usr/sbin/luxadm -e port      <--- gives the pathname to use with forcelip
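After a forcelip the loop re-initializes, so allow a few seconds and recheck. A
minimal sketch, assuming the array sits on controller c3 (the controller name is
illustrative):

# /usr/sbin/cfgadm -al c3     <--- drives should no longer show "unusable"
# /usr/sbin/devfsadm -Cv      <--- clean up stale entries and rebuild /dev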


Refer to the CR below if you receive the devctl error documented in Step 3 above:
BugID 5075852 (for the workaround).

You can internally check the configuration with the What Works With What (WWWW)
matrix at:

http://webhome.sfbay/networkstorage/sales/matrix.html





Special note regarding Step #3

According to BugID 4921470 and BugID 6376642:

"The luxadm(1M) utility is not supported for monitoring and managing Sun
StorEdge 3000 family arrays. However, certain luxadm arguments and options
can be used, including display, probe, dump_map, and rdls."

The revision will appear in section B.4 of the Sun StorEdge 3000 Family
Installation, Operation, and Service Manual.
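Of the luxadm arguments the note does allow, probe and display are the handiest
for a quick check against a 3510 JBOD. A minimal sketch (the disk path is
illustrative):

# /usr/sbin/luxadm probe                         <--- list FC devices visible on the loop
# /usr/sbin/luxadm display /dev/rdsk/c3t4d0s2    <--- detailed status for one disk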



Change History
Date: 2011-02-11
User Name: [email protected]
Action: Currency & Update





Attachments
This solution has no attachment
  Copyright © 2011 Sun Microsystems, Inc.  All rights reserved.