
Asset ID: 1-71-1017707.1
Update Date: 2009-12-03
Keywords:

Solution Type: Technical Instruction

Solution 1017707.1: Logical Unit Number (LUN) 0, A SCSI Requirement


Related Items
  • Sun Storage 3510 FC Array
  • Sun Storage 6320 System
  • Sun Storage SAN Foundation Software
  • Sun Storage T3 Array
  • Sun Storage 6020 Array
  • Sun Storage 6120 Array
  • Sun Storage 6920 System
  • Sun Storage 3511 SATA Array
Related Categories
  • GCS>Sun Microsystems>Storage - Disk>Modular Disk - 3xxx Arrays
  • GCS>Sun Microsystems>Storage Software>Sun Storage SAN Software
  • GCS>Sun Microsystems>Storage - Disk>Modular Disk - 6xxx Arrays
  • GCS>Sun Microsystems>Storage - Disk>Modular Disk - Other

Previously Published As
228912


Description
This Technical Instruction explains the importance of having a Logical Unit Number (LUN) 0 on any storage array.
Several issues have been reported from the field where, depending on the array, problems were seen on hosts connected to arrays without a LUN 0. While this Technical Instruction shows examples for the Sun StorEdge[TM] 3510 and SE6x20 arrays specifically, the requirement can apply to other storage as well; however, that still needs to be verified per product.

LUN 0 is a Small Computer Systems Interface (SCSI) protocol requirement and must exist for the host driver to communicate properly with the target.

In SCSI-3, this requirement is explicitly stated in the SCSI standards documentation: SAM-2, section 4.7.2, "SCSI target device".

In SCSI-2, the requirement is implicit in the first sentence of X3T9.2/375D (SCSI-2, section 8.1.1.2).
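In practice, a host's SCSI driver directs its discovery commands (INQUIRY, REPORT LUNS) at LUN 0 of each target it finds. As a quick host-side check (a minimal sketch, assuming a Solaris host running the SAN Foundation stack; the controller and target names are the ones used in the examples below), list the FCP devices and keep only the LUN 0 entries:

# cfgadm -al -o show_FCP_dev | grep ',0 '
c6::20030003ba047ced,0         disk         connected    configured   unknown
c7::20030003ba27cf02,0         disk         connected    configured   unknown

Note that a missing LUN 0 does not necessarily make the ",0" entry disappear; as the examples below show, the host may instead see a phantom LUN 0 that probes as <drive type unknown>.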

Also, from Sun's very first hardware RAID array, the RSM2000 / Sun StorEdge[TM] A3000, onward, the array management software has always depended on LUN 0 being available. Configurations without a LUN 0 were never qualified and have typically led to unexpected behavior.

As an example, with the Sun StorEdge[TM] 351x and SE99xx families of arrays, which are able to present multiple SCSI targets on a single host channel, it is even more important to realize that every attached host MUST be able to communicate with a LUN 0 on EVERY presented target in order to ensure proper operation.

In some cases this requirement even leads to having to map multiple different LUN 0s on the same SCSI target, most commonly when LUN security (World Wide Name (WWN) filtering) is being used; a worked example for the Sun StorEdge 3510 appears below.



Steps to Follow
Example of a Sun StorEdge SE6x20 array in a Storage Area Network (SAN) setup.
Comments are in italics, commands are in bold, and command output is normal text.
Here is the mapping on the Sun StorEdge SE6x20 array:
array00:/:<20>lun map list
Lun No     Slice No
---------------------------
1         1
2         2
3         0
---------------------------
** Total 3 entries **
array00:/:<27>lun perm list
lun  slice   WWN         Group Name  Group Perm  WWN Perm    Effective Perm
--------------------------------------------------------------------------------------------------------
1   1   default         --      --      none        none
1   1   210000e08b07912c    f6800b-dom-c    rw              none        rw
1   1   210000e08b07d92c    f6800b-dom-c    rw              none        rw
2   2   default         --      --      none        none
2   2   210000e08b07912c    f6800b-dom-c    rw              none        rw
2   2   210000e08b07d92c    f6800b-dom-c    rw              none        rw
3   3   default         --      --      none        none
3   3   210000e08b07912c    f6800b-dom-c    rw              none        rw
3   3   210000e08b07d92c    f6800b-dom-c    rw              none        rw
--------------------------------------------------------------------------------------------------------
As can be seen above, there is no mapping to LUN 0 (we have only LUNs 1-3).
Solaris[TM] sees the following:
# cfgadm -al -o show_FCP_dev
Ap_Id                          Type         Receptacle   Occupant     Condition
c4                             fc-private   connected    unconfigured unknown
c5                             fc           connected    unconfigured unknown
c6                             fc-fabric    connected    configured   unknown
c6::20030003ba047ced,0         disk         connected    configured   unknown
c6::20030003ba047ced,1         disk         connected    configured   unknown
c6::20030003ba047ced,2         disk         connected    configured   unknown
c6::20030003ba047ced,3         disk         connected    configured   unknown
c7                             fc-fabric    connected    configured   unknown
c7::20030003ba27cf02,0         disk         connected    configured   unknown
c7::20030003ba27cf02,1         disk         connected    configured   unknown
c7::20030003ba27cf02,2         disk         connected    configured   unknown
c7::20030003ba27cf02,3         disk         connected    configured   unknown
The above output incorrectly shows four LUNs (0-3), when only three (1-3) are really mapped.
Now, look at the output of the format command:
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/ssm@0,0/pci@1e,700000/pci@1/SUNW,isptwo@4/sd@0,0
1. c3t0d0 <SEAGATE-ST318404LSUN18G-4203 cyl 7506 alt 2 hd 19 sec 248>
/ssm@0,0/pci@1a,700000/pci@1/SUNW,isptwo@4/sd@0,0
2. c6t20030003BA047CEDd0 <drive type unknown>
/ssm@0,0/pci@1e,700000/SUNW,qlc@2/fp@0,0/ssd@w20030003ba047ced,0
3. c6t20030003BA047CEDd1 <SUN-T4-0301 cyl 138 alt 2 hd 12 sec 128>
/ssm@0,0/pci@1e,700000/SUNW,qlc@2/fp@0,0/ssd@w20030003ba047ced,1
4. c6t20030003BA047CEDd2 <SUN-T4-0301 cyl 51198 alt 2 hd 64 sec 128>
/ssm@0,0/pci@1e,700000/SUNW,qlc@2/fp@0,0/ssd@w20030003ba047ced,2
5. c6t20030003BA047CEDd3 <SUN-T4-0301 cyl 62808 alt 2 hd 64 sec 128>
/ssm@0,0/pci@1e,700000/SUNW,qlc@2/fp@0,0/ssd@w20030003ba047ced,3
6. c7t20030003BA27CF02d0 <drive type unknown>
/ssm@0,0/pci@1f,700000/SUNW,qlc@1/fp@0,0/ssd@w20030003ba27cf02,0
7. c7t20030003BA27CF02d1 <SUN-T4-0301 cyl 138 alt 2 hd 12 sec 128>
/ssm@0,0/pci@1f,700000/SUNW,qlc@1/fp@0,0/ssd@w20030003ba27cf02,1
8. c7t20030003BA27CF02d2 <SUN-T4-0301 cyl 51198 alt 2 hd 64 sec 128>
/ssm@0,0/pci@1f,700000/SUNW,qlc@1/fp@0,0/ssd@w20030003ba27cf02,2
9. c7t20030003BA27CF02d3 <SUN-T4-0301 cyl 62808 alt 2 hd 64 sec 128>
/ssm@0,0/pci@1f,700000/SUNW,qlc@1/fp@0,0/ssd@w20030003ba27cf02,3
Specify disk (enter its number):
As can be seen, LUN 0 is <drive type unknown> on both controllers. When this occurs, any command that accesses the same bus as the phantom LUN 0 will be noticeably slow, including:
  • the format command,
  • the boot process,
  • almost all VxVM commands (if Volume Manager is installed).
If VxVM is installed, bootup errors such as the following may be seen:
T4 claim_device: x83 inquiry failed - I/O error
Jul  6 15:39:53 f6800b-dom-c scsi: WARNING: /ssm@0,0/pci@1e,700000/SUNW,qlc@2/fp@0,0/ssd@w20030003ba047ced,0 (ssd25):
Jul  6 15:39:53 f6800b-dom-c    offline
Jul  6 15:39:53 f6800b-dom-c vxvm:vxconfigd: V-5-1-8645 Error in claiming /dev/rdsk/c6t20030003BA047CEDd0s2 by NR list: I/O error
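A quick way to spot such a phantom LUN 0 without waiting through a slow boot (a minimal sketch, reusing the commands shown above; the controller names are from this example) is to compare what cfgadm reports at LUN 0 with what format can actually identify. cfgadm lists a LUN 0 entry on each fabric controller:

# cfgadm -al -o show_FCP_dev | grep ',0 '
c6::20030003ba047ced,0         disk         connected    configured   unknown
c7::20030003ba27cf02,0         disk         connected    configured   unknown

but format cannot identify a drive behind it:

# echo | format 2>&1 | grep 'drive type unknown'
2. c6t20030003BA047CEDd0 <drive type unknown>
6. c7t20030003BA27CF02d0 <drive type unknown>

A LUN 0 entry that probes as <drive type unknown> is the phantom described above.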
So make sure you ALWAYS have a LUN 0; otherwise behavior can be unpredictable.
QUESTION: Can we create a separate LUN 0 on a Sun StorEdge SE6x20 array for each host, as with the Sun StorEdge[TM] 351x array?
ANSWER: NO, and you do not need to. It is enough for the LUN 0 to exist once, even if not all hosts have read/write access to it (that is, it is sufficient to have a LUN 0 with permissions set to "none"). As long as a LUN 0 is on the same bus, cfgadm can send commands to the Sun StorEdge SE6x20 array's targets, and the LUN will respond to SCSI INQUIRY commands. Not all hosts need SCSI read/write access to the LUN 0's data area.
This can be significant in cluster environments, where physically sharing LUNs outside the cluster can cause problems due to SCSI reservations. NOTE: To fix (or avoid) this, it is necessary to have a LUN 0, even if that LUN 0 is not intended for this host (that is, the host does not have read/write permission).
For example (this procedure requires host downtime, with all I/O quiesced):
On the host, unconfigure the LUNs:

# cfgadm -c unconfigure c6::20030003ba047ced
# cfgadm -f -c unconfigure c7::20030003ba27cf02

The force option "-f" may be needed for the second command if VxVM or anything else is using the devices at the time; cfgadm will complain if you do not force. NOTE: In both cases above, ALL LUNs on that controller connected to the Sun StorEdge SE6x20 array will be unconfigured.
On the array, remap slice 0 so that there is now a LUN 0 on the array:

array00:/:<30>lun map rm lun 3
Remove the mapping, are you sure [N]: Y
array00:/:<31>lun map add lun 0 slice 0
array00:/:<32>lun map list
Lun No     Slice No
---------------------------
0         0
1         1
2         2
---------------------------
** Total 3 entries **
array00:/:<33>
array00:/:<34>lun perm lun 0 rw grp f6800b-dom-c

Note: The last step (lun perm) is only needed if the host needs to see the LUN's data; everything still works fine if it is excluded.
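One step the transcript leaves implicit: before the host can see the new LUN 0, the attachment points that were unconfigured earlier must be configured again (a minimal sketch, assuming the same c6/c7 attachment points as above; a reconfiguring reboot would achieve the same):

# cfgadm -c configure c6::20030003ba047ced
# cfgadm -c configure c7::20030003ba27cf02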
The important thing is that a LUN 0 exists on the same bus. Back on the host:

# cfgadm -al -o show_FCP_dev
Ap_Id                          Type         Receptacle   Occupant     Condition
c4                             fc-private   connected    unconfigured unknown
c5                             fc           connected    unconfigured unknown
c6                             fc-fabric    connected    configured   unknown
c6::20030003ba047ced,0         disk         connected    configured   unknown
c6::20030003ba047ced,1         disk         connected    configured   unknown
c6::20030003ba047ced,2         disk         connected    configured   unknown
c7                             fc-fabric    connected    configured   unknown
c7::20030003ba27cf02,0         disk         connected    configured   unknown
c7::20030003ba27cf02,1         disk         connected    configured   unknown
c7::20030003ba27cf02,2         disk         connected    configured   unknown

Now the format command output looks correct too:

# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/ssm@0,0/pci@1e,700000/pci@1/SUNW,isptwo@4/sd@0,0
1. c3t0d0 <SEAGATE-ST318404LSUN18G-4203 cyl 7506 alt 2 hd 19 sec 248>
/ssm@0,0/pci@1a,700000/pci@1/SUNW,isptwo@4/sd@0,0
2. c6t20030003BA047CEDd0 <SUN-T4-0301 cyl 34133 alt 2 hd 48 sec 128>
/ssm@0,0/pci@1e,700000/SUNW,qlc@2/fp@0,0/ssd@w20030003ba047ced,0
3. c6t20030003BA047CEDd1 <SUN-T4-0301 cyl 138 alt 2 hd 12 sec 128>
/ssm@0,0/pci@1e,700000/SUNW,qlc@2/fp@0,0/ssd@w20030003ba047ced,1
4. c6t20030003BA047CEDd2 <SUN-T4-0301 cyl 51198 alt 2 hd 64 sec 128>
/ssm@0,0/pci@1e,700000/SUNW,qlc@2/fp@0,0/ssd@w20030003ba047ced,2
5. c7t20030003BA27CF02d0 <SUN-T4-0301 cyl 34133 alt 2 hd 48 sec 128>
/ssm@0,0/pci@1f,700000/SUNW,qlc@1/fp@0,0/ssd@w20030003ba27cf02,0
6. c7t20030003BA27CF02d1 <SUN-T4-0301 cyl 138 alt 2 hd 12 sec 128>
/ssm@0,0/pci@1f,700000/SUNW,qlc@1/fp@0,0/ssd@w20030003ba27cf02,1
7. c7t20030003BA27CF02d2 <SUN-T4-0301 cyl 51198 alt 2 hd 64 sec 128>
/ssm@0,0/pci@1f,700000/SUNW,qlc@1/fp@0,0/ssd@w20030003ba27cf02,2

So now it is a healthy, supportable system.

NOTE: If the permissions for LUN 0 are removed (as follows), the system will still look healthy, with no phantom LUN 0:
array00:/:<44>lun perm lun 0 none grp f6800b-dom-c
array00:/:<45>lun perm list
lun  slice   WWN         Group Name  Group Perm  WWN Perm    Effective Perm
--------------------------------------------------------------------------------------------------------
0   0   default         --      --      none        none
1   1   default         --      --      none        none
1   1   210000e08b07912c    f6800b-dom-c    rw      none        rw
1   1   210000e08b07d92c    f6800b-dom-c    rw      none        rw
2   2   default         --      --      none        none
2   2   210000e08b07912c    f6800b-dom-c    rw      none        rw
2   2   210000e08b07d92c    f6800b-dom-c    rw      none        rw
--------------------------------------------------------------------------------------------------------

# cfgadm -al -o show_FCP_dev
Ap_Id                          Type         Receptacle   Occupant     Condition
c4                             fc-private   connected    unconfigured unknown
c5                             fc           connected    unconfigured unknown
c6                             fc-fabric    connected    configured   unknown
c6::20030003ba047ced,0         unavailable  connected    configured   unusable
c6::20030003ba047ced,1         disk         connected    configured   unknown
c6::20030003ba047ced,2         disk         connected    configured   unknown
c7                             fc-fabric    connected    configured   unknown
c7::20030003ba27cf02,0         unavailable  connected    configured   unusable
c7::20030003ba27cf02,1         disk         connected    configured   unknown
c7::20030003ba27cf02,2         disk         connected    configured   unknown
# cfgadm -o unusable_FCP_dev -c unconfigure c7::20030003ba27cf02
# cfgadm -o unusable_FCP_dev -c unconfigure c6::20030003ba047ced
# cfgadm -al -o show_FCP_dev
Ap_Id                          Type         Receptacle   Occupant     Condition
c4                             fc-private   connected    unconfigured unknown
c5                             fc           connected    unconfigured unknown
c6                             fc-fabric    connected    configured   unknown
c6::20030003ba047ced,1         disk         connected    configured   unknown
c6::20030003ba047ced,2         disk         connected    configured   unknown
c7                             fc-fabric    connected    configured   unknown
c7::20030003ba27cf02,1         disk         connected    configured   unknown
c7::20030003ba27cf02,2         disk         connected    configured   unknown

Example of a Sun StorEdge[TM] 351x SAN setup.
Comments are in italics, commands are in bold, and command output is normal text.
Below is the mapping on the Sun StorEdge[TM] 3510 FC (Fibre Channel) array. Note that channel 5 has a target 47 WITH NO LUN 0; our host is attached to channel 5.

sccli> show lun-map
Ch  Tgt  LUN  ld/lv  ID-Partition  Assigned   Filter Map
--------------------------------------------------------------
0   40   0    ld1    1CF149A6-00   Primary
<snip>
5   47   1    ld0    766975E3-01   Primary
5   47   2    ld0    766975E3-02   Primary

Below, it can be seen that the Sun StorEdge[TM] 3510 FC Array targets on channel 5 are represented differently: they do NOT show up as (Disk device). The significant detail is that the target 47 disk devices do not show up; only the SCSI Enclosure Services (SES) device can be seen.

# luxadm -e dump_map /devices/ssm@0,0/pci@1a,700000/pci@3/SUNW,qlc@4/fp@0,0:devctl
Pos  Port_ID  Hard_Addr  Port WWN          Node WWN          Type
0    10400    0          216000c0ff801eb2  206000c0ff001eb2  0x0  (Disk device)
1    10500    0          226000c0ff901eb2  206000c0ff001eb2  0xd  (SES device)   <====== The SES device is significant !!
2    10e00    0          210100e08b24fb0e  200100e08b24fb0e  0x1f (Unknown Type,Host Bus Adapter)

When a Sun StorEdge[TM] 3510FC array has a target with no LUN 0 mapped to it, it adheres to the SCSI requirements by presenting an SES (or other, configurable) device as LUN 0. This can lead to problematic behavior. In the format output, LUNs 1 and 2 for target 47 show up; LUN 0 for target 47 is absent, as it should be.

# format
Searching for disks...done
c1t47d1: configured with capacity of 90.82GB
c1t47d2: configured with capacity of 90.82GB
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/ssm@0,0/pci@1c,700000/pci@1/SUNW,isptwo@4/sd@0,0
1. c1t47d1 <SUN-StorEdge3510-327R cyl 46499 alt 2 hd 64 sec 64>
/ssm@0,0/pci@1c,600000/SUNW,qlc@1/fp@0,0/ssd@w226000c0ff901eb2,1
2. c1t47d2 <SUN-StorEdge3510-327R cyl 46499 alt 2 hd 64 sec 64>
/ssm@0,0/pci@1c,600000/SUNW,qlc@1/fp@0,0/ssd@w226000c0ff901eb2,2

Now we'll try to contact the array with the management software:

# sccli
sccli: /dev/es/ses1: device reset detected
sccli: /dev/es/ses2: device reset detected
sccli: selected device /dev/rdsk/c1t47d1s2 [SUN StorEdge 3510 SN#001655]
sccli>
This works fine, so what is the problem with not having LUN 0? You may be hitting Bug ID 4888608, fixed in Patch <SUNPATCH: 113723-20>: cfgadm has trouble with this kind of configuration. See this sample output from a different system:

...
c4                             fc-fabric    connected    configured   unknown
c4::210000e08b096a8d           unknown      connected    unconfigured unknown
c4::256000c0ffc01377,0         ESI          connected    unconfigured unknown
c4::256000c0ffc01377,1         disk         connected    unconfigured unknown
...
# cfgadm -c configure c4::256000c0ffc01377
cfgadm: Library error: failed to create device node: 256000c0ffc01377: I/O error

This can lead to timing problems in SAN environments: every boot or reconfiguration waits for the I/O error to return for the LUN 0 ESI device (which is the SES device). Typically, these problems are encountered when LUN masking and WWN filtering are in place. Still, these options are necessary to ensure that multiple hosts sharing a Sun StorEdge[TM] 3510FC array via a SAN do not inadvertently access each other's data. Sun Cluster 3, for example, requires EXCLUSIVE access to all of its shared LUNs. Sharing a Sun StorEdge[TM] 3510FC array between multiple clusters, or between a Sun Cluster and some non-cluster machines, will require you to set up LUN security using WWN filtering. (SAN WWN zoning would be an alternative.)

Below is an example configuration where the host is first EXCLUDED from accessing channel 5, target 47, LUN 0, because that LUN is given exclusively to another host's WWN. Later in the example, a new WWN entry is added in which the host's FC Host Bus Adapter (HBA) WWN is INCLUDED. The host will then be in the access filter of one LUN 0 on target 47, but not of both. That way, two different hosts can each have their own LUN 0 on the same FC target, but pointing to different pieces of data.

Below is sccli output where my host's FC WWN shows up on channel 5, target 46, LUN 0 (with LD1 mapped to it), but another host shows up on channel 5, target 47, LUN 0 (with LD0 mapped to it), which prevents my host from seeing that LUN 0.

sccli> show lun-maps
Ch  Tgt  LUN  ld/lv  ID-Partition  Assigned   Filter Map
--------------------------------------------------------------
0   40   1    ld0    766975E3-01   Primary
<snip>
5   46   0    ld1    6215C0B5-00   Secondary  210000E08B05455D
    <comment: target 46, LUN 0, channel 5 has my host's HBA WWN in the filter>
5   46   1    ld1    6215C0B5-01   Secondary
5   46   2    ld2    7D429427-00   Secondary
5   47   0    ld0    766975E3-00   Primary    210000E08B05455A
    <comment: target 47, LUN 0, channel 5 has a DIFFERENT WWN in the filter>

On the host:

# luxadm -e forcelip /devices/ssm@0,0/pci@1a,700000/pci@3/SUNW,qlc@4/fp@0,0:devctl
# luxadm -e dump_map /devices/ssm@0,0/pci@1a,700000/pci@3/SUNW,qlc@4/fp@0,0:devctl
Pos  Port_ID  Hard_Addr  Port WWN          Node WWN          Type
0    10400    0          266000c0ffe01655  206000c0ff001655  0x0  (Disk device)
1    10500    0          266000c0fff01655  206000c0ff001655  0xd  (SES device)
2    10e00    0          210000e08b05455d  200000e08b05455d  0x1f (Unknown Type,Host Bus Adapter)

The direct-attach host has no trouble with this setup: format and the sccli command work fine, so there is no need to repeat that output. But in a SAN configuration there would be problems: the command cfgadm -c configure c1 would hang for a long time trying to configure this array.

Following is how to set up the Sun StorEdge[TM] 3510FC array in a SAN environment in such a way that EVERY host can have its own LUN 0 assigned to it when using LUN security and WWN filtering. In the example below, we will add a FRESH LUN entry to target 47, channel 5, that has my host's HBA WWN in the LUN filter; we will also label this entry as LUN 0, bringing the total number of LUN 0s mapped to target 47 to two. Using LUN filtering on the 3510, this is possible as long as there is no overlap in the WWNs assigned to each LUN 0. To do this:
  • select another data partition,
  • map it to the SAME target 47 on channel 5, but
  • give it an INCLUSIVE filter for MY host's WWN rather than the other host's.
Going into the telnet/'curses' interface to the Sun StorEdge 3510FC array, select:
'view and edit Host luns'
  'CHL5 ID47 (Primary controller)'
    '<select LUN0 slot>'
      'Add host filter entry'
        '<select an entirely different partition>'
Looking at the command output provided below, channel 5, target 47 has two LUN 0s: one has INCLUDED my host in its WWN filter; the other has INCLUDED another host, which effectively EXCLUDES my host.

sccli> show lun-maps
Ch  Tgt  LUN  ld/lv  ID-Partition  Assigned   Filter Map
--------------------------------------------------------------
0   40   1    ld0    766975E3-01   Primary
<snip>
5   46   0    ld1    6215C0B5-00   Secondary  210000E08B05455D
5   46   1    ld1    6215C0B5-01   Secondary
5   46   2    ld2    7D429427-00   Secondary
5   47   0    ld0    766975E3-00   Primary    210000E08B05455A
5   47   0    ld0    766975E3-02   Primary    210000E08B05455D

Note here that channel 5 shows target 47 twice, BOTH WITH LUN 0; only the fact that a different WWN filter is used for each LUN 0 allows this. Please also note that although both LUN 0s come from the same logical drive (LD0), they come from different partitions (0 and 2, respectively) of that logical drive. My host's WWN gives it access only to LD0 partition 766975E3-02.

# luxadm -e forcelip /devices/ssm@0,0/pci@1a,700000/pci@3/SUNW,qlc@4/fp@0,0:devctl
# luxadm -e dump_map /devices/ssm@0,0/pci@1a,700000/pci@3/SUNW,qlc@4/fp@0,0:devctl
Pos  Port_ID  Hard_Addr  Port WWN          Node WWN          Type
0    10400    0          266000c0ffe01655  206000c0ff001655  0x0  (Disk device)
1    10500    0          266000c0fff01655  206000c0ff001655  0x0  (Disk device)
2    10e00    0          210000e08b05455d  200000e08b05455d  0x1f (Unknown Type,Host Bus Adapter)

Suddenly we see that the former SES device at LUN 0 has changed into a (Disk device). Let's check format:

bash-2.03# format
Searching for disks...done
c1t46d1: configured with capacity of 68.36GB
c1t46d2: configured with capacity of 68.11GB
c1t47d0: configured with capacity of 90.82GB
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
/ssm@0,0/pci@1c,700000/pci@1/SUNW,isptwo@4/sd@0,0
<snip>
3. c1t47d0 <SUN-StorEdge3510-327R cyl 46498 alt 2 hd 64 sec 64>
/ssm@0,0/pci@1c,600000/SUNW,qlc@1/fp@0,0/ssd@w266000c0fff01655,0
Specify disk (enter its number):

Even though the sccli show lun-maps output from the array shows two LUN 0s on target 47, my host sees only one LUN 0. This is the way to ensure that every host has a LUN 0 on every target when LUN filtering is used.
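As a final check from each attached host (a minimal sketch; controller c1 and the Ap_Id shown are constructed from this example's format output, so adjust them for your own configuration), confirm that the host sees exactly one LUN 0 on the target, and that it identifies as a disk:

# cfgadm -al -o show_FCP_dev c1 | grep ',0 '
c1::266000c0fff01655,0         disk         connected    configured   unknown

If that entry were missing, or probed as <drive type unknown>, the WWN filters on the two LUN 0 mappings would need to be reviewed for overlap.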


Product
Sun StorageTek SAN Foundation 4.4 Software
Sun StorageTek T3+/6X20 Controller Firmware 3.1
Sun StorageTek 3510 FC Array
Sun StorageTek 6320 System
Sun StorageTek 6120 Array
Sun StorageTek 6020 Array
Sun StorageTek T3 Array
Sun StorageTek 6920 System
Sun StorageTek 3511 SATA Array

Internal Comments
This document contains normalized content and is managed by the Domain Lead(s) of the respective domains. To notify content owners of a knowledge gap in this document, and/or prior to updating this document, please contact the domain engineers who manage this document via the “Document Feedback” alias(es) listed below:

[email protected]

The Knowledge Work Queue for this article is KNO-STO-SAN.

This document is basically a collection of information from bugs in SunSolve, such as Bug IDs 4296354 and 4690602 and, for the SE3510FC, Bug ID 4888608, as well as extensive testing in the lab for the examples above.


The following site has information on SCSI standards:

http://www.scsilibrary.com/standards.html


Co-Authored By: Michiel Bijlsma & Ahmad Ghanawi


LUN0, phantom, maserati, x83 inquiry failed, se6320, 6320, 6120, se6120, 6x20, se6120, t4, storage, lun 0, t3+, t3b, san, scsi, minnow, se3510, 3510fc, 3510, 3511, wwn, wwn-filter, filter, security, lun0, logical drive, ld0, cfgadm, sccli, audited, normalized
Previously Published As
77291

Change History


Date: 2009-11-18
User Name: 84789
Action: Reviewed
Comment: Reviewed

Attachments
This solution has no attachment