Sun Microsystems, Inc.  Sun System Handbook - ISO 3.4 June 2011 Internal/Partner Edition

Asset ID: 1-71-1007965.1
Update Date: 2008-12-30
Keywords:

Solution Type: Technical Instruction

Solution 1007965.1: Sun StorEdge[TM] 6130: Overview of configuration via sscs for Solaris data hosts


Related Items
  • Sun Storage 6130 Array
Related Categories
  • GCS>Sun Microsystems>Storage - Disk>Modular Disk - 6xxx Arrays

Previously Published As
210985


Description
The following topics outline the setup procedure for the Sun StorEdge[TM] 6130:


1) Verify/Create a Profile
2) Verify/Create array Host Group
3) Verify/Create array Hosts
4) Determine the WWNs of your initiators
5) Verify/Create array Initiators
6) Verify/Create Pools
7) Verify/Create Volumes
8) Map the Volumes
9) Get host system to recognize new LUNs



Steps to Follow
Sun StorEdge[TM] 6130: Overview of configuration via sscs for Solaris data hosts.

Setting up the Sun StorEdge 6130

Step 1: Verify/Create a Profile.

The Sun StorEdge 6130 array provides several storage profiles, listed below, that meet most storage configuration requirements. If the default storage profile does not meet your performance needs, you can choose one of several other predefined profiles, or you can create a custom profile.

# ./sscs list -a storage-name-6130 profile 
Profile: Oracle_OLTP_HA
Profile: Oracle_DSS
Profile: High_Performance_Computing
Profile: Random_1
Profile: Sequential
Profile: Sybase_OLTP_HA
Profile: Sybase_DSS
Profile: Mail_Spooling
Profile: Oracle_OLTP
Profile: Sybase_OLTP
Profile: Default
Profile: NFS_Mirroring
Profile: NFS_Striping
Profile: High_Capacity_Computing
** Options ** :
-a array name
-r RAID level
-s segment size
-h readahead
-n number of disks
-D disk type
-d description
# ./sscs create -a storage-name-6130 -r 5 -s 32K -h off -n 4 -D FC -d "profile with 32k and raid 5" profile profile-32k-r5
# ./sscs list -a storage-name-6130 profile   
Profile: Oracle_OLTP_HA
Profile: Oracle_DSS
Profile: High_Performance_Computing
Profile: Random_1
Profile: Sequential
Profile: Sybase_OLTP_HA
Profile: Sybase_DSS
Profile: Mail_Spooling
Profile: Oracle_OLTP
Profile: Sybase_OLTP
Profile: Default
Profile: NFS_Mirroring
Profile: NFS_Striping
Profile: profile-32k-r5
Profile: High_Capacity_Computing
# ./sscs list -a storage-name-6130 profile profile-32k-r5
Profile: profile-32k-r5
Description:              profile with 32k and raid 5
RAID Level:               5
Segment Size:             32K
Readahead:                Off
Optimal Number of Disks:  4
Disk Type:                FC
Profile in Use:           No
Factory Profile:          No

Step 2: Verify/Create array Host Group.

Host groups enable you to designate a collection of hosts that will share access to a volume. You can map a volume either to a host group or to an individual host, assigning it a logical unit number (LUN).

Create the host group: sunfire

** Options ** :

create <-a|--array > hostgroup

# ./sscs create -a storage-name-6130 hostgroup sunfire
# ./sscs list -a storage-name-6130 hostgroup     

Host Group: sunfire

Step 3: Verify/Create array Hosts.

Create the array host: V1280

** Options ** :

create <-a|--array >[-g|--hostgroup ] host

# ./sscs create -a storage-name-6130 host V1280
# ./sscs list -a storage-name-6130 host

Host: V1280

Step 4: Determine the WWNs of your initiators.

Get information about the host's HBAs by using luxadm -e:

# luxadm -e port 
Found path to 2 HBA ports
/devices/ssm@0,0/pci@18,700000/SUNW,qlc@1/fp@0,0:devctl            CONNECTED
/devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl            CONNECTED
# luxadm -e dump_map /devices/ssm@0,0/pci@18,700000/SUNW,qlc@1/fp@0,0:devctl
Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type
0    10000   0         210000e08b18e328 200000e08b18e328 0x1f (Unknown Type)
1    10200   0         50020f23000102d6 50020f20000102d6 0x0  (Disk device)
2    103e4   100e4     50020f2300008c09 50020f2000008c09 0x0  (Disk device)
3    10100   0         210000e08b18e429 200000e08b18e429 0x1f (Unknown Type,Host Bus Adapter)
# luxadm -e dump_map /devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl
Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type
0    10000   0         210000e08b185c2a 200000e08b185c2a 0x1f (Unknown Type)
1    10200   0         50020f230000ffd9 50020f200000ffd9 0x0  (Disk device)
2    103e8   100e8     50020f2300008eec 50020f2000008eec 0x0  (Disk device)
3    10100   0         210000e08b18b227 200000e08b18b227 0x1f (Unknown Type,Host Bus Adapter)

Note: A port WWN is unique to an individual port, whereas the node WWN is unique to the node. (A node in network terminology is a device -- a server or storage device.)

Note: The output displays all of the HBA ports that can be seen in the SAN environment from that one HBA. The local HBA is identified by the text "Host Bus Adapter" in the Type field, while the remote HBA shows only "Unknown Type".

Note: For additional methods of determining the WWNs for HBAs, refer to Technical Instruction <Document: 1003497.1> .
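
For example, on Solaris 10 and later hosts where the fcinfo utility is available, the Port WWNs can also be read directly from the HBA port listing (shown here as an optional alternative to luxadm):

# fcinfo hba-port

Each port is reported with its "HBA Port WWN" and "Node WWN"; the Port WWN values are the ones used to create initiators in Step 5.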

Step 5: Verify/Create array Initiators.

To make storage available to a data host or host group, you create an initiator and associate it with a volume. An initiator is an FC port that is identified by a unique port worldwide name (Port WWN) of a host bus adapter (HBA) installed on the data host.

Create an initiator for each HBA:

# ./sscs create -a storage-name-6130 -h V1280 -w 210000e08b18e429 -o solaris initiator V1280-qlc@1
# ./sscs create -a storage-name-6130 -h V1280 -w 210000e08b18b227 -o solaris initiator V1280-qlc@2
# ./sscs list -a storage-name-6130 initiator

Initiator: V1280-qlc@1
Initiator: V1280-qlc@2

Note: The "-o solaris" option specifies that the initiator is for Solaris with Traffic Manager. To specify Solaris using Veritas DMP for path management, use "-o solaris_dmp".
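
For example, the corresponding initiator for a host whose paths are managed by Veritas DMP would be created as follows (the WWN shown is the first Port WWN from Step 4, reused purely for illustration):

# ./sscs create -a storage-name-6130 -h V1280 -w 210000e08b18e429 -o solaris_dmp initiator V1280-qlc@1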

Step 6: Verify/Create Pools.

A storage pool is a collection of volumes with the same configuration.

# ./sscs list -a storage-name-6130 pool
Pool: Default  Profile: Default  Configured Capacity: 0.000 MB
** Options ** :

create <-a|--array > <-p|--profile > [-d|--description ] pool
# ./sscs create -a storage-name-6130 -p profile-32k-r5 -d "Raid 5 with 32K " pool pool-32k-r5
# ./sscs list -a storage-name-6130 pool

Pool: pool-32k-r5 Profile: profile-32k-r5 Configured Capacity: 0.000 MB

# ./sscs list -a storage-name-6130 pool pool-32k-r5
Pool: pool-32k-r5
Description:          Raid 5 with 32K
Profile:              profile-32k-r5
Total Capacity:       392.197 GB
Configured Capacity:  0.000 MB
Available Capacity:   392.197 GB

Step 7: Verify/Create Volume.

A volume is created from virtual disks that are part of a storage pool. Based on your selections, the array automatically allocates storage from different disks to meet your volume configuration requirements.

# ./sscs list -a storage-name-6130 volume 
# ./sscs create -a storage-name-6130 -p pool-32k-r5 -s 10GB volume vol0-32k-r5
# ./sscs list -a storage-name-6130 jobs
Job ID: VOL:0B60715230F6  Status: In progress
# ./sscs list -a storage-name-6130 volume
Volume: vol0-32k-r5  Type: Standard  Pool: pool-32k-r5  Profile: profile-32k-r5
# ./sscs list -a storage-name-6130 volume vol0-32k-r5
Volume: vol0-32k-r5
Type:                            Standard
WWN:                             60:0A:0B:80:00:13:B9:8B:00:00:0B:60:71:52:30:F6
Pool:                            pool-32k-r5
Profile:                         profile-32k-r5
Virtual Disk:                    1
Size:                            10.000 GB
Status:                          Online
Action:                          Ready
Condition:                       Optimal
Read Only:                       No
Controller:                      A
Preferred Controller:            A
Modification Priority:           High
Write Cache:                     Enabled
Write Cache with Mirroring:      Enabled
Write Cache without Batteries:   Disabled
Flush Cache After:               10 Sec
Disk Scrubbing:                  Enabled
Disk Scrubbing with Redundancy:  Disabled

Additional info:

As the output above shows, the volume was created on Virtual Disk '1'.
To list the physical disks that make up Virtual Disk '1', use the following command:

# ./sscs list -a storage-name-6130 vdisk 1

Step 8: Map the Volume.

# ./sscs list -a storage-name-6130 host
Host: V1280
# ./sscs map -a storage-name-6130 -h V1280 -l 1 volume vol0-32k-r5
# ./sscs list -a storage-name-6130 volume vol0-32k-r5
Volume: vol0-32k-r5
Type:                            Standard
WWN:                             60:0A:0B:80:00:13:B9:8B:00:00:0B:60:71:52:30:F6
Pool:                            pool-32k-r5
Profile:                         profile-32k-r5
Virtual Disk:                    1
Size:                            10.000 GB
Status:                          Online
Action:                          Ready
Condition:                       Optimal
Read Only:                       No
Controller:                      A
Preferred Controller:            A
Modification Priority:           High
Write Cache:                     Enabled
Write Cache with Mirroring:      Enabled
Write Cache without Batteries:   Disabled
Flush Cache After:               10 Sec
Disk Scrubbing:                  Enabled
Disk Scrubbing with Redundancy:  Disabled
Associations:
Host: V1280  LUN: 1  Initiator: V1280-qlc@1  WWN: 21:00:00:E0:8B:18:e4:29
Host: V1280  LUN: 1  Initiator: V1280-qlc@2  WWN: 21:00:00:E0:8B:18:b2:27
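
If the volume should instead be shared by every host in the 'sunfire' host group created in Step 2, it can be mapped to the group rather than to an individual host. This procedure does not show that invocation; a sketch, assuming the map subcommand accepts a -g|--hostgroup option analogous to the one documented above for create host, would be:

# ./sscs map -a storage-name-6130 -g sunfire -l 1 volume vol0-32k-r5

Verify the supported options with the sscs map usage output or the Sun StorEdge 6130 CLI documentation before using this form.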

Step 9: Get the host system to recognize the new LUN.

9a) Verify that the hosts are connected.

# luxadm -e port

Found path to 2 HBA ports

/devices/ssm@0,0/pci@18,700000/SUNW,qlc@1/fp@0,0:devctl            CONNECTED
/devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl            CONNECTED

For SAN-attached storage, go to step 9b; for directly attached arrays, go to step 9c.

9b) For SAN-attached storage:

To display the state of the devices attached to the HBAs, use the cfgadm command:

# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c3                             fc-fabric    connected    unconfigured   unknown
c3::210000e08b18e328           unknown      connected    unconfigured   unknown
c3::50020f2300008c09           disk         connected    unconfigured   unknown
c3::50020f23000102d6           disk         connected    unconfigured   unknown
c4                             fc-fabric    connected    unconfigured   unknown
c4::210000e08b185c2a           unknown      connected    unconfigured   unknown
c4::50020f2300008eec           disk         connected    unconfigured   unknown
c4::50020f230000ffd9           disk         connected    unconfigured   unknown

After you identify the devices in the environment by using the cfgadm command, configure them to make the storage available to Solaris, as shown below:

# cfgadm -c configure c3 c4
# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c3                             fc-fabric    connected    configured     unknown
c3::210000e08b18e328           unknown      connected    unconfigured   unknown
c3::50020f2300008c09           disk         connected    configured     unknown
c3::50020f23000102d6           disk         connected    configured     unknown
c4                             fc-fabric    connected    configured     unknown
c4::210000e08b185c2a           unknown      connected    unconfigured   unknown
c4::50020f2300008eec           disk         connected    configured     unknown
c4::50020f230000ffd9           disk         connected    configured     unknown

All of the volumes should now show up in the output of the format command.
You may also use the following command to display all of the 6130 volumes:

# cfgadm -al -o show_FCP_dev
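
As a further optional check (not part of the original procedure), luxadm probe lists the logical device paths of the fibre channel disks now visible to Solaris; the newly configured 6130 LUNs should appear among them:

# luxadm probe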

9c) For directly attached arrays:

Use devfsadm to make the Sun StorEdge 6130 volumes available to the Solaris host:

# devfsadm

You may use the format command to verify that the disks have been added.
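
For a quick, non-interactive check (a convenience suggestion rather than part of the original procedure), feed format an empty selection; the new 6130 volumes should appear as additional entries in the available disk list:

# format < /dev/null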



Product
Sun StorageTek 6130 Array
Sun StorageTek 6130 Array (SATA)

Keywords: 6130, treefrog, configure, configuration, setup, install, installation, sscs command
Previously Published As
81921

Change History
Date: 2007-01-24
User Name: 97961
Action: Approved
Comment: Publishing. No further edits required.

Attachments
This solution has no attachment