Sun Microsystems, Inc.  Sun System Handbook - ISO 3.4 June 2011 Internal/Partner Edition

Asset ID: 1-71-1011305.1
Update Date: 2010-10-28
Keywords:

Solution Type: Technical Instruction

Solution  1011305.1 :   Sun StorEdge[TM] 6920: Remote Replication Fast Start Procedure with Snapshots for Solaris[TM] Operating Systems Volumes  


Related Items
  • Sun Storage 6920 System
Related Categories
  • GCS>Sun Microsystems>Storage - Disk>Modular Disk - 6xxx Arrays

Previously Published As
215512


Description
Fast Start may be needed when the amount of data to synchronize initially is large, the network bandwidth between the Sun StorEdge[TM] 6920 systems is limited, and the amount of downtime the customer can afford is small.

This is a supplemental procedure to the one provided on page 124 of the Best Practices for the Sun StorEdge 6920 System, Version 3.0.1. It is intended to introduce the use of snapshot volumes to minimize the amount of downtime required at the primary site to make a consistent raw copy of the primary volume's data.

This procedure is centered on Solaris(TM), but it can be adapted to other operating systems that provide a method to perform raw block-level backups to a removable media device.

This will require:

  • root user access on the local and remote Solaris data hosts
  • The data hosts must be attached to removable media storage that can hold the volume(s) being replicated.
  • The replication sets must already be configured. Reference document <Document: 1005617.1>  Validating Remote Replication Set Creation for a Sun StorEdge[TM] 6920


Steps to Follow
Actions on PRIMARY Peer

1. Quiesce all I/O to Volumes from host(s)

Stop applications
Unmount the filesystems from the host(s) to verify that I/O is quiesced

This is required to obtain a consistent primary volume image for the dump to tape
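
A minimal Solaris sketch of quiescing, assuming a hypothetical mount point /primaryvol; adjust to the actual application and mount point:

## check for processes still using the filesystem before unmounting
fuser -c /primaryvol
## unmount so no further I/O reaches the primary volume
umount /primaryvol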

2. Suspend with Fast Start all volumes quiesced in the previous step

SSCS

sscs modify -c -T repset <repset_name>

example:

sscs modify -c -T repset primaryvol/1

BUI

  1. Click the Configuration Service or Common Array Manager link.
  2. Click the Logical Storage tab or menu tree.
  3. Click the Replication Sets tab or menu tree.
  4. Click the repset name.
  5. Click the Suspend button.
  6. Select Fast Start in the popup.
  7. Click OK.

This can be done for a set in ANY state, even Suspended.

3. Create Snapshots (and Snapshot Reserve Space if needed)

Reference document <Document: 1011356.1>  Validating Sun StorEdge[TM] 6920 Snapshot Creation

4. Get Snapshot WWN

Reference document <Document: 1004359.1>  Validating Sun StorEdge[TM] 6920 Snapshot Details and State

5. Map Snapshot to Primary site Data host

Reference document  <Document: 1008465.1>  Validating Volume and Initiator LUN Mapping Creation on a Sun StorEdge[TM] 6920

Actions on SECONDARY Peer

6. Suspend with Fast Start on Secondary

Reference Step 2

Actions on Primary Peer

7. Resume Replication with Normal Option

Reference Step B of  <Document: 1007158.1>  Validating Sun StorEdge[TM] 6920 Remote Replication Modification

8. Verify that all volumes involved have transitioned to the "Replicating" state

Reference <Document: 1007126.1> Validating Sun StorEdge[TM] 6920 Remote Replication Details and State

9. Suspend with Fast Start again

Reference Step 2

10. Resume I/O to all Primary volume(s)

Mount the filesystems
Restart the applications
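
A minimal sketch of resuming I/O, reusing the hypothetical mount point /primaryvol from Step 1:

## remount the filesystem, then restart the application(s) stopped in Step 1
mount /primaryvol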

11. Log in to the Local Solaris host to access the snap(s) for dd(1M)

Use the WWN from Step 4 and the LUN number from Step 5 to identify the raw device for the snapshot volume.

Reference document  <Document: 1009557.1>  Troubleshooting Fibre Channel Devices from the OS
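
A minimal sketch of locating the device, assuming MPxIO is enabled so the snapshot's WWN appears in the /dev/rdsk device name (the WWN shown is hypothetical):

## match the snapshot WWN from Step 4 against the device names
ls /dev/rdsk | grep -i 600015d0002f1a00
## luxadm probe also lists the WWNs of FC-attached logical devices
luxadm probe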

12. Prep Tape drive(s)

o Display status of tape drive
mt -f <tape device> status
o Rewind tape
mt -f <tape device> rewind
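
For example, assuming the first tape drive on the host:

mt -f /dev/rmt/0 status
mt -f /dev/rmt/0 rewind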

13. Backup data from the raw snapshot LUN to tape

dd if=/dev/rdsk/<c#t#d#s2> of=/dev/rmt/<unit number><density>[<no rewind>]
density = l, m, h, u/c (low, medium, high, ultra/compressed, respectively)
## Note: use a larger than default block size for better performance with dd
o Eject Tape
mt -f <tape device> eject
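
A minimal sketch of the backup, assuming a hypothetical snapshot device c6t600015D0002F1A00d0 and the first tape drive at high density with no rewind; bs=1024k is one example of a larger-than-default block size:

dd if=/dev/rdsk/c6t600015D0002F1A00d0s2 of=/dev/rmt/0hn bs=1024k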

14. Move the tape (or dd(1M) image) to the remote site or remote host

Actions on SECONDARY Peer

15. Map the Secondary Volume to a host Initiator

Reference document <Document: 1008465.1>  Validating Volume and Initiator LUN Mapping Creation on a Sun StorEdge[TM] 6920

You will need the volume details of the secondary volume (see Step 4). For handling the device on Solaris(TM), reference document <Document: 1009557.1>  Troubleshooting Fibre Channel Devices from the OS

16. Prepare the Secondary Volume for dd(1M) restore

  • Perform this step for Solaris volumes
  • "Zero out" the secondary volume to remove any data remaining on it

## WARNING - this will (obviously) destroy all data on the secondary volume!

dd if=/dev/zero of=/dev/rdsk/<device>s2 bs=512 count=2

Results should show:
2+0 records in
2+0 records out

17. Create a temporary VTOC on the secondary volume

  • Apply a temporary label using format(1M) to the disk that will be overwritten
## if you do not apply a temporary label, you will get the following error
## when applying the primary's VTOC to the secondary volume:
## /dev/rdsk/<device>s2: Cannot read VTOC
format
  • Select the secondary volume from the list of disks
  • When asked, "Disk not labeled. Label it now?" select:
y

Actions on PRIMARY Peer

18. Take the VTOC *from* the primary volume and apply it to the secondary volume.

prtvtoc /dev/rdsk/<device>s2 > <file>

Transfer the <file> to the remote site or secondary host
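
A minimal sketch, assuming a hypothetical primary device c2t600015D0001A00d0 and a hypothetical file name /tmp/primary.vtoc:

prtvtoc /dev/rdsk/c2t600015D0001A00d0s2 > /tmp/primary.vtoc
## copy the VTOC file to the secondary host, for example with scp(1)
scp /tmp/primary.vtoc remotehost:/tmp/primary.vtoc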

Actions on SECONDARY Peer

  • Overwrite the secondary volume's temporary VTOC with the primary VTOC
## even though the VTOC will be overwritten a third time by dd(1M),
## this step defines the correct disk geometry for dd(1M) to use
fmthard -s <vtoc file from primary volume> /dev/rdsk/<device>s2
  • Results from the fmthard command should show:
    fmthard: New volume table of contents now in place.
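
Continuing the hypothetical example above, with the VTOC file /tmp/primary.vtoc and a hypothetical secondary device c4t600015D0001B00d0:

fmthard -s /tmp/primary.vtoc /dev/rdsk/c4t600015D0001B00d0s2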

19. Restore the tape (or dd(1M) image) at the Remote Site

## Note the following:
##
## this step also sets the bits in the bitmap
##
## If dd fails while attempting to dd "out" with:
## write: I/O error
## 1+0 records in
## 1+0 records out
## The "sync needed" flag may not have transferred during step 5
##
o Insert the tape in the tape drive of the "remote" FC-attached Solaris host
o Rewind the tape
mt -f <tape device> rewind
o Restore the data from the tape to the raw secondary volume LUN
dd if=/dev/rmt/<unit number><density>[<no rewind>] of=/dev/rdsk/<c#t#d#s2>
density = l, m, h, u/c (low, medium, high, ultra/compressed, respectively)
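
A minimal sketch of the restore, assuming the first tape drive at high density with no rewind and the hypothetical secondary device c4t600015D0001B00d0; bs should match the block size used for the backup:

dd if=/dev/rmt/0hn of=/dev/rdsk/c4t600015D0001B00d0s2 bs=1024k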

20. Suspend with Fast Start all volumes involved

Reference Step 2

21. Unmap the Secondary Volume(s)

Reference document  <Document: 1005606.1>   Validating Mapping Deletion for Volumes and Initiators for a Sun StorEdge[TM] 6920

Actions on PRIMARY Peer

22. Resume Replication with Normal Option

Reference document <Document: 1007158.1> Validating Sun StorEdge[TM] 6920 Remote Replication Modification



Product
Sun StorageTek 6920 System
Sun StorageTek 6920 Maintenance Update 2
Sun StorageTek 6920 Maintenance Update 1

Internal Comments
This document contains normalized content and is managed by the Domain Lead(s) of the respective domains. To notify content owners of a knowledge gap contained in this document, and/or prior to updating this document, please contact the domain engineers that are managing this document via the “Document Feedback” alias(es) listed below:

[email protected]

The Knowledge Work Queue for this article is KNO-STO-MIDRANGE_DISK.

6920, Remote Replication, 6920 Remote Replication, Fast Start, Replication, Snap, Snapshot, Suspend, Modify, Repset, latent, normalized, audited
Previously Published As
86604

Change History
Publishing Information
Date: 2007-09-26
User Name: 97961
Action: Approved
Comment: Publishing. No further edits required.
Version: 6

Attachments
This solution has no attachment
  Copyright © 2011 Sun Microsystems, Inc.  All rights reserved.