Sun Microsystems, Inc.  Sun System Handbook - ISO 3.4 June 2011 Internal/Partner Edition

Asset ID: 1-77-1001061.1
Update Date: 2011-02-22
Keywords:

Solution Type: Sun Alert

Solution 1001061.1: Performance Degradation Reported in Controller Firmware Releases 4.1x on Sun StorEdge 3310/351x Arrays for All RAID Types and Certain Patterns of I/O


Related Items
  • Sun Storage 3510 FC Array
  • Sun Storage 3310 Array
  • Sun Storage 3511 SATA Array

Related Categories
  • GCS>Sun Microsystems>Sun Alert>Criteria Category>Availability
  • GCS>Sun Microsystems>Sun Alert>Release Phase>Resolved

Previously Published As
201388


Product
Sun StorageTek 3310 SCSI Array
Sun StorageTek 3510 FC Array
Sun StorageTek 3511 SATA Array

Bug Id
<SUNBUG: 6246969>, <SUNBUG: 6341196>

Date of Workaround Release
12-JAN-2006

Date of Resolved Release
14-MAR-2007

Performance Degradation Reported in Controller Firmware Releases 4.1x on Sun StorEdge 3310/351x Arrays for All RAID Types and Certain Patterns of I/O

Impact

On Sun StorEdge 3310/351x Arrays, performance degradation has been reported for all RAID types and for certain patterns of I/O, due to the enhancement of the data integrity checking mechanisms in controller firmware release 4.1x.

Note: Depending on the application and usage, some systems may not experience a performance impact.


Contributing Factors

This issue can occur on the following platforms:

  • Sun StorEdge 3310 array with firmware revision 4.13b or higher
  • Sun StorEdge 3510 array with firmware revision 4.11i or higher
  • Sun StorEdge 3511 array with firmware revision 4.11i or higher

Controller firmware release 4.1x adds a data integrity enhancement that performs extended cache data error checking. This check adds overhead to every command, which introduces latency for both read and write commands in the controller firmware.
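As a rough illustration of why a fixed per-command overhead penalizes small-block, shallow-queue I/O the most and nearly disappears for large, deeply queued transfers, the following sketch models throughput as block size divided by (transfer time plus amortized per-command overhead). The overhead and bandwidth figures in it are assumptions chosen for demonstration only; they are not measured values for these arrays.

# Illustrative model only: fixed per-command overhead amortized over
# the transfer time of each request.  The figures below are assumed
# for demonstration, not measurements of the SE33x0/35x0 arrays.

def throughput_mb_s(block_kb, overhead_ms, queue_depth, bandwidth_mb_s=100.0):
    """Rough throughput when every command pays a fixed overhead.

    Transfer time scales with block size; commands overlapping up to
    the queue depth hide part of the per-command overhead.
    """
    transfer_ms = block_kb / 1024.0 / bandwidth_mb_s * 1000.0
    effective_ms = transfer_ms + overhead_ms / queue_depth  # crude overlap model
    return (block_kb / 1024.0) / (effective_ms / 1000.0)

for block_kb in (1, 4, 16, 64, 256, 1024):
    for depth in (1, 4, 32):
        before = throughput_mb_s(block_kb, overhead_ms=0.2, queue_depth=depth)
        after = throughput_mb_s(block_kb, overhead_ms=0.4, queue_depth=depth)
        change = (after - before) / before * 100.0
        print(f"{block_kb:5d}K blocks, {depth:2d} threads: {change:6.1f}% change")

In this model, small blocks at a queue depth of 1 show the largest relative drop, while large transfers at high queue depths change very little, which matches the general shape of the test results later in this document.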


Symptoms

SE3310, SE3510, and SE3511 arrays running revision 4.1x of the controller firmware may experience performance degradation of varying degrees.


Workaround

Depending on your specific configuration or environment, one option is to tune applications and host driver stacks to issue the largest reads possible and to maintain the highest possible queue depths. (There may be other tuning options available for your specific configuration.)
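For example, an application-level read path can be arranged to favor large requests and to keep many requests outstanding. The sketch below is a hypothetical illustration of that idea only, not a Sun-provided tool; the device path, block size, thread count, and total size are placeholders to adapt to your configuration.

# Hypothetical illustration of the workaround: issue large reads and
# keep many I/Os outstanding.  DEVICE, BLOCK_SIZE, QUEUE_DEPTH and
# TOTAL_BYTES are placeholders, not recommended values.
import concurrent.futures
import os

DEVICE = "/dev/rdsk/c1t0d0s2"   # placeholder path to the LUN or file
BLOCK_SIZE = 1024 * 1024        # favor the largest practical read size
QUEUE_DEPTH = 16                # number of reads kept outstanding
TOTAL_BYTES = 256 * 1024 * 1024

def read_chunk(offset):
    """Read one large chunk at the given offset."""
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        return len(os.pread(fd, BLOCK_SIZE, offset))
    finally:
        os.close(fd)

offsets = range(0, TOTAL_BYTES, BLOCK_SIZE)
with concurrent.futures.ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
    total = sum(pool.map(read_chunk, offsets))

print(f"Read {total} bytes using {BLOCK_SIZE // 1024}K requests, "
      f"{QUEUE_DEPTH} concurrent readers")

Host driver stacks generally expose their own maximum transfer size and per-LUN queue depth settings as well; consult the documentation for your platform and HBA before changing them.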


Resolution

Please refer to the "Sun StorEdge 3510 FC and Sun StorEdge 3511 SATA Array Release Notes" at:

http://onesearch.sun.com/search/docs/index.jsp?col=docs_en&locale=en&qt=817-6597

and the section:

"Performance Implications of Migrating to Firmware Version 4.1x"

to determine whether this issue may affect your specific system configuration or firmware upgrade, and what options may be available.


This Sun Alert notification is being provided to you on an "AS IS" basis. This Sun Alert notification may contain information provided by third parties. The issues described in this Sun Alert notification may or may not impact your system(s). Sun makes no representations, warranties, or guarantees as to the information contained herein. ANY AND ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE HEREBY DISCLAIMED. BY ACCESSING THIS DOCUMENT YOU ACKNOWLEDGE THAT SUN SHALL IN NO EVENT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES THAT ARISE OUT OF YOUR USE OR FAILURE TO USE THE INFORMATION CONTAINED HEREIN. This Sun Alert notification contains Sun proprietary and confidential information. It is being provided to you pursuant to the provisions of your agreement to purchase services from Sun, or, if you do not have such an agreement, the Sun.com Terms of Use. This Sun Alert notification may only be used for the purposes contemplated by these agreements.

Copyright 2000-2008 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, CA 95054 U.S.A. All rights reserved.


Modification History
16-Jun-2008: Updated Resolution section for correct URL to Support Docs
26-Jan-2006: Updated Contributing Factors section
01-Aug-2006: Updated Relief/Workaround section
14-Mar-2007: Updated Relief/Workaround and Resolution sections


Previously Published As
102127
Internal Comments


14-Mar-2007: After thorough analysis, it has been determined there will be no firmware changes made to address this issue.



The following Sun Alerts have information about other known issues for the 3000 series products:

  • 102011 - Sun StorEdge 33x0/3510 Arrays May Report a Higher Incidence of Drive Failures With Firmware 4.1x SMART Feature Enabled
  • 102067 - Sun Cluster 3.x Nodes May Panic Upon Controller Failure/Replacement Within Sun StorEdge 3510/3511 Arrays
  • 102086 - Failed Controller Condition May Cause Data Integrity Issues
  • 102098 - Insufficient Information for Recovery From Double Drive Failure for Sun StorEdge 33x0/35xx Arrays
  • 102126 - Recovery Behavior From Fatal Drive Failure May Lead to Data Integrity Issues
  • 102127 - Performance Degradation Reported in Controller Firmware Releases 4.1x on Sun StorEdge 3310/351x Arrays for All RAID Types and Certain Patterns of I/O
  • 102128 - Data Inconsistencies May Occur When Persistent SCSI Parity Errors are Generated Between the Host and the SE33x0 Array
  • 102129 - Disks May be Marked as Bad Without Explanation After "Drive Failure," "Media Scan Failed" or "Clone Failed" Events

Note: One or more of the above Sun Alerts may require a Sun Spectrum Support Contract to log in to a SunSolve Online account.



--------------------------------------------------------------------------



This bug was initially filed for RAID 1 only, but the performance impact has been seen for ALL LDs regardless of the RAID type being used.

BugID 6341196 was filed to address the 4.1x release notes (817-6597-15), which do not mention the performance degradation of 4.1x vs. 3.2x.

A latency was introduced into the 4.1x controller firmware code by the added overhead of data integrity enhancements which perform extended cache data error checking for the integrity and status of data. This enhancement adds command overhead, and therefore latency, to both read and write commands in the controller firmware.

The 4.1x release adds a routine to check the accuracy of the buffers after each command, which contributes to the additional overhead.



If a customer experiences severe performance degradation, please escalate to Sun backline technical support.
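Before escalating, it can help to quantify the regression with a repeatable host-side measurement on the affected LUN. The sketch below is an illustrative example only, not a Sun support tool; the device path, block size, and sample count are placeholders. Single-threaded small-block reads are the pattern most sensitive to per-command overhead, so running the same probe before and after the firmware change highlights the difference.

# Illustrative latency probe: time single-threaded small-block reads,
# the pattern most sensitive to per-command firmware overhead.
# DEVICE, BLOCK_SIZE and SAMPLES are placeholders.
import os
import time

DEVICE = "/dev/rdsk/c1t0d0s2"   # placeholder path to the LUN under test
BLOCK_SIZE = 4 * 1024           # small block, worst case for the overhead
SAMPLES = 1000

fd = os.open(DEVICE, os.O_RDONLY)
try:
    start = time.perf_counter()
    for i in range(SAMPLES):
        os.pread(fd, BLOCK_SIZE, i * BLOCK_SIZE)
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)

print(f"average latency per {BLOCK_SIZE // 1024}K read: "
      f"{elapsed / SAMPLES * 1000:.3f} ms")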



Updated information regarding the following performance statistics can be found at http://pts-storage.west/products/SE33xx/SunAlert102127PerformaceData.html



SE3510 comparison testing was conducted between revisions 3.27R and 4.13C of the controller firmware in the following configuration:

Cache Optimization Mode - Sequential

  • dual controller 3510 array
  • 2 Logical Drives (6x36 GB drives per LD)
  • each LD assigned to one controller
  • stripe size: 128KB
  • optimization mode: sequential
  • All other settings were the 4.13 default values
  • RAID 1 and RAID 5

With various types of I/O:

  • read vs. write
  • random vs. sequential
  • Number of I/O threads (queues): 1, 2, 4, 8, 16, 32
  • Host I/O block sizes: 1K, 4K, 16K, 64K, 256K and 1M

Be advised that these results were generated in a static lab environment and might not be indicative of "real world" application utilization.



Test results are as follows:



Sequential Read I/O

RAID 5:

- Performance degradation (25-45%) seen with small block I/Os (1-4K) for all ranges of I/O threading (worst performance seen for low numbers of I/O threads).
- Performance degradation (25-30%) seen with larger block I/Os (greater than 16K) and a low number of I/O threads (less than 4).
- Lower degradation (less than 10%) seen with large block I/Os (greater than 16K) and a greater number of I/O threads (4 or more).

RAID 1:

- Performance degradation (10-50%) seen with small block I/Os (1-4K) for all ranges of I/O threading (worst performance seen for low numbers of I/O threads).
- Performance improvement (5-35%) seen with large block I/Os (16K or greater) and a greater number of I/O threads (4 or more).
- Performance degradation (20-40%) seen for large block I/Os (less than 16K) with a small number of I/O threads (less than 4).

Random Read I/O

RAID 5:

- Low performance degradation (less than 9%) for all block sizes and various numbers of I/O threads.

RAID 1:

- Low performance degradation (0-10%) for block I/Os less than 64K in size, and various numbers of I/O threads.
- Performance degradation (10-20%) seen with large block I/Os (64K and greater), with some correlation to the number of I/O threads (i.e. the worst performance (20%) seen with 32 threads).

Sequential Write I/O

RAID 5:

- Performance improvement (2-150%) seen for smaller block I/Os (4K or less), with very high improvements (150-275%) for all multi-threaded I/O (2-32 I/O threads).
- Some performance degradation (0-20%) seen for single-threaded I/O and block I/Os (16K or greater) in size.

RAID 1:

- Performance improvement seen for smaller block I/Os (4K or less), with very high improvement (greater than 100%) if the I/Os are multi-threaded (2-32 I/O threads).
- Performance degradation (~10-20%) seen for I/Os with larger block sizes (16K or more) for all ranges of I/O threading.

Random Write I/O

RAID 5:

- Performance improvement (5-50%) seen for I/Os which are multi-threaded (4 or greater), with most block sizes (16K or greater).
- Performance improvement (~0-30%) seen for multi-threaded I/Os (8 or greater) and smaller block size I/O (4K or less).
- Performance degradation (5-20%) seen for lower multi-threaded I/Os (less than 4 threads).

RAID 1:

- Performance improvement (5-125%) seen for I/Os which are multi-threaded (8 or greater threads) with larger block sizes (16K or greater).
- Performance degradation (5-20%) seen for lower multi-threaded I/O (less than 4 threads) and all block sizes (1K or greater).



 



SE3510 comparison testing was conducted between revisions 3.27R and 4.13C of the controller firmware in the following configuration:

Cache Optimization Mode - Random

  • dual controller 3510 array
  • 2 Logical Drives (6x36 GB drives per LD)
  • each LD assigned to one controller
  • stripe size: 32KB
  • optimization mode: random
  • All other settings were the 4.13 default values (with Media scan off)
  • RAID 1 and RAID 5

With various types of I/O:

  • read vs. write
  • random vs. sequential
  • Host I/O block sizes: 1K, 4K, 16K, 64K, 256K and 1M
  • Number of I/O threads (queues): 1, 2, 4, 8, 16, 32



Sequential Read I/O:

RAID 5:

- Overall performance degradation (13-50%) seen with various block size I/Os (1K and above) for all ranges of I/O threads.
- Worst performance (33-50%) seen with various block I/Os (16K and below) with all ranges of I/O threads.
- Lower degradation (13-20%) seen with large block I/Os (256K and above) with a greater number of I/O threads (4 or more), but performance degradation (15-32%) seen when the number of I/O threads is lower (2 and below).

RAID 1:

- Performance degradation (35-52%) seen with small block I/Os (1-4K) for all ranges of I/O threads.
- Performance degradation (12-47%) seen with large block I/Os (16K or above) for all ranges of I/O threads.

Random Read I/O:

RAID 5:

- Low performance degradation (less than 9%) seen for various block size I/Os (64K and less) with all ranges of I/O threads.
- Performance improvement (15-172%) seen for larger block size I/Os (256K and above) with all ranges of I/O threads. Large improvement (more than 100%) seen for 256K block size, multi-threaded I/Os (2 or more).

RAID 1:

- Low performance degradation (1-13%) seen for smaller block I/Os (16K or less) with various numbers of I/O threads.
- Performance degradation (0-35%) seen for block I/Os (64K and 1M) with various numbers of I/O threads, but slight improvement (6.6%) seen for 1M block, single-threaded I/Os.
- Performance improvement (5-38%) seen for 256K block I/Os with various numbers of I/O threads, but slight degradation (2%) with 16-thread I/Os.

Sequential Write I/O:

RAID 5:

- Performance improvement (11-182%) seen for smaller block I/Os (1K), with very high improvements (117-182%) seen for multi-threaded I/O (16 or more I/O threads), with the exception of performance degradation (9%) seen when I/O is single threaded.
- Performance degradation (9-36%) seen for various block I/Os (4K, 16K and 64K) with all ranges of I/O threads, except slight improvement (2%) on 64K block I/O with 32 threads.
- Performance degradation (4-36%) seen with higher block size I/Os (256K or above) with low multi-threaded I/O (4 or fewer threads), and slight improvement (2-10%) with multi-threaded I/Os (8 or more).

RAID 1:

- Performance improvement seen for smaller block I/Os (1K), with very high improvement (greater than 100%) seen if the I/Os are multi-threaded (16-32 I/O threads), but slight performance degradation (9%) with 1K block size, single-threaded I/O.
- Performance degradation (10-39%) seen for I/Os with various block sizes (4K or more) for all ranges of I/O threading.

Random Write I/O:

RAID 5:

- Performance improvement (5-72%) seen for I/Os which are multi-threaded (4 or greater) with most block sizes (4K or greater). Performance improvement (8-42%) seen with 1K block size, multi-threaded (8 or greater) I/Os.
- Performance degradation (4-28%) seen for lower threaded I/Os (2 or less) with most block sizes (1K to 1M).

RAID 1:

- Performance improvement (10-102%) seen for multi-threaded I/Os (8 or greater threads) with various block sizes (4K, 16K and 64K).
- Performance degradation (6-41%) seen for various block sizes (1K, 256K and 1M) with all ranges of I/O threads, with the exception of slight improvement (3.24%) seen with 1K block size, multi-threaded I/O (32 threads).
- Performance degradation (13-41%) seen for lower multi-threaded I/O (4 or fewer threads) with all block sizes (1K or greater), with the exception of slight improvement (3.4%) seen with 64K block size, 4-thread I/Os.



 



Testing for the SE3310

SE3310 comparison testing was conducted between revisions 3.25S and 4.13B of the controller firmware in the following configuration:

Cache Optimization Mode - Sequential

  • dual controller 3310 array
  • 2 Logical Drives (6x72 GB drives per LD)
  • each LD assigned to one controller
  • stripe size: 128KB
  • optimization mode: sequential
  • All other settings were the 4.13 default values
  • RAID 1 and RAID 5

With various types of I/O:

  • read vs. write
  • random vs. sequential
  • Number of I/O threads (queues): 1, 2, 4, 8, 16, 32
  • Host I/O block sizes: 1K, 4K, 16K, 64K, 256K and 1M



Sequential Read I/O

RAID 1:

- Performance degradation (20-40%) seen with small block I/Os (1-16K) and all ranges of I/O threading.
- Minimal degradation (0-2%) in performance seen with large block I/Os (64K or greater) and all ranges of I/O threading.

Random Read I/O

RAID 5:

- Low performance degradation (less than 9%) seen for all block sizes and all ranges of I/O threads.

Sequential Write I/O

RAID 5:

- Performance degradation (4-33%) seen for all block sizes with low multi-threading (less than 4 threads).
- Performance degradation (12-25%) seen for larger block I/Os (16K-256K) regardless of multi-threading.
- Performance improvement (3-80%) seen for smaller block I/Os (4K or less), or very large block I/Os (greater than 256K), when multi-threading I/O (greater than 4 threads).
- Some performance improvement (0-13%) seen for large block sizes (256K or greater) with high multi-threading (greater than 4 threads).

RAID 1:

- Performance degradation (0-17%) seen for all block I/O sizes with low multi-threading (less than 4 threads), and medium block I/Os (16K) with less than 32 threads.
- Performance improvement (28-90%) seen for small block I/Os (4K or less) with higher multi-threading (4-32 threads).
- Slight performance improvement (4-6%) for large block writes (64K or greater) with higher multi-threading (4-32 threads) and for all large block writes (greater than 256K) with multi-threading (2 or more threads).

Random Write I/O

RAID 5:

- Performance degradation (1-38%) seen for small block I/O sizes (4K or less), with less penalty (less than 1%) for high multi-threading (greater than 16 threads) and greater penalty (32-38%) for no multi-threading.
- Performance degradation (18-32%) seen for larger block size I/Os (16-64K) with no multi-threading.
- Minor degradation (1-4%) seen for larger I/O block sizes (16-64K) if multi-threading (greater than 2 threads).
- Slight performance improvement (7-8%) seen for very large block sizes (256K or greater) and all ranges of I/O threading.

RAID 1:

- Performance degradation (8-40%) for small block I/O sizes (4K or less), and for larger block sizes (16-64K) if no multi-threading.
- Performance improvement (0-22%) for larger block sizes (16K or greater) with multi-threading (2 or more I/O threads), and for small block sizes (4K or greater) with high multi-threading (greater than 16 threads).



 



SE3310 comparison testing was conducted between revisions 3.25S and 4.13B of the controller firmware in the following configuration:

Cache Optimization Mode - Random

  • dual controller 3310 array
  • 2 Logical Drives (6x72 GB drives per LD)
  • each LD assigned to one controller
  • stripe size: 32KB
  • optimization mode: random
  • All other settings were the 4.13 default values (with Media scan off)
  • RAID 1 and RAID 5

With various types of I/O:

  • read vs. write
  • random vs. sequential
  • Number of I/O threads (queues): 1, 2, 4, 8, 16, 32
  • Host I/O block sizes: 1K, 4K, 16K, 64K, 256K and 1M



Sequential Read I/O:

RAID 5:

- Performance degradation (15-42%) seen for all ranges of I/O, with high performance impact (28-42%) on I/Os with small blocks (1K-16K).

RAID 1:

- Performance degradation (17-45%) seen for all block sizes and all ranges of I/O threads, with high performance impact (29-45%) for small block I/Os (16K or less).

Random Read I/O:

RAID 5:

- Low performance degradation (less than 6%) seen for all ranges of I/O threads with various block size I/Os (1K-64K).
- Performance improvement (34-82%) seen for all ranges of I/O threads with various block size I/Os (256K-1M).

RAID 1:

- Performance degradation (1-22%) seen for the majority of block size I/Os (64K or less) with all ranges of I/O threads.
- Performance improvement (12-54%) seen for large block size I/Os (256K or above) with all ranges of I/O threads, with the exception of slight performance degradation (6%) with 256K block size, single-threaded I/O.

Sequential Write I/O:

RAID 5:

- Performance degradation (0-33%) seen for various block size I/Os (4K-64K) with all ranges of I/O threads.
- Performance improvement (11-50%) seen for 1K block size with multi-threaded I/O (4 or greater threads), and performance degradation (1-7%) seen with low multi-threaded I/O (less than 4 threads).
- Slight performance improvement (1-7%) seen for larger block I/Os (256K-1M) with multi-threaded I/O (greater than 4 threads), and performance degradation (0-30%) seen with low multi-threaded I/O (less than 4 threads).

RAID 1:

- Performance degradation (5-39%) seen for various block I/O sizes (4K or greater) with all ranges of I/O threads.
- Performance improvement (12-50%) seen for 1K block size with multi-threaded I/O (4-32 threads), and performance degradation (3-8%) seen with low multi-threaded I/O (2 or less).

Random Write I/O:

RAID 5:

- Performance degradation (0-29%) seen for various block I/O sizes (1K-256K) with lower multi-threaded I/O (4 or fewer threads); slight performance degradation (7%) also seen with 1M block I/Os which are single threaded.
- Performance improvement (4-60%) seen for large block I/O sizes (16K or greater) with multi-threaded I/O (greater than 4 threads). Performance improvement (8-25%) seen for 4K block, multi-threaded I/O (16 and above), and slight improvement (7%) seen for 1K block, multi-threaded I/Os (32 threads).

RAID 1:

- Performance degradation (30-36%) seen for 1K block with all ranges of I/O threads, and performance degradation (5-20%) seen for large 1M block with all ranges of I/O threads (1-32 threads).
- Performance degradation (4-31%) for various block size I/Os (4K-256K) with low multi-threaded I/O (4 or fewer threads).
- Performance improvement (0-20%) for various block size I/Os (4K-64K) with high multi-threaded I/O (16 or above).



Note: No testing for the SE3511 array has been conducted, so there is no data available to include in this alert. Since the 4.1x code base is shared with this product, it is believed there will be a performance impact seen with the SE3511 array as well.


Internal Contributor/submitter
[email protected]

Internal Eng Business Unit Group
NWS (Network Storage)

Internal Eng Responsible Engineer
[email protected]

Internal Services Knowledge Engineer
[email protected]

Internal Escalation ID
1-12418087, 1-11504571, 1-12096081, 1-13787645, 1-12760796, 1-13463921, 1-13695929

Internal Sun Alert Kasp Legacy ID
102127

Internal Sun Alert & FAB Admin Info
Critical Category: Availability ==> Pervasive
Significant Change Date: 2006-01-12, 2007-03-14
Avoidance: Workaround
Responsible Manager: null
Original Admin Info: [WF 14-Mar-2007, dave m: no more updates, this is closed with no Resolution except the Release Notes, no Eng fix]
Engineering Notification Interval: 0
[WF 12-Sep-2006, Dave M: engineering will not provide a fix for this issue, but wants to leave it open going forward]
[WF 26-Jan-2006, Dave M: updated CF per Eng and customer]
[WF 12-Jan-2006, Dave M: ready for release, will release today]
[WF 05-Jan-2006, Dave M: 24hr review completed, Chessin changes made, all docs in this series on hold for Exec approval pending 1/12]
[WF 04-Jan-2006, Dave M: final edits before sending for final review]
[WF 02-Jan-2006, Dave M: draft created]
Product_uuid
3db30178-43d7-4d85-8bbe-551c33040f0d|Sun StorageTek 3310 SCSI Array
58553d0e-11f4-11d7-9b05-ad24fcfd42fa|Sun StorageTek 3510 FC Array
9fdbb196-73a6-11d8-9e3a-080020a9ed93|Sun StorageTek 3511 SATA Array

Attachments
This solution has no attachment