Advanced Storage Data Protection Technology: DDP Overview

SANtricity RAID Protection
- Volume groups: RAID 0, 1, 10, 5, and 6; RAID levels can be intermixed; various group sizes
- Dynamic disk pools: minimum of 11 SSDs, maximum of 120 SSDs, up to 10 disk pools per system
(Diagram: volume groups and a disk pool, each built from SSDs and presenting host LUNs.)
SANtricity RAID Levels
- RAID 0: striped
- RAID 1 (10): mirrored and striped
- RAID 5: data disks and rotating parity (block-level striping with distributed parity)
- RAID 6 (P+Q): data disks and rotating dual parity
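As a rough illustration of the parity math behind these levels, the sketch below (Python; the segment size and drive count are illustrative, not SANtricity internals) builds a single RAID 5 style parity segment with XOR and then recovers a lost segment from the survivors. RAID 6 adds a second, independently computed parity (Q) so that any two drive failures can be tolerated.

    # Minimal sketch of RAID 5 style XOR parity over one stripe.
    import os

    SEGMENT_SIZE = 128 * 1024          # 128 KB segments, matching the later D-stripe slide
    DATA_DRIVES = 8                    # 8 data segments + 1 parity segment per stripe

    def xor_parity(segments):
        """XOR all segments together to form (or reverse) the parity segment."""
        parity = bytearray(SEGMENT_SIZE)
        for seg in segments:
            for i, b in enumerate(seg):
                parity[i] ^= b
        return bytes(parity)

    # Write path: compute parity for one stripe of random data.
    data = [os.urandom(SEGMENT_SIZE) for _ in range(DATA_DRIVES)]
    parity = xor_parity(data)

    # Failure path: drive 3 is lost; rebuild its segment from the survivors plus parity.
    survivors = [seg for n, seg in enumerate(data) if n != 3]
    rebuilt = xor_parity(survivors + [parity])
    assert rebuilt == data[3]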
Traditional RAID Volumes
- Disk drives are organized into RAID groups
- Volumes reside across the drives in a RAID group
- Performance is dictated by the number of spindles
- Hot spares sit idle until a drive fails
- Spare capacity is "stranded"
(Example: a 24-drive system with two 10-drive groups (8+2) and 4 hot spares.)

Traditional RAID: Drive Failure
- Data is reconstructed onto a hot spare
- A single drive is responsible for all writes (bottleneck)
- Reconstruction happens linearly (one stripe at a time)
- All volumes in that group are significantly impacted
(Example: the same 24-drive system with two 10-drive groups (8+2) and 4 hot spares.)
The Problem: The Large-Disk-Drive Challenge
- Staggering amounts of data to store, protect, and access; some sites have thousands of large-capacity (4TB+) drives
- Drive failures are continual, particularly with NL-SAS drives
- Production I/O is impacted during rebuilds, by up to 40% in many cases
- As drive capacities continue to grow, traditional RAID protection is pushed to its limit
- Drive transfer rates have not kept up with capacities
- Larger drives mean longer rebuilds: anywhere from 10+ hours to several days (see the back-of-envelope estimate below)
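To make the rebuild-time claim concrete, here is a hedged back-of-envelope estimate in Python; the sustained rebuild rates are assumptions, not NetApp measurements. A traditional rebuild is bounded by the write bandwidth of the single hot spare, so its duration grows roughly linearly with drive capacity.

    # Rough rebuild-time estimate for a traditional rebuild onto one hot spare.
    # Rates are illustrative assumptions; real rates depend on drive type and host load.
    def rebuild_hours(capacity_tb, rebuild_mb_per_s):
        capacity_mb = capacity_tb * 1000 * 1000      # decimal TB -> MB
        return capacity_mb / rebuild_mb_per_s / 3600

    for capacity in (2, 4, 8):                       # NL-SAS drive sizes in TB
        for rate in (100, 30):                       # idle vs. busy system, MB/s (assumed)
            print(f"{capacity} TB at {rate} MB/s: about {rebuild_hours(capacity, rate):.0f} hours")

With these assumed rates a 4 TB drive takes roughly 11 hours on an idle system and well over a day under load, which is consistent with the "10+ hours to several days" range on the slide.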
Dynamic Disk Pools
- Maintain SLAs during a drive failure ("stay in the green")
- Performance drop is minimized following a drive failure
- Dynamic rebalance completes up to 8x faster than traditional RAID in random environments and up to 2x faster in sequential environments
- A large pool of spindles behind every volume reduces hot spots; each volume is spread across all drives in the pool
- Dynamic distribution/redistribution is a nondisruptive background operation
Traditional RAID Technology vs. Innovative Dynamic Disk Pools
- Balanced: the algorithm randomly spreads data across all drives, balancing the workload and rebuilding if necessary
- Easy: no RAID sets or idle spares to manage; active spare capacity on all drives
- Combining effort: all drives in the pool sustain the workload, ideal for virtual mixed workloads or fast reconstruction when needed
- Flexible: add ANY* number of drives for additional capacity; the system automatically rebalances data for optimal performance

"With Dynamic Disk Pools, you can add or lose disk drives without impact, reconfiguration, or headaches."

* After the minimum of 11.
Data Rebalancing in Minutes vs. Days
(Chart: DDP rebalancing completes in hours, compared with roughly 1.3 days, 2.5 days, and more than 4 days for traditional RAID configurations; typical rebalancing improvements are based on a 24-disk mixed workload.)
- Business impact: 99% exposure improvement
- Maintain business SLAs with a drive failure (a worked example of how such a figure can be derived follows below)

RAID Level Comparison
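As a hedged illustration of how a "99% exposure improvement" figure can be derived (the exact inputs behind the slide are not given), compare the window during which data is under-protected: a few hours of DDP rebalancing versus more than four days of traditional rebuild.

    # Illustrative exposure comparison; the durations are assumptions read off the chart.
    ddp_exposure_h = 1.0                   # assumed: DDP returns to full protection in about an hour
    raid_exposure_h = 4 * 24.0             # assumed: traditional RAID takes more than 4 days

    improvement = 1 - ddp_exposure_h / raid_exposure_h
    print(f"Exposure window reduced by {improvement:.0%}")   # ~99%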
Dynamic Disk Pools Overview
- DDP dynamically distributes data, spare capacity, and parity information across a pool of SSDs
- All drives are active (no idle hot spares)
- Spare capacity is available to all volumes (see the capacity sketch below)
- Data is dynamically recreated/redistributed whenever the pool grows or shrinks

DDP: Simplicity, Performance, Protection
- Simplified administration: no RAID sets or hot spares to manage; data is automatically balanced within the pool; flexible disk pool sizing optimizes capacity utilization
- Consistent performance: data is distributed throughout the pool (no hot spots); the performance drop is minimized during a drive rebuild; significantly faster return to the optimal state
- Relentless data protection: significantly faster rebuild times as data is reconstructed throughout the disk pool; prioritized reconstruction minimizes exposure
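To make the overview more concrete, here is a minimal capacity sketch in Python. The 8+2 RAID 6 layout per D-stripe comes from the later "Dynamic Stripe" slide; the amount of spare capacity reserved in the pool (two drive-equivalents here) is an assumption for illustration, since the deck does not state the default.

    # Hedged capacity model for a disk pool; the preservation-capacity rule is assumed.
    def usable_tb(num_drives, drive_tb, reserved_drive_equivalents=2):
        raw = num_drives * drive_tb
        spare = reserved_drive_equivalents * drive_tb    # spare capacity spread across all drives
        data_fraction = 8 / 10                           # every D-stripe is RAID 6 (8+2)
        return (raw - spare) * data_fraction

    print(usable_tb(24, 1.6))   # e.g. a 24-drive pool of 1.6 TB SSDs -> about 28 TB usable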
DDP Insight: How It Works
- Each DDP volume is composed of some number of 4GB "virtual stripes" called dynamic stripes (D-stripes)
- Each D-stripe resides on a pseudo-randomly selected set of 10 drives from within the pool
- D-stripes are allocated at volume creation time and allocated sequentially on a per-volume basis
(Diagram: a 24-SSD pool.)
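A minimal sketch of this layout idea in Python, assuming nothing about the real allocator beyond what the slide states: each 4 GB D-stripe picks a pseudo-random set of 10 drives from the pool, and a volume is an ordered list of D-stripes.

    # Toy model of D-stripe placement; drive selection is a plain random.sample, which
    # only approximates whatever balancing the real allocator performs.
    import random

    DSTRIPE_GB = 4
    DRIVES_PER_DSTRIPE = 10            # 8 data + 2 parity pieces (RAID 6 within the D-stripe)

    def create_volume(volume_gb, pool_drives):
        """Return the volume as an ordered list of D-stripes, each mapped to 10 drives."""
        num_dstripes = -(-volume_gb // DSTRIPE_GB)       # ceiling division
        return [sorted(random.sample(pool_drives, DRIVES_PER_DSTRIPE))
                for _ in range(num_dstripes)]

    pool = list(range(24))                               # the 24-SSD pool from the slide
    volume = create_volume(100, pool)                    # a hypothetical 100 GB volume
    print(len(volume), "D-stripes; the first one uses drives", volume[0])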
DDP SSD Failure
For each D-stripe that has data on the failed SSD:
- Segments on the other SSDs are read to recreate the data
- A new SSD is chosen to receive the segments from the failed SSD
- Rebuild operations run in parallel across all SSDs (see the sketch below)
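The following sketch illustrates why this rebuild parallelizes: every affected D-stripe independently chooses its replacement drive, so reads and writes are spread over the whole pool instead of funneling into one hot spare. The data structures are hypothetical, mirroring the allocation sketch above.

    # Toy rebuild plan after one SSD fails: each affected D-stripe picks a new target
    # drive that is not already part of that D-stripe, spreading writes across the pool.
    import random
    from collections import Counter

    pool = list(range(24))
    volume = [sorted(random.sample(pool, 10)) for _ in range(25)]   # 25 D-stripes, as above

    def plan_rebuild(volume, failed_drive, pool_drives):
        plan = []
        for dstripe in volume:
            if failed_drive in dstripe:
                candidates = [d for d in pool_drives if d != failed_drive and d not in dstripe]
                plan.append((dstripe, random.choice(candidates)))   # (pieces to read, write target)
        return plan

    plan = plan_rebuild(volume, failed_drive=7, pool_drives=pool)
    targets = Counter(target for _, target in plan)
    print(len(plan), "affected D-stripes; write targets spread as", dict(targets))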
DDP Multiple Disk Failure
- If two SSDs have failed, the system rebuilds the critical segments first (the brown and light blue segments in the diagram)
- If additional SSDs fail, new critical segments are identified and rebuilt (the blue, orange, and pink segments); a prioritization sketch follows below
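Here is a hedged sketch of that prioritization: with 8+2 RAID 6 inside each D-stripe, a D-stripe that has lost two pieces has no remaining redundancy, so it is "critical" and is queued ahead of D-stripes that have lost only one piece. The classification rule is inferred from the slide, not taken from SANtricity source.

    # Classify D-stripes by how many of their 10 pieces sit on failed drives and
    # rebuild the most exposed ones first (RAID 6 tolerates at most two lost pieces).
    import random

    def rebuild_order(volume, failed_drives):
        def lost_pieces(dstripe):
            return sum(1 for d in dstripe if d in failed_drives)
        affected = [ds for ds in volume if lost_pieces(ds) > 0]
        # Critical D-stripes (two lost pieces, zero redundancy left) come first.
        return sorted(affected, key=lost_pieces, reverse=True)

    pool = list(range(24))
    volume = [sorted(random.sample(pool, 10)) for _ in range(25)]   # same toy setup as above
    for dstripe in rebuild_order(volume, failed_drives={3, 17})[:5]:
        print(dstripe)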
DDP: Adding SSDs to Pool
- Add a single SSD or add multiple SSDs simultaneously
- The pool immediately rebalances data to maintain equilibrium
- Segments are simply moved (RAID is not reconstructed)
(Diagram: a 23-SSD pool.)

Dynamic Stripe: A Closer Look
- Data is written within a D-stripe using RAID 6
- Each RAID 6 stripe is 1MB (8+2 with a 128K segment size)
- Each D-stripe contains 4,096 traditional RAID 6 stripes
- A D-piece is one drive's worth of D-stripe data
(A short consistency check of these numbers follows below.)

DDP Versus RAID 6

EF-Series Performance Comparison

EF550 Performance: 8K Random 75% Read / 25% Write
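As a quick consistency check of the geometry above (this simply replays the slides' arithmetic in Python): 8 data segments of 128 KB give 1 MB of user data per RAID 6 stripe, 4,096 such stripes give the 4 GB D-stripe mentioned earlier, and each of the 10 drives contributes a 512 MB D-piece.

    # Replaying the D-stripe geometry from the slides.
    SEGMENT_KB = 128
    DATA_SEGMENTS, PARITY_SEGMENTS = 8, 2            # RAID 6 (P+Q)
    STRIPES_PER_DSTRIPE = 4096

    stripe_data_mb = DATA_SEGMENTS * SEGMENT_KB / 1024           # user data per RAID 6 stripe
    dstripe_data_gb = STRIPES_PER_DSTRIPE * stripe_data_mb / 1024
    dpiece_mb = STRIPES_PER_DSTRIPE * SEGMENT_KB / 1024          # one drive's share of a D-stripe

    print(stripe_data_mb, "MB of data per RAID 6 stripe")        # 1.0
    print(dstripe_data_gb, "GB of data per D-stripe")            # 4.0
    print(dpiece_mb, "MB per D-piece")                           # 512.0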