The end of the line for RAID?

Regarding this: http://www.enterprisestorageforum.com/technology/features/article.php/3839636

He has a point. Storage sizes are increasing faster than reliability figures, and the combination is a very bad thing for parity RAID. Size by itself means that large RAID sets take a long time to rebuild. I ran into this directly with the MSA1500 I was working with a while back, where it would take a week (7 whole days!) to rework a 7TB disk array. The same firmware also very strongly recommended against RAID5 LUNs on more than 7TB of disk, due to the non-recoverable read error rate on the SATA drives being used. RAID6 increases the durability of parity RAID, but at the cost of increased overhead.
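To put a number on that firmware warning, here's a minimal back-of-envelope sketch, assuming the commonly quoted 1-in-10^14-bit unrecoverable read error (URE) spec for consumer SATA drives; the function and figures are my own illustration, not anything from the HP documentation:

import math

def ure_probability(array_tb, ure_rate_bits=1e14):
    """Chance of at least one URE while reading array_tb terabytes,
    assuming independent errors at 1 per ure_rate_bits bits read
    (1e14 is a typical consumer SATA spec; enterprise drives are
    usually quoted at 1e15)."""
    bits_read = array_tb * 1e12 * 8
    # 1 - (1 - p)^n, computed via exp() for numerical stability
    return 1 - math.exp(-bits_read / ure_rate_bits)

print(f"{ure_probability(7):.0%}")  # ~43% for a full 7TB read

A roughly 43% chance of hitting a URE while reading the whole set, and on RAID5 a single URE during a rebuild means the rebuild fails, is exactly the kind of number that makes a 7TB cutoff look sensible.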

Unfortunately, there are no clear answers. What you need really depends on what you're using it for. For very high performance storage, where random I/O latency during high-speed transfers is your prime performance metric, lots of cheap-ass SATA drives on randomized RAID1 pairs will probably not be enough to keep up. Data-retention archives, where sequential write speed is the prime metric, are more forgiving and can take a much different storage architecture, even though it may involve an order of magnitude more space than the first option here.
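As a rough illustration of how differently those two workloads size out, here's a sketch; the per-drive figures (~80 random IOPS and ~100 MB/s streaming for a 7200rpm SATA drive) are my own assumptions, not measurements:

import math

def drives_for_random_iops(target_iops, iops_per_drive=80):
    # Striped RAID1 pairs let reads hit either mirror, so every
    # spindle contributes to random-read throughput.
    return math.ceil(target_iops / iops_per_drive)

def drives_for_sequential_mb_s(target_mb_s, mb_s_per_drive=100):
    # Sequential archive writes scale with the number of data spindles.
    return math.ceil(target_mb_s / mb_s_per_drive)

print(drives_for_random_iops(20_000))     # 250 spindles
print(drives_for_sequential_mb_s(2_000))  # 20 spindles

The random-I/O target needs an order of magnitude more spindles for the same nominal throughput, which is why the two architectures end up looking nothing alike.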

One comment deserves attention, though:
"The fact is that 20 years ago, a large chunk of storage was a 300MB ESDI drive for $1500, but now a large drive is hard to find above $200."
Well, for large consumer hard drives that may be true, but for medium-size drives I can show you plenty of options that break the $200 barrier. 450GB FC drives? Over $200 by quite a lot. Anything in the enterprise SSD space? Over $200 by a lot, and the 'large drive' segment of that market is an order of magnitude over that.

We're going to see some interesting storage architectures in the near future. That much is for sure.

1 Comment

Doesn't this also have something to do with error bit rates approaching the volume sizes?