Hadoop Map/Reduce / MAPREDUCE-1969

Allow raid to use Reed-Solomon erasure codes



    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Component: contrib/raid


      Currently raid uses one parity block per stripe, which can correct one missing block in each stripe.
      Using a Reed-Solomon code, we can add any number of parity blocks per stripe to tolerate more missing blocks.
      This way we can keep the file-corruption probability low even if we set the replication to 1.

      Here are some simple comparisons:
      1. No raid, replication = 3:
      File corruption probability = O(p^3), Storage space = 3x

      2. Single parity raid with stripe size = 10, replication = 2:
      File corruption probability = O(p^4), Storage space = 2.2x

      3. Reed-Solomon raid with parity size = 4 and stripe size = 10, replication = 1:
      File corruption probability = O(p^5), Storage space = 1.4x

      where p is the probability that a single block is missing.
      A Reed-Solomon code can therefore save a lot of space without compromising the corruption probability.
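      The figures above follow from two small observations: parity inflates storage by (stripe + parity)/stripe and replication multiplies that, while a stripe becomes unrecoverable only when more than `parity` of its blocks lose all `replication` copies, giving probability O(p^(replication * (parity + 1))). A quick sketch checking the three example configurations (Python here purely for illustration):

```python
def storage_overhead(stripe, parity, replication):
    # Each stripe of `stripe` data blocks gains `parity` parity blocks,
    # and every block (data and parity) is stored `replication` times.
    return (stripe + parity) / stripe * replication

def corruption_order(parity, replication):
    # A stripe is lost only when more than `parity` blocks lose all
    # `replication` of their copies: O(p^(replication * (parity + 1))).
    return replication * (parity + 1)

# 1. No raid, replication = 3:          O(p^3), 3x
# 2. XOR parity, stripe 10, repl 2:     O(p^4), 2.2x
# 3. Reed-Solomon (10, 4), repl 1:      O(p^5), 1.4x
for parity, replication, order, space in [(0, 3, 3, 3.0),
                                          (1, 2, 4, 2.2),
                                          (4, 1, 5, 1.4)]:
    assert corruption_order(parity, replication) == order
    assert abs(storage_overhead(10, parity, replication) - space) < 1e-9
```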

      To achieve this, we need some changes to raid:
      1. Add a block placement policy that knows about the raid logic and does not put blocks from the same stripe on the same node.
      2. Add an automatic block-fixing mechanism. Block fixing will replace the re-replication of under-replicated blocks.
      3. Allow raid to use a general erasure code. It is currently hard-coded to use XOR.
      4. Add a Reed-Solomon code implementation.
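      Change 3 amounts to introducing a pluggable erasure-code interface. A minimal sketch of that idea (hypothetical names, not the actual contrib/raid API), expressing the current XOR scheme through such an interface; a Reed-Solomon implementation with parity_size > 1 would plug in the same way:

```python
from abc import ABC, abstractmethod

class ErasureCode(ABC):
    """Pluggable erasure code: one stripe of data blocks -> parity blocks."""

    parity_size = 0  # number of parity blocks generated per stripe

    @abstractmethod
    def encode(self, data_blocks):
        """Return the parity blocks for a stripe of equal-sized data blocks."""

class XorCode(ErasureCode):
    # The currently hard-coded scheme: one XOR parity block per stripe,
    # which can repair exactly one missing block.
    parity_size = 1

    def encode(self, data_blocks):
        parity = bytearray(len(data_blocks[0]))
        for block in data_blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return [bytes(parity)]

stripe = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
(parity,) = XorCode().encode(stripe)
# Block fixing for XOR: repair one erased block by XOR-ing the parity
# with the surviving blocks of the stripe.
repaired = bytes(p ^ a ^ b for p, a, b in zip(parity, stripe[0], stripe[2]))
assert repaired == stripe[1]
```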

      We are planning to use this on older data only,
      because setting replication = 1 hurts data locality.


              rvadali Ramkumar Vadali
              schen Scott Chen
              Votes: 1
              Watchers: 20