Hadoop Map/Reduce: MAPREDUCE-1969

Allow raid to use Reed-Solomon erasure codes

    Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: contrib/raid
    • Labels: None

      Description

      Currently, raid uses one parity block per stripe, which can correct at most one missing block per stripe.
      Using a Reed-Solomon code, we can add any number of parity blocks to tolerate more missing blocks.
      This way we can achieve a low file-corruption probability even if we set the replication to 1.

      Here are some simple comparisons:
      1. No raid, replication = 3:
      File corruption probability = O(p^3), Storage space = 3x

      2. Single parity raid with stripe size = 10, replication = 2:
      File corruption probability = O(p^4), Storage space = 2.2x

      3. Reed-Solomon raid with parity size = 4 and stripe size = 10, replication = 1:
      File corruption probability = O(p^5), Storage space = 1.4x

      where p is the missing block probability.
      A Reed-Solomon code can therefore save a lot of space without compromising the corruption probability.
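
      As a back-of-the-envelope check of these numbers, here is a small Java sketch (illustrative only, not raid code; it assumes independent block-copy losses with probability p) that reproduces the storage factors and the leading exponents above:

      public class RaidOverheadSketch {

          /** Storage factor: total blocks stored per data block. */
          static double storageFactor(int stripeSize, int paritySize, int replication) {
              return (double) (stripeSize + paritySize) * replication / stripeSize;
          }

          /**
           * Smallest number of lost block copies that corrupts a stripe: the
           * code tolerates paritySize missing blocks, so corruption takes
           * (paritySize + 1) missing blocks, each needing all `replication`
           * copies lost. The corruption probability is then O(p^exponent).
           */
          static int corruptionExponent(int paritySize, int replication) {
              return (paritySize + 1) * replication;
          }

          public static void main(String[] args) {
              // 1. no raid, replication = 3 (modeled as a 1-block stripe, 0 parity)
              System.out.printf("no raid, r=3:  %.1fx, O(p^%d)%n",
                  storageFactor(1, 0, 3), corruptionExponent(0, 3));
              // 2. single-parity raid, stripe = 10, replication = 2
              System.out.printf("xor raid, r=2: %.1fx, O(p^%d)%n",
                  storageFactor(10, 1, 2), corruptionExponent(1, 2));
              // 3. Reed-Solomon raid, parity = 4, stripe = 10, replication = 1
              System.out.printf("rs raid, r=1:  %.1fx, O(p^%d)%n",
                  storageFactor(10, 4, 1), corruptionExponent(4, 1));
          }
      }

      Running it prints 3.0x / O(p^3), 2.2x / O(p^4), and 1.4x / O(p^5), matching the comparison above.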

      To achieve this, we need several changes to raid:
      1. Add a block placement policy that knows about the raid logic and does not put blocks of the same stripe on the same node.
      2. Add an automatic block-fixing mechanism. Block fixing will replace re-replication for under-replicated blocks.
      3. Allow raid to use a general erasure code; it is currently hard-coded to use XOR (a sketch of such an interface follows this list).
      4. Add a Reed-Solomon code implementation.
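
      As a rough illustration of change 3, here is a minimal sketch of what a pluggable code interface could look like. The names (ErasureCode, XorCode) are hypothetical, not the actual contrib/raid API; the point is that the current XOR parity becomes one implementation and Reed-Solomon (change 4) another:

      /** Hypothetical interface: raid calls an abstract code instead of hard-coded XOR. */
      interface ErasureCode {
          int stripeSize();  // data blocks per stripe
          int paritySize();  // parity blocks per stripe

          /** Compute parity[0..paritySize-1] from data[0..stripeSize-1]. */
          void encode(byte[][] data, byte[][] parity);
      }

      /** The existing XOR behavior, expressed through the interface. */
      class XorCode implements ErasureCode {
          private final int stripeSize;

          XorCode(int stripeSize) { this.stripeSize = stripeSize; }

          @Override public int stripeSize() { return stripeSize; }
          @Override public int paritySize() { return 1; }

          @Override
          public void encode(byte[][] data, byte[][] parity) {
              // Single parity block: byte-wise XOR of all data blocks.
              byte[] p = parity[0];
              java.util.Arrays.fill(p, (byte) 0);
              for (byte[] block : data) {
                  for (int i = 0; i < p.length; i++) {
                      p[i] ^= block[i];
                  }
              }
          }
      }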

      We are planning to use this on older data only, because setting replication = 1 hurts data locality.


          Activity

          Dick King added a comment -

          Proposal 3 would have to be applied only to data that essentially never gets deleted, because deleting a block would affect four parity blocks.

          dhruba borthakur added a comment -

          For all these proposals, the unwritten assumption is that all the blocks in a stripe belong to the same HDFS file. In that case, when the data file is deleted, the parity file can be deleted too.

          Wittawat Tantisiriroj added a comment -

          How fast does this RS implementation encode per second? In case we need a faster encoder, I am thinking about porting Cauchy Reed-Solomon, as described at http://www.cs.utk.edu/~plank/plank/papers/FAST-2009.pdf, to Java. James S. Plank, the author, has already given me permission to release it under the Apache License.

          Ramkumar Vadali added a comment -

          Our feeling is that I/O costs will dominate CPU costs, but we do not have experimental results yet.

          Scott Chen added a comment -

          Wittawat:

          The RS implementation has a complexity of O(n^2), where n is the parity length.
          In our case the parity length is really small (we pick 4), so we think the efficiency should not be a problem here.
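
          For a sense of where that cost comes from, here is an illustrative Java sketch of encoding one stripe over GF(2^8). The 0x11D polynomial and the Vandermonde-style coefficients are assumptions for the sketch, not necessarily what the implementation discussed above uses:

          class RSEncodeSketch {
              // log/antilog tables for GF(2^8), primitive polynomial 0x11D
              static final int[] EXP = new int[512];
              static final int[] LOG = new int[256];
              static {
                  int x = 1;
                  for (int i = 0; i < 255; i++) {
                      EXP[i] = x;
                      LOG[x] = i;
                      x <<= 1;
                      if ((x & 0x100) != 0) x ^= 0x11D;
                  }
                  // duplicate the table so mul() needs no modulo
                  for (int i = 255; i < 512; i++) EXP[i] = EXP[i - 255];
              }

              static int mul(int a, int b) {
                  return (a == 0 || b == 0) ? 0 : EXP[LOG[a] + LOG[b]];
              }

              /**
               * parity[j][k] = sum_i alpha^(j*i) * data[i][k] over GF(2^8),
               * where addition is XOR. Each parity byte costs one multiply-add
               * per data block, and reconstructing erasures solves a system in
               * the (small) parity dimension, which is where the n^2 flavor
               * comes from; with n = 4 this stays cheap.
               */
              static void encode(byte[][] data, byte[][] parity) {
                  for (int j = 0; j < parity.length; j++) {
                      for (int k = 0; k < parity[j].length; k++) {
                          int acc = 0;
                          for (int i = 0; i < data.length; i++) {
                              int coef = EXP[(j * i) % 255];        // alpha^(j*i)
                              acc ^= mul(coef, data[i][k] & 0xFF);  // add = XOR
                          }
                          parity[j][k] = (byte) acc;
                      }
                  }
              }
          }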


            People

            • Assignee: Ramkumar Vadali
            • Reporter: Scott Chen
            • Votes: 1
            • Watchers: 19
