Hadoop HDFS / HDFS-1432

HDFS across data centers: HighTide


    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

      Description

      There are many instances when the same piece of data resides on multiple HDFS clusters in different data centers, primarily because the capacity of a single data center is insufficient to host the entire data set. In that case, administrators typically partition the data into two (or more) HDFS clusters in two different data centers and then duplicate some subset of that data into both HDFS clusters.

      In such a situation, there will be six physical copies of the duplicated data: three in one data center and another three in the other. It would be nice if we could keep fewer than three replicas in each data center and have the ability to fix a lost replica in the local data center by copying the data from the remote copy in the remote data center.
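      The repair path described above can be illustrated with a toy simulation (all class and method names here are hypothetical, invented for illustration; this is not actual HDFS or HighTide code). Each data center keeps fewer than three replicas of a shared block, and when a local replica is lost, the missing copy is restored by pulling the data across from the remote data center rather than re-replicating locally from a full local quorum:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the HighTide idea: two clusters each hold 2 replicas of a
// shared block (4 physical copies total instead of 6), and a lost local
// replica is repaired by copying from the remote data center.
public class HighTideSketch {
    static class Cluster {
        final String name;
        // blockId -> number of replicas currently held in this data center
        final Map<String, Integer> replicas = new HashMap<>();
        Cluster(String name) { this.name = name; }
        void store(String blockId, int count) { replicas.put(blockId, count); }
        int count(String blockId) { return replicas.getOrDefault(blockId, 0); }
        void loseReplica(String blockId) {
            replicas.computeIfPresent(blockId, (k, v) -> v > 0 ? v - 1 : 0);
        }
    }

    // If the local cluster has fallen below its per-data-center replica
    // target, restore one replica by copying from the remote cluster
    // (assumed to still hold a good copy). Returns true if a copy was made.
    static boolean repair(Cluster local, Cluster remote, String blockId, int target) {
        if (local.count(blockId) >= target) return false; // already healthy
        if (remote.count(blockId) == 0) return false;     // no remote source
        local.store(blockId, local.count(blockId) + 1);   // simulated cross-DC copy
        return true;
    }

    public static void main(String[] args) {
        Cluster dcA = new Cluster("dcA"), dcB = new Cluster("dcB");
        dcA.store("blk_1", 2); // 2 replicas locally instead of the usual 3
        dcB.store("blk_1", 2); // 2 replicas remotely: 4 physical copies total
        dcA.loseReplica("blk_1");                         // disk failure in dcA
        boolean repaired = repair(dcA, dcB, "blk_1", 2);  // pull from dcB
        System.out.println(repaired + " " + dcA.count("blk_1"));
    }
}
```

      The sketch only models replica counts; a real implementation would also have to verify block checksums after the cross-data-center copy and throttle the inter-cluster traffic.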

        Attachments

        Issue Links

          Activity


            People

             • Assignee:
               Dhruba Borthakur
             • Reporter:
               Dhruba Borthakur

              Dates

              • Created:
                Updated:
