Hadoop Common / HADOOP-561

one replica of a file should be written locally if possible

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.8.0
    • Component/s: None
    • Labels: None

    Description

    One replica of a file should be written locally if possible; that is currently not the case.
    Copying a 1 GB file with hadoop dfs -cp run on one of the cluster nodes, all of the blocks were written to remote nodes, as shown by fsck -files -blocks -locations on the newly created file.

    As long as there is sufficient space locally, a local copy has significant performance benefits.
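
    The attached localreplica.patch is not reproduced here, but the requested behavior can be illustrated with a short, self-contained Java sketch. This is a hypothetical model, not the actual DFS code or the patch: the Node class and chooseTargets method are made up for illustration. The idea is that the first replica goes to the writer's own node when it has enough free space, and the remaining replicas go to other nodes.

        import java.util.ArrayList;
        import java.util.List;

        /** Hypothetical, simplified stand-in for a datanode; illustration only. */
        class Node {
            final String host;
            final long freeBytes;
            Node(String host, long freeBytes) { this.host = host; this.freeBytes = freeBytes; }
        }

        public class LocalReplicaSketch {

            /**
             * Choose replica targets for one block: prefer the writer's own node
             * for the first replica if it has room, then fill the remaining
             * replicas from other nodes. Mirrors the behavior requested in
             * HADOOP-561; it is not the actual patch.
             */
            static List<Node> chooseTargets(String writerHost, long blockSize,
                                            int replication, List<Node> candidates) {
                List<Node> chosen = new ArrayList<>();
                // First replica: the writer's local node, if present and not full.
                for (Node n : candidates) {
                    if (n.host.equals(writerHost) && n.freeBytes >= blockSize) {
                        chosen.add(n);
                        break;
                    }
                }
                // Remaining replicas: any other node with enough space.
                for (Node n : candidates) {
                    if (chosen.size() >= replication) break;
                    if (!chosen.contains(n) && n.freeBytes >= blockSize) {
                        chosen.add(n);
                    }
                }
                return chosen;
            }

            public static void main(String[] args) {
                List<Node> cluster = List.of(
                    new Node("node1", 10L << 30),
                    new Node("node2", 10L << 30),
                    new Node("node3", 10L << 30));
                // Writing from node2 with replication 3: node2 should be the first target.
                for (Node n : chooseTargets("node2", 64L << 20, 3, cluster)) {
                    System.out.println(n.host);
                }
            }
        }

    With the pre-patch behavior described above, a copy run on node2 could end up with every replica on the other nodes; choosing the local node first avoids one network transfer per block as long as the local disk has space.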

    Attachments

    1. localreplica.patch (3 kB, dhruba borthakur)

    Activity

    Owen O'Malley made changes -
      Component/s: dfs [ 12310710 ]
    Doug Cutting made changes -
      Status: Resolved [ 5 ] → Closed [ 6 ]
    Doug Cutting made changes -
      Resolution: Fixed [ 1 ]
      Status: Patch Available [ 10002 ] → Resolved [ 5 ]
      Fix Version/s: 0.8.0 [ 12312098 ]
    dhruba borthakur made changes -
      Status: Open [ 1 ] → Patch Available [ 10002 ]
    dhruba borthakur made changes -
      Attachment: localreplica.patch [ 12343281 ]
    dhruba borthakur made changes -
      Assignee: dhruba borthakur [ dhruba ]
    Yoram Arnon created issue -

    People

    • Assignee: dhruba borthakur
    • Reporter: Yoram Arnon
    • Votes: 0
    • Watchers: 0
