Hadoop HDFS / HDFS-5903

FsVolumeList#initializeReplicaMaps(..) does nothing and can be removed

    Details

    • Type: Bug
    • Status: Patch Available
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: datanode
    • Labels: None

      Description

        void initializeReplicaMaps(ReplicaMap globalReplicaMap) throws IOException {
          // Iterates every volume, but no block pool has been added yet,
          // so each getVolumeMap(..) call finds nothing to contribute.
          for (FsVolumeImpl v : volumes) {
            v.getVolumeMap(globalReplicaMap);
          }
        }

      This method is called during initialization, before any block pools are added, so calling it does nothing useful.

      In any case, the replica map is updated for each block pool during addBlockPool(..), which calls FsVolumeList#getAllVolumesMap(..); a sketch of that path follows.
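
      For context, here is a minimal sketch of the path described above. The method bodies, the initBlockPool(..) step, and the exact signatures (in particular how the bpid is threaded through) are paraphrased from this description rather than copied from trunk, and should be read as assumptions.

        // Sketch only: shapes paraphrased from the description, not from trunk code.
        // FsDatasetImpl side: runs once per block pool, after the pool exists.
        void addBlockPool(String bpid, Configuration conf) throws IOException {
          volumes.addBlockPool(bpid, conf);           // register the pool on each volume
          volumeMap.initBlockPool(bpid);              // assumed step: create the per-pool entry
          volumes.getAllVolumesMap(bpid, volumeMap);  // scan volumes and fill the replica map
        }

        // FsVolumeList side: populates the global replica map for one block pool.
        void getAllVolumesMap(String bpid, ReplicaMap volumeMap) throws IOException {
          for (FsVolumeImpl v : volumes) {
            v.getVolumeMap(bpid, volumeMap);          // assumed per-pool overload of getVolumeMap
          }
        }

      Since initializeReplicaMaps(..) runs before the first addBlockPool(..), its loop over the same volumes has no block pools to read and populates nothing, which is why this issue proposes removing it.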

      Attachments

      1. HDFS-5903.patch (3 kB) - Vinayakumar B
      2. HDFS-5903.patch (3 kB) - Vinayakumar B

        Activity

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12627627/HDFS-5903.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/6068//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6068//console

        This message is automatically generated.

        Vinayakumar B added a comment -

        Attaching the same patch to trigger Jenkins again.

        Vinayakumar B added a comment -
        #
        # There is insufficient memory for the Java Runtime Environment to continue.
        # Native memory allocation (malloc) failed to allocate 32776 bytes for Chunk::new
        # An error report file with more information is saved as:
        # /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hs_err_pid21355.log

        This is not a compilation problem; Jenkins got an OOME while running javadoc.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12627610/HDFS-5903.patch
        against trunk revision .

        -1 patch. Trunk compilation may be broken.

        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6067//console

        This message is automatically generated.

        Vinayakumar B added a comment -

        Attaching a patch. Please review.


          People

          • Assignee: Vinayakumar B
          • Reporter: Vinayakumar B
          • Votes: 0
          • Watchers: 2
