
HDFS-7979: Initialize block report IDs with a random number

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.7.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: datanode
    • Labels: None

      Description

      Right now block report IDs use the system nanotime, which isn't very random, so let's start them at a random number for some extra safety.

      Attachments

      1. HDFS-7979.001.patch
        2 kB
        Andrew Wang
      2. HDFS-7979.002.patch
        2 kB
        Andrew Wang
      3. HDFS-7979.003.patch
        3 kB
        Andrew Wang
      4. HDFS-7979.004.patch
        3 kB
        Andrew Wang


          Activity

          andrew.wang Andrew Wang added a comment -

          Patch attached. I think the nanotime isn't more useful than a counter with a random start point, so that's what I changed it to. LMK what you think.

          Also added the interface annotation to BlockReportContext that Yi Liu asked for.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12706834/HDFS-7979.001.patch
          against trunk revision 50ee8f4.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.tracing.TestTracing

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10050//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10050//console

          This message is automatically generated.

          cmccabe Colin P. McCabe added a comment -

          I am not sure about this patch for a few reasons:

          • When using the monotonic time, two block reports cannot get the same ID since the monotonic time is always increasing. We don't have the same guarantee here. Admittedly, the chances of a repeat are extremely low. But previously they were effectively 0, and now they're nonzero.
          • If the datanode is taken down and restarted, the monotonic time will still be higher than before. And so the current behavior makes it easy to see from the logs that block report N+1 came after block report N, even if there was a datanode restart in between. We don't have this behavior with a random number generated on datanode start.

          I also don't think a non-random block report ID is a security concern. If block reports need to be secured, the correct way to do it is to use encryption-over-the-wire via SASL. If SASL is not in use, any evildoer can submit a fake full block report that says that everything is deleted, or talk about bogus blocks that don't really exist on the datanode. Indeed, even after this patch is applied, it would be easy for a black hat to submit a new block report with a new random ID and cause the NN to delete all the storages on that DN. So essentially the motivation for this patch is not valid in my opinion.

          andrew.wang Andrew Wang added a comment -

          Sorry if I wasn't clear about this, but "safety" here is about not repeating. The monotonic time is the time since DN startup, so if the machine goes down and comes back up, it'll be in the same range of monotonic times. Agree that it doesn't reset if just the DN process is restarted.

          When using the monotonic time, two block reports cannot get the same ID since the monotonic time is always increasing. We don't have the same guarantee here. Admittedly, the chances of a repeat are extremely low. But previously they were effectively 0, and now they're nonzero.

          This seems less likely to repeat compared to monotonic time, because of machine reboots.

          If the datanode is taken down and restarted, the monotonic time will still be higher than before. And so the current behavior makes it easy to see from the logs that block report N+1 came after block report N, even if there was a datanode restart in between. We don't have this behavior with a random number generated on datanode start.

          This is true, but not sure how useful this is. If there's logging on both the DN and NN, that would also be a way of determining ordering.

          cmccabe Colin P. McCabe added a comment -

          I think it's pretty unlikely that we would restart the DN machine and then restart the DN exactly at the monotonic time in nanoseconds that we terminated the old DN. For one thing, DNs usually stay up for days, and so you would have to be starting the DN days after the PC started.

          I guess I have to admit that choosing a number randomly has a smaller chance of collision, though. In the random case you truly have a 1 / 2^64 chance, and you have a greater chance when using monotonic time.

              return ++prevBlockReportId;
          

          I think this should be like:

          prevBlockReportId++;
          while (prevBlockReportId == 0) {
             prevBlockReportId = random.nextLong();
          }
          return prevBlockReportId;
          

          To eliminate the (admittedly astronomically unlikely) chance of returning 0.
          Also please don't create a new Random each time.

          otherwise +1

          andrew.wang Andrew Wang added a comment -

          Thanks for reviewing, Colin. The zero test was just there to initialize prevBlockReportId; I don't think there's anything actually wrong with a zero value.

          To be clearer (and to avoid the chance of re-initializing), I changed it to a boxed Long with a null check instead. I left the Random allocated locally since it should now only ever be used once. LMK what you think.

          cmccabe Colin P. McCabe added a comment -

          The boxed Long is immutable, so you are creating more garbage by going down this route. Plus, you might inadvertently send a 0 block ID, which is one thing I wanted to avoid. Let's just use a primitive.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12707868/HDFS-7979.002.patch
          against trunk revision 05499b1.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10094//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10094//console

          This message is automatically generated.

          cmccabe Colin P. McCabe added a comment -

          I don't think there's anything actually wrong with a zero value.

          The block report ID is initialized to zero when the NN first creates the object representing the DN storage. By never sending 0 as a block report ID, we ensure that we don't appear to be continuing an existing block report on the first full block report. I realize this is a very unlikely scenario, but why add more failure cases when it's easy to exclude them?

          andrew.wang Andrew Wang added a comment -

          Good point about 0 being an invalid value, I forgot about that on the NN side. I used your recommended snippet in this newest rev. Thanks for reviewing!

          cmccabe Colin P. McCabe added a comment -

          Unfortunately, this latest patch always starts with 1...

          Let's initialize the prevBlockReportId with Random.nextLong.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12708780/HDFS-7979.003.patch
          against trunk revision ed72daa.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
          org.apache.hadoop.hdfs.TestFileAppend
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10151//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10151//console

          This message is automatically generated.

          andrew.wang Andrew Wang added a comment -

          Woops, one more try. I also discovered DFSUtil.getRandom() which saves us from making our own Randoms!

          cmccabe Colin P. McCabe added a comment -

          Thanks. +1 pending jenkins

          I also discovered DFSUtil.getRandom() which saves us from making our own Randoms!

          There's also ThreadLocalRandom in the standard library now.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12724056/HDFS-7979.004.patch
          against trunk revision cc25823.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10220//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10220//console

          This message is automatically generated.

          andrew.wang Andrew Wang added a comment -

          Thanks for reviewing, Colin. Pushed to trunk and branch-2. The failed test looks unrelated, and in fact doesn't show up if you click through to Jenkins.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #7543 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7543/)
          HDFS-7979. Initialize block report IDs with a random number. (andrew.wang: rev b1e059089d6a5b2b7006d7d384c6df81ed268bd9)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/)
          HDFS-7979. Initialize block report IDs with a random number. (andrew.wang: rev b1e059089d6a5b2b7006d7d384c6df81ed268bd9)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2090 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2090/)
          HDFS-7979. Initialize block report IDs with a random number. (andrew.wang: rev b1e059089d6a5b2b7006d7d384c6df81ed268bd9)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #149 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/149/)
          HDFS-7979. Initialize block report IDs with a random number. (andrew.wang: rev b1e059089d6a5b2b7006d7d384c6df81ed268bd9)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #892 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/892/)
          HDFS-7979. Initialize block report IDs with a random number. (andrew.wang: rev b1e059089d6a5b2b7006d7d384c6df81ed268bd9)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #159 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/159/)
          HDFS-7979. Initialize block report IDs with a random number. (andrew.wang: rev b1e059089d6a5b2b7006d7d384c6df81ed268bd9)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2108 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2108/)
          HDFS-7979. Initialize block report IDs with a random number. (andrew.wang: rev b1e059089d6a5b2b7006d7d384c6df81ed268bd9)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

            People

            • Assignee:
              andrew.wang Andrew Wang
              Reporter:
              andrew.wang Andrew Wang
            • Votes: 0
              Watchers: 6
