Hadoop HDFS / HDFS-3702

Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

    Details

    • Target Version/s:
    • Release Note:
      This patch will attempt to allocate all replicas on remote DataNodes, by adding the local DataNode to the excluded DataNodes. If sufficient replicas cannot be obtained, it falls back to the default block placement policy, which writes one replica to the local DataNode.
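      The exclude-then-fall-back behavior described in the release note can be sketched as a small stdlib-only simulation. This is a hypothetical illustration, not the actual BlockPlacementPolicy API: the class name, method name, and string-based node identifiers are all assumptions made for clarity.

```java
import java.util.ArrayList;
import java.util.List;

public class PlacementSketch {
    /**
     * Pick {@code replication} targets, preferring remote nodes. If the
     * remote nodes alone cannot satisfy the replication factor, fall back
     * and allow the local node again (mirroring the fallback described in
     * the release note). Node names are illustrative strings.
     */
    static List<String> chooseTargets(List<String> allNodes,
                                      String localNode,
                                      int replication) {
        List<String> remote = new ArrayList<>(allNodes);
        remote.remove(localNode);                 // exclude the local DN first
        if (remote.size() >= replication) {
            return remote.subList(0, replication);
        }
        // Not enough remote nodes: fall back, local DN becomes eligible again.
        List<String> fallback = new ArrayList<>(remote);
        fallback.add(localNode);
        return fallback.subList(0, Math.min(replication, fallback.size()));
    }

    public static void main(String[] args) {
        // Plenty of remote nodes: the local "dn1" is avoided.
        System.out.println(
            chooseTargets(List.of("dn1", "dn2", "dn3"), "dn1", 2));
        // Only two nodes in total: falls back and includes dn1 again.
        System.out.println(
            chooseTargets(List.of("dn1", "dn2"), "dn1", 2));
    }
}
```

      The key property is that the fallback only widens the candidate set; it never fails the write just because the cluster is too small to place every replica remotely.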

      Description

      This is useful for Write-Ahead-Logs: these files are written for recovery only, and are not read when there are no failures.

      Taking HBase as an example, these files will be read only if the process that wrote them (the HBase region server) dies. That will likely be caused by a hardware failure, in which case the corresponding datanode will be dead as well. So we're writing 3 replicas, but in reality only 2 of them are really useful.

      1. HDFS-3702_Design.pdf
        63 kB
        Lei (Eddy) Xu
      2. HDFS-3702.000.patch
        20 kB
        Lei (Eddy) Xu
      3. HDFS-3702.001.patch
        19 kB
        Lei (Eddy) Xu
      4. HDFS-3702.002.patch
        25 kB
        Lei (Eddy) Xu
      5. HDFS-3702.003.patch
        23 kB
        Lei (Eddy) Xu
      6. HDFS-3702.004.patch
        75 kB
        Lei (Eddy) Xu
      7. HDFS-3702.005.patch
        75 kB
        Lei (Eddy) Xu
      8. HDFS-3702.006.patch
        75 kB
        Lei (Eddy) Xu
      9. HDFS-3702.007.patch
        74 kB
        Lei (Eddy) Xu
      10. HDFS-3702.008.patch
        75 kB
        Lei (Eddy) Xu
      11. HDFS-3702.009.patch
        75 kB
        Lei (Eddy) Xu
      12. HDFS-3702.010.patch
        78 kB
        Lei (Eddy) Xu
      13. HDFS-3702.011.patch
        79 kB
        Lei (Eddy) Xu
      14. HDFS-3702.012.patch
        79 kB
        Lei (Eddy) Xu

        Issue Links

          Activity

          sureshms Suresh Srinivas added a comment -

          The description is not clear to me. When you say "NOT write the block locally", do you mean not selecting the local node for the write pipeline? That is, block placement should use all non-local nodes?

          Also, while machine failure is one of the issues, there are other issues to consider as well. A failure could be due to disk failure, or to datanode or region server daemon failures.

          nkeywal Nicolas Liochon added a comment -

          When you say "NOT write the block locally", do you mean not selecting the local node for the write pipeline? That is, block placement should use all non-local nodes?

          Yes, exactly.

          Also, while machine failure is one of the issues, there are other issues to consider as well. A failure could be due to disk failure, or to datanode or region server daemon failures.

          Yes, exactly. The latter (a region server crash) is the one where writing locally is not an issue. For the first two, if the region server does not fail we won't read the data. But this case is less critical than a machine failure: for a region server crash we still have the 3 replicas available on different datanodes. It's more of an issue when we've lost the machine, since we then start with at most 2 replicas.

          stack stack added a comment -

          One option might be to put in place a block policy that wrote the first replica local for all files except those with a WAL-looking file path; i.e., look at the file path and make a determination based on it (Dhruba suggests this over in HDFS-1451, which asks that we be able to set policy per file).

          sureshms Suresh Srinivas added a comment -

          One option might be to put in place a block policy that wrote the first replica local for all files but those that had a WAL-looking file path

          A mechanism to choose the block placement policy makes sense. Instead of basing it on the file name, choosing the block placement policy during file creation/append would be more generic. We could allow the configuration to include named block placement policies, such as "default", "hbase-wal", etc. This could either be passed as an option in the create method call or through the Configuration when the FileSystem instance is created.

          Related question: should the chosen block placement be persisted per file, or applied only at create/append time? If it is not persisted, during active replication it is possible that replicas end up placed in ways that do not satisfy the block placement policy.

          stack and nkeywal, when a node dies, there is a correlated failure and the replica count goes down to two. Is this a big problem? HDFS does create an additional replica, right?

          nkeywal Nicolas Liochon added a comment -

          stack and nkeywal, when a node dies, there is a correlated failure and the replica count goes down to two. Is this a big problem? HDFS does create an additional replica, right?

          It's not a big problem by itself, just that the real replication count is 2, so it's less safe than 3. Hence the priority is set to minor.
          It adds other major & critical problems (hence the other jiras), because we try to use the dead node during the recovery (one chance out of 3 per block), so we see increased delays when we recover. As the recovery is distributed, we can be quite sure that one of the readers will take an added delay, maybe multiple times, as there are multiple files.

          We will be able to manage this by setting priorities on blocks, but it would be simpler to write the blocks somewhere else instead of skipping them during reads... So I would see this as the best medium-term option, for example on branch-2.

          atm Aaron T. Myers added a comment -

          Allowing setting of block placement policy per-file or per-stream is an interesting idea, but for this specific issue (wanting more remote replicas of the HBase WAL when a DN dies) how about just setting the replication count for this file to 4 when you open it? You can already do that per-file today in HDFS.

          nkeywal Nicolas Liochon added a comment -

          It would work. But we'd be adding unnecessary local workload, and (the main issue today) on failure we need to manage the fact that this node is probably dead and should be used only if the two (or three) others are not available. I see it as: when we write, we write to the wrong place; when we read, we need to take into account that we wrote to the wrong place. But if we write to the right place from the beginning, then there is nothing to do on reads. I thought about marking these blocks as corrupted, but that's really extreme, as these blocks could be the only ones left in some cases...

          stack stack added a comment -

          @Suresh "...choosing block placement policy during file creation/append would be more generic."

          Agreed. Being able to specify it on create, just as we can specify the replica count, would be sweet.

          If it is not persisted, during active replication it is possible that replicas end up in ways where the block placement policy is not satisfied.

          For our case this would be fine, but it probably wouldn't work for a 'generic' file-based block placement policy (I'd guess).

          stack and nkeywal, when a node dies, there is a correlated failure and replica count goes down to two. Is this a big problem?

          It's a problem, yes, but we deal with it up in HBase. We check the number of replicas as we write the WAL. An API was added (by Dhruba, I believe) that allows us to ask how many replicas are in the write pipeline. If it is less than the configured amount, to minimize the likelihood of losing data, we close the WAL and open a new one to get our replica count back up again.

          @ATM The issue is not wanting more remote replicas, the issue is not having a WAL replica that is local to the regionserver. If the node dies, we don't want this node referenced when we have to recover it. We want to avoid wasting time timing out against the dead DN.

          atm Aaron T. Myers added a comment -

          @ATM The issue is not wanting more remote replicas, the issue is not having a WAL replica that is local to the regionserver. If the node dies, we don't want this node referenced when we have to recover it. We want to avoid wasting time timing out against the dead DN.

          Got it. Thanks for the explanation. Note that you can currently lower the value of this timeout. I believe the default is 1 minute. Perhaps an acceptable solution would be to write to 3 replicas with one local as you do today, but on recovery set this timeout low (5 seconds?) so that you move on very quickly to one of the other replicas.

          I'm just throwing it out there. Obviously adding this feature to HDFS is a superior solution, but HBase might be able to get what it's after today.

          nkeywal Nicolas Liochon added a comment -

          For a file open for writing, we will start with a call to ipc.Client to get the last block length. The timeout in hdfs 1.0.3 is hardcoded to 20s (fixed in HADOOP-7397). Hence, in any case, HDFS-3704, but we can also hit HDFS-3701 at this point, so it is also a question of data loss. That's why we need to fix (HDFS-3701 and (HDFS-3702 or/and HDFS-3705 or/and HDFS-3703)).

          For reducing the connect timeout, are you speaking about dfs.socket.timeout? Is there a way to change it for connect timeout only?

          atm Aaron T. Myers added a comment -

          For reducing the connect timeout, are you speaking about dfs.socket.timeout? Is there a way to change it for connect timeout only?

          Yes, I was referring to dfs.client.socket-timeout in trunk / 2.x. You can't presently set a connection timeout separately from the read timeout, though it's not a bad idea. You can presently set a separate read or write timeout.
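          For reference, the timeout discussed here is set client-side. A hypothetical hdfs-site.xml fragment for the client, assuming the trunk/2.x property name mentioned above (the 5000 ms value is the illustrative "5 seconds?" suggestion from earlier in the thread, not a recommended default):

```xml
<!-- Client-side hdfs-site.xml fragment: lowers the DFS client socket
     read timeout (default 60000 ms) so reads fail over to another
     replica faster. Value shown is illustrative only. -->
<property>
  <name>dfs.client.socket-timeout</name>
  <value>5000</value>
</property>
```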

          devaraj Devaraj Das added a comment -

          A workaround for this in the current codebase is to use the favored node API (HDFS-2576). For example, we could choose some node on the same rack as the first replica in the list of hosts passed to the create API.
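          The rack-local selection that workaround relies on can be sketched as a stdlib-only simulation. The path-like node names, the rack-extraction helper, and the method names below are all assumptions for illustration; the actual API from HDFS-2576 takes favored node addresses on create, not strings.

```java
import java.util.List;
import java.util.Optional;

public class FavoredNodeSketch {
    /** Rack id assumed to be encoded as "/rackN/hostM" (an assumption). */
    static String rackOf(String nodePath) {
        return nodePath.substring(0, nodePath.lastIndexOf('/'));
    }

    /**
     * Pick a favored node on the same rack as the client, excluding the
     * client's own host, so the first WAL replica ends up rack-local but
     * not machine-local.
     */
    static Optional<String> favoredNode(List<String> nodes, String clientNode) {
        String rack = rackOf(clientNode);
        return nodes.stream()
            .filter(n -> rackOf(n).equals(rack))   // same rack as the client
            .filter(n -> !n.equals(clientNode))    // but not the client's box
            .findFirst();
    }

    public static void main(String[] args) {
        List<String> nodes =
            List.of("/rack1/host1", "/rack1/host2", "/rack2/host3");
        // Client on /rack1/host1: the rack-local sibling is chosen instead.
        System.out.println(favoredNode(nodes, "/rack1/host1"));
    }
}
```

          Note that favored nodes are only a hint to the NameNode, so this workaround does not hard-guarantee that the local node is skipped.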

          eddyxu Lei (Eddy) Xu added a comment -

          I have created a patch that provides one flag, CreateFlag#AVOID_LOCAL_COPY. When a DFSOutputStream is opened with the AVOID_LOCAL_COPY flag, it will ask the NameNode for the DataNode(s) on the same box as the client and put them into its excluded-nodes list. As a result, written blocks will be advised to avoid the same box as the client.
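          The flag-driven exclusion can be sketched with a stdlib-only simulation. The enum below is an illustrative stand-in for org.apache.hadoop.fs.CreateFlag (in the real API the flag is passed in an EnumSet on create), and the helper method is a hypothetical name, not part of the patch.

```java
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;

public class AvoidLocalCopySketch {
    // Illustrative stand-in for org.apache.hadoop.fs.CreateFlag.
    enum CreateFlag { CREATE, OVERWRITE, AVOID_LOCAL_COPY }

    /**
     * Mimics the excluded-nodes behavior: when AVOID_LOCAL_COPY is set,
     * every datanode on the client's host is dropped from the candidate
     * list before block placement.
     */
    static List<String> placementCandidates(List<String> datanodeHosts,
                                            String clientHost,
                                            EnumSet<CreateFlag> flags) {
        List<String> candidates = new ArrayList<>();
        for (String host : datanodeHosts) {
            if (flags.contains(CreateFlag.AVOID_LOCAL_COPY)
                    && host.equals(clientHost)) {
                continue; // local datanode goes onto the excluded list
            }
            candidates.add(host);
        }
        return candidates;
    }

    public static void main(String[] args) {
        EnumSet<CreateFlag> flags =
            EnumSet.of(CreateFlag.CREATE, CreateFlag.AVOID_LOCAL_COPY);
        // The client runs on node-a, so node-a is excluded from placement.
        System.out.println(
            placementCandidates(List.of("node-a", "node-b", "node-c"),
                                "node-a", flags));
    }
}
```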

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12682352/HDFS-3702.000.patch
          against trunk revision 79301e8.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.balancer.TestBalancer
          org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
          org.apache.hadoop.hdfs.server.blockmanagement.TestHost2NodesMap

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8779//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8779//console

          This message is automatically generated.

          eddyxu Lei (Eddy) Xu added a comment -

          Update the patch to fix test failures.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12682745/HDFS-3702.001.patch
          against trunk revision eb4045e.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The test build failed in hadoop-hdfs-project/hadoop-hdfs

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8794//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8794//console

          This message is automatically generated.

          stack stack added a comment -

          Nice. Thanks for picking this up again, Lei (Eddy) Xu.

          nit: why have the temporary tempExcluded variable below?

              DatanodeInfo[] tempExcluded = ObjectArrays.concat(
                  excluded, excludedLocalNodes, DatanodeInfo.class);
              excluded = tempExcluded;

          It would be cool to have at least some trace logging I could enable for a few seconds, to make sure the feature is working properly when enabled.

          Are there other cases in the codebase that you know of where the DN NetUtils.getHostname() works as a key for finding the datanode info in the NN?

          As I read it, we are doing an extra round trip to the NN when we open a DFSOS with AVOID_LOCAL_COPY set? Is there no means of doing this only once, or once every so often, rather than each time? (Not a blocker for the HBase case, I'd say, but it would be good to avoid or mitigate if possible.)

          Thanks again for working on this one.

          nkeywal Nicolas Liochon added a comment -

          It's great. Thanks, Lei.
          That should be a 100% win for HBase, because since HBASE-6435 we avoid going to the machine that wrote the hlog file. HBase will need to set the replication factor to 2, however.

          eddyxu Lei (Eddy) Xu added a comment -

          stack and Nicolas Liochon, thanks for your reviews. Your feedback helps a lot!

          I have updated the patch to:

          1. Addressed the review comments and added logging.
          2. Are there other cases in the codebase that you know of where the DN NetUtils.getHostname() works as a key for finding the datanode info in the NN?

            I changed it to use all local network interface IP addresses as keys to find datanode.

          3. we are doing an extra trip to the NN when we open a DFSOS with AVOID_LOCAL_COPY set?

            It now only asks the NN when creating the DFSClient, which is presumably a relatively rare operation. stack, could you help verify this assumption? Thanks!

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12683452/HDFS-3702.002.patch
          against trunk revision 8caf537.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8828//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8828//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12683452/HDFS-3702.002.patch
          against trunk revision a16bfff.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10056//console

          This message is automatically generated.

          eddyxu Lei (Eddy) Xu added a comment -

          Reworked the patch against trunk.

          • Also changed DFSClient to cache the local DN list, so there is only one NN RPC for the excluded-local list when DFSClient creates the first DFSOutputStream with CreateFlag.NO_LOCAL_WRITE.
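          The caching described above can be illustrated with a minimal, self-contained sketch (all names here are hypothetical; the real DFSClient logic differs): the first request pays the RPC cost, and later requests reuse the cached list.

          ```java
          import java.util.List;
          import java.util.function.Supplier;

          // Hypothetical sketch of caching an excluded-DN list so the (expensive)
          // NameNode RPC runs only once per client, in the spirit of the patch.
          class LocalDatanodeCache {
              private final Supplier<List<String>> rpc;  // stands in for the NN RPC
              private volatile List<String> cached;      // lazily filled, then reused

              LocalDatanodeCache(Supplier<List<String>> rpc) { this.rpc = rpc; }

              synchronized List<String> getLocalDatanodes() {
                  if (cached == null) {
                      cached = rpc.get();  // only the first caller hits the NameNode
                  }
                  return cached;
              }
          }

          class CacheDemo {
              static int rpcCalls = 0;

              public static void main(String[] args) {
                  LocalDatanodeCache cache = new LocalDatanodeCache(() -> {
                      rpcCalls++;                       // count simulated RPCs
                      return List.of("127.0.0.1:9866"); // pretend local DN address
                  });
                  cache.getLocalDatanodes();
                  cache.getLocalDatanodes();            // served from cache
                  System.out.println(rpcCalls);         // prints 1
              }
          }
          ```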
          andrew.wang Andrew Wang added a comment -

          Hi Eddy, one high-level question, any reason we aren't taking the same approach as HDFS-4946 and hooking this into BlockPlacementPolicy? It'd be nice to have this logic centralized, and BPP handles all the fallback logic in case we do need to write locally.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          0 docker 0m 0s Docker command '/usr/bin/docker' not found/broken. Disabling docker.
          0 findbugs 0m 0s Findbugs executables are not available.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          0 mvndep 0m 3s Maven dependency ordering for branch
          -1 mvninstall 0m 2s root in trunk failed.
          -1 compile 0m 2s root in trunk failed.
          +1 checkstyle 0m 2s trunk passed
          -1 mvnsite 0m 9s hadoop-common in trunk failed.
          -1 mvnsite 0m 2s hadoop-hdfs in trunk failed.
          -1 mvnsite 0m 2s hadoop-hdfs-client in trunk failed.
          -1 mvneclipse 0m 2s hadoop-common in trunk failed.
          -1 mvneclipse 0m 2s hadoop-hdfs in trunk failed.
          -1 mvneclipse 0m 2s hadoop-hdfs-client in trunk failed.
          -1 javadoc 0m 8s hadoop-common in trunk failed.
          -1 javadoc 0m 2s hadoop-hdfs in trunk failed.
          -1 javadoc 0m 2s hadoop-hdfs-client in trunk failed.
          0 mvndep 0m 3s Maven dependency ordering for patch
          -1 mvninstall 0m 2s hadoop-common in the patch failed.
          -1 mvninstall 0m 2s hadoop-hdfs in the patch failed.
          -1 mvninstall 0m 2s hadoop-hdfs-client in the patch failed.
          -1 compile 0m 11s root in the patch failed.
          -1 cc 0m 11s root in the patch failed.
          -1 javac 0m 11s root in the patch failed.
          +1 checkstyle 0m 2s the patch passed
          -1 mvnsite 0m 2s hadoop-common in the patch failed.
          -1 mvnsite 0m 2s hadoop-hdfs in the patch failed.
          -1 mvnsite 0m 2s hadoop-hdfs-client in the patch failed.
          -1 mvneclipse 0m 2s hadoop-common in the patch failed.
          -1 mvneclipse 0m 2s hadoop-hdfs in the patch failed.
          -1 mvneclipse 0m 3s hadoop-hdfs-client in the patch failed.
          -1 whitespace 0m 0s The patch has 956 line(s) that end in whitespace. Use git apply --whitespace=fix.
          -1 whitespace 0m 24s The patch has 9600 line(s) with tabs.
          -1 javadoc 0m 2s hadoop-common in the patch failed.
          -1 javadoc 0m 2s hadoop-hdfs in the patch failed.
          -1 javadoc 0m 2s hadoop-hdfs-client in the patch failed.
          -1 unit 0m 2s hadoop-common in the patch failed.
          -1 unit 0m 2s hadoop-hdfs in the patch failed.
          -1 unit 0m 2s hadoop-hdfs-client in the patch failed.
          0 asflicense 0m 3s ASF License check generated no output?
          7m 57s



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12789091/HDFS-3702.003.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build@2/patchprocess/apache-yetus-51a4bea/precommit/personality/hadoop.sh
          git revision trunk / 66289a3
          Default Java
          mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-mvninstall-root.txt
          compile https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-compile-root.txt
          mvnsite https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-mvnsite-hadoop-common-project_hadoop-common.txt
          mvnsite https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
          mvnsite https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-mvneclipse-hadoop-common-project_hadoop-common.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-mvneclipse-hadoop-hdfs-project_hadoop-hdfs.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-mvneclipse-hadoop-hdfs-project_hadoop-hdfs-client.txt
          javadoc https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-javadoc-hadoop-common-project_hadoop-common.txt
          javadoc https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
          javadoc https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client.txt
          mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt
          mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
          mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt
          compile https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-compile-root.txt
          cc https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-compile-root.txt
          mvnsite https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
          mvnsite https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
          mvnsite https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-mvneclipse-hadoop-common-project_hadoop-common.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-mvneclipse-hadoop-hdfs-project_hadoop-hdfs.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-mvneclipse-hadoop-hdfs-project_hadoop-hdfs-client.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/whitespace-eol.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/whitespace-tabs.txt
          javadoc https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-javadoc-hadoop-common-project_hadoop-common.txt
          javadoc https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
          javadoc https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14572/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14572/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14572/console
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          eddyxu Lei (Eddy) Xu added a comment -

          Hey, Andrew Wang

          Thanks for the suggestions. I changed the patch to address your comments. The main idea is that the client passes a NO_LOCAL_WRITE flag to BlockPlacementPolicy, which first tries to exclude the local DN. If not enough DNs can be obtained, BlockPlacementPolicy falls back to the normal procedure.
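          A minimal, self-contained sketch of that exclude-then-fall-back flow (hypothetical names and a stand-in chooser; the real BlockPlacementPolicyDefault is considerably more involved):

          ```java
          import java.util.ArrayList;
          import java.util.List;
          import java.util.Set;

          // Hypothetical sketch of the NO_LOCAL_WRITE placement flow: first try to
          // choose all replica targets with the local DN excluded; if that yields
          // too few targets, retry with the default behavior (local DN allowed).
          class PlacementSketch {
              static List<String> chooseTargets(List<String> liveNodes, String localNode,
                                                int replication, boolean noLocalWrite) {
                  if (noLocalWrite) {
                      List<String> remoteOnly = choose(liveNodes, Set.of(localNode), replication);
                      if (remoteOnly.size() == replication) {
                          return remoteOnly;  // enough remote DNs: done
                      }
                      // Fall back: not enough remote DNs, so allow the local DN again.
                  }
                  return choose(liveNodes, Set.of(), replication);
              }

              // Stand-in for the real chooser: take up to n nodes not in the excluded set.
              private static List<String> choose(List<String> liveNodes,
                                                 Set<String> excluded, int n) {
                  List<String> picked = new ArrayList<>();
                  for (String node : liveNodes) {
                      if (picked.size() == n) break;
                      if (!excluded.contains(node)) picked.add(node);
                  }
                  return picked;
              }
          }
          ```

          With four live nodes and replication 3, the local node is never chosen; with only three live nodes, the sketch falls back and the local node appears in the result.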

          Would you take another look?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 21 new or modified test files.
          0 mvndep 0m 32s Maven dependency ordering for branch
          +1 mvninstall 6m 40s trunk passed
          +1 compile 6m 14s trunk passed with JDK v1.8.0_74
          +1 compile 6m 47s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 13s trunk passed
          +1 mvnsite 2m 22s trunk passed
          +1 mvneclipse 0m 40s trunk passed
          +1 findbugs 5m 2s trunk passed
          +1 javadoc 2m 16s trunk passed with JDK v1.8.0_74
          +1 javadoc 3m 15s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 58s the patch passed
          +1 compile 5m 45s the patch passed with JDK v1.8.0_74
          +1 cc 5m 45s the patch passed
          +1 javac 5m 45s the patch passed
          +1 compile 6m 43s the patch passed with JDK v1.7.0_95
          +1 cc 6m 43s the patch passed
          +1 javac 6m 43s the patch passed
          -1 checkstyle 1m 13s root: patch generated 20 new + 684 unchanged - 7 fixed = 704 total (was 691)
          +1 mvnsite 2m 18s the patch passed
          +1 mvneclipse 0m 37s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 5m 52s the patch passed
          -1 javadoc 4m 48s hadoop-common-project_hadoop-common-jdk1.8.0_74 with JDK v1.8.0_74 generated 6 new + 1 unchanged - 0 fixed = 7 total (was 1)
          +1 javadoc 2m 19s the patch passed with JDK v1.8.0_74
          -1 javadoc 8m 25s hadoop-common-project_hadoop-common-jdk1.7.0_95 with JDK v1.7.0_95 generated 6 new + 13 unchanged - 0 fixed = 19 total (was 13)
          +1 javadoc 3m 16s the patch passed with JDK v1.7.0_95
          -1 unit 6m 38s hadoop-common in the patch failed with JDK v1.8.0_74.
          +1 unit 0m 51s hadoop-hdfs-client in the patch passed with JDK v1.8.0_74.
          -1 unit 55m 34s hadoop-hdfs in the patch failed with JDK v1.8.0_74.
          +1 unit 7m 18s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 1m 0s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 53m 27s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 25s Patch does not generate ASF License warnings.
          192m 20s



          Reason Tests
          JDK v1.8.0_74 Failed junit tests hadoop.security.ssl.TestReloadingX509TrustManager
            hadoop.hdfs.TestHFlush
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.qjournal.TestSecureNNWithQJM
            hadoop.hdfs.server.balancer.TestBalancer



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12792138/HDFS-3702.004.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux d30bc1e45d56 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 55f73a1
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_74 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14749/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc hadoop-common-project_hadoop-common-jdk1.8.0_74: https://builds.apache.org/job/PreCommit-HDFS-Build/14749/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt
          javadoc hadoop-common-project_hadoop-common-jdk1.7.0_95: https://builds.apache.org/job/PreCommit-HDFS-Build/14749/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14749/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14749/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14749/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14749/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14749/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14749/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14749/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14749/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          nkeywal Nicolas Liochon added a comment -

          It's really great to see this progressing.
          For a write-ahead log: fewer writes and fewer disk flushes. And for a single region server failure: no time spent going to a dead node for the recoverLease step.
          I had a quick look at the patch and it seems OK to me, but you guys know this code better than I do.

          eddyxu Lei (Eddy) Xu added a comment -

          Fixed the relevant checkstyle and javadoc warnings.

          Thanks a lot for the comments, Nicolas Liochon. Glad to help out.

          andrew.wang Andrew Wang added a comment -

          Hey Eddy, thanks for reworking, a few comments:

          • Is BlockPlacementFlag being used in hadoop-common? Seems like it should go in hadoop-hdfs instead.
          • Can we name BlockPlacementFlag "AddBlockFlag" instead? That's more future-proof, since it doesn't restrict us to just BPP-related flags.
          • Can we hook into BlockPlacementPolicyDefault the same way as HDFS-4946? i.e. where the preferLocalNode boolean is used. It'd be good to implement these two features the same way, though it does require threading the state all the way down.
          • Nit: ClientProtocol "advice" -> "advise", though this might change after renaming to AddBlockFlag.
          eddyxu Lei (Eddy) Xu added a comment -

          Thanks much for the quick reviews, Andrew Wang.

          Can we hook into BlockPlacementPolicyDefault the same way as HDFS-4946?

          As we discussed offline, the sampling logic here is different from HDFS-4946's. HDFS-4946 tries to obtain a local node first and, if that fails, randomly picks from the entire DN pool, so the chosen DN can still be the local node. This patch, however, requires the chosen DNs to come exclusively from the rest of the DN pool. So the HDFS-4946 logic does not apply here, mostly because of its fallback code.
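          The distinction can be made concrete with a toy sketch (hypothetical names, not the real code): an HDFS-4946-style fallback samples from the full pool, so the local node remains a possible outcome, while this patch's approach samples only from the pool minus the local node.

          ```java
          import java.util.ArrayList;
          import java.util.List;
          import java.util.Random;

          // Toy contrast of the two sampling strategies discussed above.
          class SamplingSketch {
              // HDFS-4946-style fallback: random pick from the entire pool,
              // so the local node can still be chosen.
              static String pickFromAll(List<String> pool, Random rnd) {
                  return pool.get(rnd.nextInt(pool.size()));
              }

              // This patch's requirement: pick only from the pool minus the local node.
              static String pickExcludingLocal(List<String> pool, String local, Random rnd) {
                  List<String> remote = new ArrayList<>(pool);
                  remote.remove(local);
                  return remote.get(rnd.nextInt(remote.size()));
              }
          }
          ```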

          Updated the patch to address the rest of the comments.

          andrew.wang Andrew Wang added a comment -

          Thanks for the clarification Eddy, sounds good, just some nitty doc things before the final commit, otherwise +1 pending:

          • ClientProtocol and AddBlockFlags, the javadoc still talks about block allocation flags and BlockManager, but really these are just generic AddBlock flags. Currently we only use them to pass to BPPDefault, but in the future the flags could be used for anything.
          • Same comment applies to name of variable allocFlags, rename to addBlockFlags to be more generic?
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 21 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 6m 39s trunk passed
          +1 compile 6m 30s trunk passed with JDK v1.8.0_74
          +1 compile 6m 49s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 10s trunk passed
          +1 mvnsite 2m 20s trunk passed
          +1 mvneclipse 0m 41s trunk passed
          +1 findbugs 5m 13s trunk passed
          +1 javadoc 2m 22s trunk passed with JDK v1.8.0_74
          +1 javadoc 3m 14s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 14s Maven dependency ordering for patch
          +1 mvninstall 2m 0s the patch passed
          +1 compile 6m 5s the patch passed with JDK v1.8.0_74
          +1 cc 6m 5s the patch passed
          +1 javac 6m 5s the patch passed
          +1 compile 6m 45s the patch passed with JDK v1.7.0_95
          +1 cc 6m 45s the patch passed
          +1 javac 6m 45s the patch passed
          -1 checkstyle 1m 11s root: patch generated 9 new + 684 unchanged - 7 fixed = 693 total (was 691)
          +1 mvnsite 2m 19s the patch passed
          +1 mvneclipse 0m 40s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 5m 50s the patch passed
          -1 javadoc 4m 46s hadoop-common-project_hadoop-common-jdk1.8.0_74 with JDK v1.8.0_74 generated 6 new + 1 unchanged - 0 fixed = 7 total (was 1)
          +1 javadoc 2m 18s the patch passed with JDK v1.8.0_74
          -1 javadoc 8m 18s hadoop-common-project_hadoop-common-jdk1.7.0_95 with JDK v1.7.0_95 generated 6 new + 13 unchanged - 0 fixed = 19 total (was 13)
          +1 javadoc 3m 11s the patch passed with JDK v1.7.0_95
          +1 unit 6m 57s hadoop-common in the patch passed with JDK v1.8.0_74.
          +1 unit 0m 51s hadoop-hdfs-client in the patch passed with JDK v1.8.0_74.
          -1 unit 59m 3s hadoop-hdfs in the patch failed with JDK v1.8.0_74.
          +1 unit 7m 33s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 1m 1s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 58m 31s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 27s Patch does not generate ASF License warnings.
          201m 59s



          Reason Tests
          JDK v1.8.0_74 Failed junit tests hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
            hadoop.hdfs.TestHFlush
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.TestHFlush



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12792334/HDFS-3702.005.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 4182c143d5aa 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2e040d3
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_74 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14760/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc hadoop-common-project_hadoop-common-jdk1.8.0_74: https://builds.apache.org/job/PreCommit-HDFS-Build/14760/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt
          javadoc hadoop-common-project_hadoop-common-jdk1.7.0_95: https://builds.apache.org/job/PreCommit-HDFS-Build/14760/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14760/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14760/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14760/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14760/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14760/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14760/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 21 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 6m 47s trunk passed
          +1 compile 6m 5s trunk passed with JDK v1.8.0_74
          +1 compile 6m 57s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 16s trunk passed
          +1 mvnsite 2m 30s trunk passed
          +1 mvneclipse 0m 41s trunk passed
          +1 findbugs 5m 11s trunk passed
          +1 javadoc 2m 24s trunk passed with JDK v1.8.0_74
          +1 javadoc 3m 18s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 2m 1s the patch passed
          +1 compile 6m 12s the patch passed with JDK v1.8.0_74
          +1 cc 6m 12s the patch passed
          +1 javac 6m 12s the patch passed
          +1 compile 6m 59s the patch passed with JDK v1.7.0_95
          +1 cc 6m 59s the patch passed
          +1 javac 6m 59s the patch passed
          -1 checkstyle 2m 4s root: patch generated 9 new + 684 unchanged - 7 fixed = 693 total (was 691)
          +1 mvnsite 2m 28s the patch passed
          +1 mvneclipse 0m 40s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 6m 1s the patch passed
          -1 javadoc 5m 2s hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74 with JDK v1.8.0_74 generated 7 new + 1 unchanged - 0 fixed = 8 total (was 1)
          +1 javadoc 2m 26s the patch passed with JDK v1.8.0_74
          -1 javadoc 8m 39s hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95 with JDK v1.7.0_95 generated 7 new + 1 unchanged - 0 fixed = 8 total (was 1)
          +1 javadoc 3m 17s the patch passed with JDK v1.7.0_95
          +1 unit 7m 19s hadoop-common in the patch passed with JDK v1.8.0_74.
          +1 unit 0m 52s hadoop-hdfs-client in the patch passed with JDK v1.8.0_74.
          -1 unit 75m 47s hadoop-hdfs in the patch failed with JDK v1.8.0_74.
          +1 unit 8m 12s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 1m 10s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 77m 21s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 30s Patch does not generate ASF License warnings.
          240m 57s



          Reason Tests
          JDK v1.8.0_74 Failed junit tests hadoop.hdfs.TestHFlush
            hadoop.hdfs.server.datanode.TestDataNodeMetrics
            hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.shortcircuit.TestShortCircuitCache



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12792363/HDFS-3702.006.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 9cb58ddf0b6c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2e040d3
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_74 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14763/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74: https://builds.apache.org/job/PreCommit-HDFS-Build/14763/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74.txt
          javadoc hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95: https://builds.apache.org/job/PreCommit-HDFS-Build/14763/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14763/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14763/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14763/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14763/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14763/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14763/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          eddyxu Lei (Eddy) Xu added a comment -

          Updated the comments for ClientProtocol and AddBlockFlag.

          The test failures are not relevant. They can be reproduced in trunk.

          arpitagarwal Arpit Agarwal added a comment - - edited

          Lei (Eddy) Xu, given the size of the patch, could you please summarize your approach? Perhaps a one-page design note would be a good idea.

          Please hold off on committing this until then.

          stack stack added a comment -

          Excellent. +1.

          Below are nits for if you make a new version of the patch:

          Be more forthright in the doc on NO_LOCAL_WRITE. Change "Advice the block not being written to the local DataNode which is on the same host as the client." to "Advise that a block replica NOT be written to the local DataNode where 'local' means the same host as the client is being run on."

          I suppose there have to be two declarations of the enum NO_LOCAL_WRITE; i.e. we have to do the conversion from CreateFlag.NO_LOCAL_WRITE to AddBlockFlag.NO_LOCAL_WRITE.

          The protected EnumSet<AddBlockFlag> addBlockFlags() method is an accessor? Should it be called getAddBlockFlags()?

          What's happening here:

              if (!avoidLocalNode || results.size() < numOfReplicas) {
                LOG.debug("Fallback to use the default block placement.");
          

          If results.size() < numOfReplicas, we will start writing locally? Add this to the release note, I'd say.

          eddyxu Lei (Eddy) Xu added a comment -

          Hey, Arpit Agarwal . Here is the design doc.

          The basic idea is very simple: BlockPlacementPolicy first tries to allocate replicas with the local node added to the excluded nodes; if it cannot obtain sufficient replicas, it falls back to the normal path. The rest of this patch changes the signatures of the related functions.

          Would it be OK? Thanks.

          eddyxu Lei (Eddy) Xu added a comment -

          Thanks a lot for the suggestions, stack. Updated the patch as you suggested.

          there has to be two declarations of the enum NO_LOCAL_WRITE

          Yes, CreateFlag.NO_LOCAL_WRITE is the one visible to users, which should be used by applications like HBase. AddBlockFlag should only be used within HDFS, I think.

          Add this to release note I'd say.

          Done.
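          The two-enum arrangement described above, a user-visible flag translated into an internal addBlock flag, can be sketched as follows. The enums here are illustrative stand-ins, not the real CreateFlag and AddBlockFlag definitions:

```java
import java.util.EnumSet;

class FlagConversionSketch {
    // Stand-ins for the user-facing CreateFlag and the HDFS-internal
    // AddBlockFlag discussed in the comments (not the actual enums).
    enum CreateFlag { CREATE, OVERWRITE, NO_LOCAL_WRITE }
    enum AddBlockFlag { NO_LOCAL_WRITE }

    /** Translate the user-visible flag set into internal addBlock flags. */
    static EnumSet<AddBlockFlag> toAddBlockFlags(EnumSet<CreateFlag> flags) {
        EnumSet<AddBlockFlag> out = EnumSet.noneOf(AddBlockFlag.class);
        if (flags.contains(CreateFlag.NO_LOCAL_WRITE)) {
            out.add(AddBlockFlag.NO_LOCAL_WRITE);
        }
        return out;
    }
}
```

          Keeping the conversion in one place means an application like HBase only ever sees the user-facing flag, while everything past the client boundary works with the internal one.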

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 21 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 6m 24s trunk passed
          +1 compile 5m 34s trunk passed with JDK v1.8.0_74
          +1 compile 6m 31s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 12s trunk passed
          +1 mvnsite 2m 18s trunk passed
          +1 mvneclipse 0m 40s trunk passed
          +1 findbugs 5m 3s trunk passed
          +1 javadoc 2m 17s trunk passed with JDK v1.8.0_74
          +1 javadoc 3m 15s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 14s Maven dependency ordering for patch
          +1 mvninstall 1m 57s the patch passed
          +1 compile 6m 52s the patch passed with JDK v1.8.0_74
          +1 cc 6m 52s the patch passed
          +1 javac 6m 52s the patch passed
          +1 compile 7m 0s the patch passed with JDK v1.7.0_95
          +1 cc 7m 0s the patch passed
          +1 javac 7m 0s the patch passed
          -1 checkstyle 1m 9s root: patch generated 9 new + 685 unchanged - 7 fixed = 694 total (was 692)
          +1 mvnsite 2m 19s the patch passed
          +1 mvneclipse 0m 40s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 5m 47s the patch passed
          -1 javadoc 4m 43s hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74 with JDK v1.8.0_74 generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1)
          +1 javadoc 2m 17s the patch passed with JDK v1.8.0_74
          -1 javadoc 8m 15s hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95 with JDK v1.7.0_95 generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1)
          +1 javadoc 3m 11s the patch passed with JDK v1.7.0_95
          +1 unit 6m 53s hadoop-common in the patch passed with JDK v1.8.0_74.
          +1 unit 0m 52s hadoop-hdfs-client in the patch passed with JDK v1.8.0_74.
          -1 unit 57m 28s hadoop-hdfs in the patch failed with JDK v1.8.0_74.
          +1 unit 6m 59s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 0m 58s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 52m 30s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 25s Patch does not generate ASF License warnings.
          192m 51s



          Reason Tests
          JDK v1.8.0_74 Failed junit tests hadoop.hdfs.TestDFSUpgradeFromImage
            hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.TestHFlush



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12792555/HDFS-3702.007.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux c642f1fddb53 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 318c9b6
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_74 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14780/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74: https://builds.apache.org/job/PreCommit-HDFS-Build/14780/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74.txt
          javadoc hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95: https://builds.apache.org/job/PreCommit-HDFS-Build/14780/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14780/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14780/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14780/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14780/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14780/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14780/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          arpitagarwal Arpit Agarwal added a comment -

          Thanks Lei (Eddy) Xu. Before we get into specifics of this approach I want to mention that HDFS now supports storage policies. Was a storage policy based approach considered? It could simplify the changes to HDFS. You won't need any application changes to HBase since a cluster installer can set this policy ahead of time on the WAL root directory and it will take effect for all new blocks.

          I am also curious about the answer to Devaraj's question. HDFS-2576 was added specifically for HBase. Can it address your use case? This avoids any changes to HDFS.

          The design note is rather concise, so it didn't answer my questions. The NameNode ignores this CreateFlag, so it will only work for DFSClient users, e.g., not for WebHDFS. That will be confusing to developers. We should also document that the flag is advisory. Is it honored for appends? How does it affect block placement policy - is the local rack still preferred for the first replica?

          eddyxu Lei (Eddy) Xu added a comment - - edited

          Thanks, Arpit Agarwal.

          This patch shares many similarities with HDFS-4946, in the following ways:

          This patch is orthogonal to the storage policy and to the original block placement policy (i.e., the rack-aware policy). The storage policy, local-rack preference, etc. are all honored. This patch basically adds one special case to excludeNodes. The only behavior it changes is the opposite case of HDFS-4946, but on a per-client / per-DFSOutputStream basis. For cases similar to this JIRA, using a storage policy alone does not necessarily provide better data availability (i.e., HBase would still write to the local SSD).

          I am also curious about the answer to Devaraj's question. HDFS-2576 was added specifically for HBase. Can it address your use case?

          To some extent, HDFS-2576 requires each DFSClient to hold the rest of the cluster in favoredNodes to achieve the same purpose. It would also raise questions like: would holding only a subset of DNs in favoredNodes affect the efficiency of data placement? Should the DFSClient constantly refresh this list of nodes? A similar argument applies to HDFS-4946 as well.

          The NameNode ignores this CreateFlag.

          I am not sure that I understand this question. It is still the BlockManager in the NameNode that makes the final decision on block placement (please see my first point). CreateFlag is just a user-visible flag that provides the hints. These (and possibly future) hints are sent to the NameNode through ClientNamenodeProtocol RPCs and processed by the NameNode.

          it will only work for DFSClient users e.g. not for WebHDFS.

          At this time, I am not certain that it will not work for WebHDFS. If that is the case, can we file a follow-up JIRA to fix it once the basic functionality is in place?

          Is it honored for appends?

          No, it only works for new blocks.

          I hope that the above explanations answer your questions, Arpit Agarwal. Looking forward to hearing from you.
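          The fallback behavior described in the release note (try all-remote placement first, then fall back to the default policy if not enough remote replicas can be found) can be sketched in self-contained Java. This is a simplified stand-in, not the actual BlockPlacementPolicyDefault API: node names are plain strings and the selection logic is trivialized.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class NoLocalWritePlacementSketch {
    /**
     * Chooses {@code replication} targets. When {@code noLocalWrite} is set,
     * the writer's local node is first added to the excluded set; if the
     * cluster cannot supply enough remote targets, we retry without the
     * extra exclusion, matching the documented fallback to the default
     * placement policy (which may place one replica locally).
     */
    static List<String> chooseTargets(List<String> cluster, String localNode,
                                      int replication, boolean noLocalWrite) {
        Set<String> excluded = new HashSet<>();
        if (noLocalWrite) {
            excluded.add(localNode);
        }
        List<String> targets = choose(cluster, excluded, replication);
        if (targets.size() < replication && noLocalWrite) {
            // Fallback: retry without excluding the local node.
            targets = choose(cluster, new HashSet<>(), replication);
        }
        return targets;
    }

    /** Trivial stand-in for real target selection: first N non-excluded nodes. */
    private static List<String> choose(List<String> cluster,
                                       Set<String> excluded, int replication) {
        List<String> targets = new ArrayList<>();
        for (String node : cluster) {
            if (targets.size() == replication) break;
            if (!excluded.contains(node)) targets.add(node);
        }
        return targets;
    }

    public static void main(String[] args) {
        // Enough remote nodes: the local node dn1 is skipped entirely.
        System.out.println(chooseTargets(
            List.of("dn1", "dn2", "dn3", "dn4"), "dn1", 3, true));
        // Only 3 nodes total: falls back and uses the local node.
        System.out.println(chooseTargets(
            List.of("dn1", "dn2", "dn3"), "dn1", 3, true));
    }
}
```

          Note the hint stays advisory: the client never fails a write because of it; the NameNode simply degrades to default placement when the cluster is too small.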

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 12s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 21 new or modified test files.
          0 mvndep 0m 19s Maven dependency ordering for branch
          +1 mvninstall 7m 38s trunk passed
          +1 compile 12m 6s trunk passed with JDK v1.8.0_74
          +1 compile 9m 58s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 11s trunk passed
          +1 mvnsite 2m 27s trunk passed
          +1 mvneclipse 0m 37s trunk passed
          +1 findbugs 5m 44s trunk passed
          +1 javadoc 2m 38s trunk passed with JDK v1.8.0_74
          +1 javadoc 3m 40s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 14s Maven dependency ordering for patch
          +1 mvninstall 2m 3s the patch passed
          +1 compile 7m 39s the patch passed with JDK v1.8.0_74
          +1 cc 7m 39s the patch passed
          +1 javac 7m 39s the patch passed
          +1 compile 7m 26s the patch passed with JDK v1.7.0_95
          -1 cc 17m 22s root-jdk1.7.0_95 with JDK v1.7.0_95 generated 2 new + 18 unchanged - 2 fixed = 20 total (was 20)
          +1 cc 7m 26s the patch passed
          +1 javac 7m 26s the patch passed
          -1 checkstyle 1m 13s root: patch generated 9 new + 684 unchanged - 7 fixed = 693 total (was 691)
          +1 mvnsite 2m 22s the patch passed
          +1 mvneclipse 0m 40s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 6m 0s the patch passed
          -1 javadoc 5m 11s hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74 with JDK v1.8.0_74 generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1)
          +1 javadoc 2m 39s the patch passed with JDK v1.8.0_74
          -1 javadoc 8m 51s hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95 with JDK v1.7.0_95 generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1)
          +1 javadoc 3m 22s the patch passed with JDK v1.7.0_95
          +1 unit 13m 38s hadoop-common in the patch passed with JDK v1.8.0_74.
          +1 unit 1m 0s hadoop-hdfs-client in the patch passed with JDK v1.8.0_74.
          -1 unit 71m 16s hadoop-hdfs in the patch failed with JDK v1.8.0_74.
          +1 unit 8m 32s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 1m 1s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 62m 49s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 25s Patch does not generate ASF License warnings.
          240m 28s



          Reason Tests
          JDK v1.8.0_74 Failed junit tests hadoop.hdfs.server.blockmanagement.TestBlockManager
            hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
            hadoop.hdfs.TestFileAppend
            hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
            hadoop.hdfs.TestDFSUpgradeFromImage
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.TestRollingUpgrade
            hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
            hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12792573/HDFS-3702.008.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 53888b46302a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9a79b73
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_74 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          cc root-jdk1.7.0_95: https://builds.apache.org/job/PreCommit-HDFS-Build/14781/artifact/patchprocess/diff-compile-cc-root-jdk1.7.0_95.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14781/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74: https://builds.apache.org/job/PreCommit-HDFS-Build/14781/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74.txt
          javadoc hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95: https://builds.apache.org/job/PreCommit-HDFS-Build/14781/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14781/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14781/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14781/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14781/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14781/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14781/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 21 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 6m 36s trunk passed
          +1 compile 5m 54s trunk passed with JDK v1.8.0_74
          +1 compile 6m 44s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 14s trunk passed
          +1 mvnsite 2m 19s trunk passed
          +1 mvneclipse 0m 40s trunk passed
          +1 findbugs 5m 9s trunk passed
          +1 javadoc 2m 19s trunk passed with JDK v1.8.0_74
          +1 javadoc 3m 12s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 14s Maven dependency ordering for patch
          +1 mvninstall 1m 58s the patch passed
          +1 compile 5m 50s the patch passed with JDK v1.8.0_74
          +1 cc 5m 50s the patch passed
          +1 javac 5m 50s the patch passed
          +1 compile 6m 43s the patch passed with JDK v1.7.0_95
          +1 cc 6m 43s the patch passed
          +1 javac 6m 43s the patch passed
          -1 checkstyle 1m 15s root: patch generated 9 new + 685 unchanged - 7 fixed = 694 total (was 692)
          +1 mvnsite 2m 20s the patch passed
          +1 mvneclipse 0m 40s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 5m 56s the patch passed
          -1 javadoc 4m 51s hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74 with JDK v1.8.0_74 generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1)
          +1 javadoc 2m 20s the patch passed with JDK v1.8.0_74
          -1 javadoc 8m 24s hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95 with JDK v1.7.0_95 generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1)
          +1 javadoc 3m 12s the patch passed with JDK v1.7.0_95
          +1 unit 6m 54s hadoop-common in the patch passed with JDK v1.8.0_74.
          +1 unit 0m 53s hadoop-hdfs-client in the patch passed with JDK v1.8.0_74.
          -1 unit 55m 39s hadoop-hdfs in the patch failed with JDK v1.8.0_74.
          +1 unit 7m 11s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 0m 58s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 54m 6s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 25s Patch does not generate ASF License warnings.
          192m 48s



          Reason Tests
          JDK v1.8.0_74 Failed junit tests hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
            hadoop.hdfs.TestHFlush
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.TestHFlush



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12792573/HDFS-3702.008.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 59bf049ed643 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 79961ec
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_74 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14782/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74: https://builds.apache.org/job/PreCommit-HDFS-Build/14782/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74.txt
          javadoc hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95: https://builds.apache.org/job/PreCommit-HDFS-Build/14782/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14782/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14782/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14782/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14782/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14782/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14782/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          stack stack added a comment -

          I am also curious about the answer to Devaraj's question. HDFS-2576 was added specifically for HBase. Can it address your use case? This avoids any changes to HDFS.

          Arpit Agarwal
          On Devaraj Das's question from nearly three years ago about why not HDFS-2576: the 'favored nodes' feature was never fully plumbed into HBase, so to my knowledge no one ever used it. While there are rumors that our brothers and sisters at Y! are in the process of reviving it, the original implementors of 'favored nodes', FB, now consider it a 'mistake' [1] and state they'll "...have a party when FB no longer has this operational nightmare." Given this report, the hbase community would be wary of going a 'favored nodes' route.

          IIUC, to make use of it in this case, the 'client' would have to have NN-like awareness of cluster members and pick placement as the NN would, excluding localhost? It seems like a lot to ask of the client/user of dfsclient.

          1. https://issues.apache.org/jira/browse/HBASE-6721?focusedCommentId=14720273&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14720273
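          stack's point about the client-side burden can be made concrete: to emulate "no local write" through favored nodes, the client itself would have to track DataNode membership and compute the complement set on every create. The sketch below is a hypothetical illustration, not an actual DFSClient API; the node list and its refresh mechanism are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class FavoredNodesBurdenSketch {
    /** The client must somehow learn and refresh this list itself. */
    private List<String> knownDatanodes = new ArrayList<>();

    void refreshClusterView(List<String> fromSomeExternalSource) {
        // Stale entries here silently skew placement -- the kind of
        // operational hazard described in the comment above.
        knownDatanodes = new ArrayList<>(fromSomeExternalSource);
    }

    /**
     * Favored nodes = every known DataNode except the local host,
     * recomputed for each file the client creates.
     */
    List<String> favoredNodesExcludingLocal(String localHost) {
        List<String> favored = new ArrayList<>();
        for (String dn : knownDatanodes) {
            if (!dn.equals(localHost)) favored.add(dn);
        }
        return favored;
    }
}
```

          By contrast, a create-time hint pushes this bookkeeping to the NameNode, which already has an authoritative view of cluster membership.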

          arpitagarwal Arpit Agarwal added a comment -

          Thanks stack, Lei (Eddy) Xu.

          It would be great if we can avoid one-off createFile parameters. What do you think of per-target block placement policies as proposed in this comment, e.g. setting a custom placement policy for /hbase/.logs/? The implementation would be easier now that we have extended attributes.

          andrew.wang Andrew Wang added a comment -

          Since we're just adding a flag to the existing create flags enumset, it doesn't affect our API signature. Note there are no changes in FileSystem or DistributedFileSystem. It also doesn't involve any NN memory overhead, which is a nice bonus compared to a storage policy with xattrs.

I also like this scheme since it gives us a lot of flexibility at the application level. For example, applications like distcp or the httpfs and nfs gateway might always want this flag on (no matter the destination folder), to avoid data load imbalance. For HBase's WAL, it would give them the flexibility to redo their filesystem layout, for instance if all WALs no longer go in a single "/logs" directory.

          Overall, it feels a lot like Linux-y filesystem hints like fadvise / madvise, and a good use of flags.
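Since this is an argument about API shape, a toy model may help. The sketch below is not Hadoop code — the enum and method are invented stand-ins for org.apache.hadoop.fs.CreateFlag and FileSystem#create — and only illustrates the point above: adding a new constant to a flags EnumSet leaves every existing create(...) signature unchanged.

```java
import java.util.EnumSet;

// Toy model (NOT actual Hadoop code) of a flags-based create API.
// Adding NO_LOCAL_WRITE to the enum changes no method signature anywhere.
public class CreateFlagModel {
    // Stand-in for org.apache.hadoop.fs.CreateFlag.
    enum CreateFlag { CREATE, OVERWRITE, APPEND, NO_LOCAL_WRITE }

    // Stand-in for FileSystem#create: same signature with or without the
    // new flag, so callers that ignore it are completely unaffected.
    static String create(String path, EnumSet<CreateFlag> flags) {
        boolean avoidLocal = flags.contains(CreateFlag.NO_LOCAL_WRITE);
        return path + (avoidLocal ? " [avoid local DN]" : " [default placement]");
    }

    public static void main(String[] args) {
        System.out.println(create("/hbase/WALs/wal.1",
                EnumSet.of(CreateFlag.CREATE, CreateFlag.NO_LOCAL_WRITE)));
        System.out.println(create("/data/file",
                EnumSet.of(CreateFlag.CREATE)));
    }
}
```

Filesystems that do not understand the hint (e.g. LocalFileSystem) would simply never look at the flag, which is the "no harm done" behavior discussed further down the thread.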

          arpitagarwal Arpit Agarwal added a comment -

          Hi Andrew, there is still much plumbing required to communicate the bit through to the placement policy. Path-based policies let HBase and other apps experiment with alternatives with no churn e.g. if avoidLocalNode doesn't work out as expected. Are we confident this is the last time HBase or any other app will need custom placement hints?

          andrew.wang Andrew Wang added a comment -

          Hi Arpit,

          We've got +1's from HBase developers like Stack and Nicolas, so from that perspective it sounds like they approve of the API in this patch (your "churn" concern?). Could you elaborate about your concern about adding more placement hints in the future? Since there's the flags enumset, it seems like we can keep using more flags for additional hints. Stack/nkeywal can comment more, but I think this is also one of their biggest block placement requests besides co-location of HFiles for a region, which will be a big effort for both of our projects.

          arpitagarwal Arpit Agarwal added a comment -

          Sure, by churn I mean the code changes in HDFS to propagate hints from the client to BlockPlacementPolicy. Even with the enumset you cannot add new flags without HDFS changes.

          I think this is also one of their biggest block placement requests besides co-location of HFiles for a region, which will be a big effort for both of our projects.

          This is all the more reason to think of a general solution since a flag may not be able to capture this kind of hint.

          andrew.wang Andrew Wang added a comment -

          Thanks for clarifying Arpit!

          Unless block placement policy and storage policies are made totally pluggable, I think it's unlikely we'll ever be able to add new kinds of BPP hints without changes in HDFS. BPP is somewhat pluggable today, but out-of-tree implementations are pretty discouraged for maintenance reasons, and last I checked storage policies are still hardcoded.

          Regarding co-location, my point was that it's unlikely we can express the colocation constraints through any of our existing APIs, and it will require integration work by downstreams anyway. The scope of colocation is much larger than this JIRA though, so seems like something we can discuss further somewhere else.

          Perhaps most compelling, given that this is just a hint, we have the flexibility of turning it into a no-op later on if we get downstream feedback about the API. Sound reasonable?

          szetszwo Tsz Wo Nicholas Sze added a comment -

          How about adding the local node to the excludedNodes?

          eddyxu Lei (Eddy) Xu added a comment -

          Arpit Agarwal and Tsz Wo Nicholas Sze. Thanks for these useful suggestions.

I had “per-block block placement hint” and “putting the local node into the excludedNodes” in patches 001 and 002, respectively. But because of the performance concerns and the need for fallback capability mentioned in the previous comments, I changed the patch to the current solution, which re-uses the fallback code in BPP.
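The fallback behavior the release note describes could be sketched as below. All names are invented for illustration; the real logic lives in the block placement policy and is considerably more involved (racks, storage types, excluded-node sets, etc.):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (names invented) of the placement fallback in this patch:
// first try to satisfy replication from DataNodes other than the writer's
// local node; only if that fails, fall back to default placement, which
// may put one replica on the local node.
public class NoLocalWritePlacement {
    static List<String> chooseTargets(List<String> liveNodes, String localNode,
                                      int replication, boolean noLocalWrite) {
        if (noLocalWrite) {
            List<String> remote = new ArrayList<>(liveNodes);
            remote.remove(localNode);                 // exclude the writer's node
            if (remote.size() >= replication) {
                return remote.subList(0, replication); // all replicas remote
            }
            // Not enough remote nodes: fall through to default placement.
        }
        return liveNodes.subList(0, Math.min(replication, liveNodes.size()));
    }
}
```

The key property — and the reason this is only a hint — is that a small or degraded cluster still gets a full set of replicas instead of a write failure.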

          eddyxu Lei (Eddy) Xu added a comment -

          Hey, Guys

          Based on stack and Andrew Wang's +1s, I will commit this by end of day if there is no further comment.

          szetszwo Tsz Wo Nicholas Sze added a comment -

Please do not commit the patch yet. Some ideas below.

          In DistributedFileSystem, we already have create(..) and append(..) methods to support favoredNodes. How about we also add a new parameter disfavoredNodes? It supports a more general API – we could set disfavoredNodes to one or more hosts. BPP can fallback to these nodes if the other nodes are unavailable.
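For illustration only — disfavoredNodes was never implemented, and every name below is hypothetical — the semantics being proposed would look roughly like this: prefer any node outside the disfavored set, and fall back to disfavored nodes only when nothing else is available.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the proposed (never-adopted) disfavoredNodes
// semantics: disfavored nodes are soft-avoided, not hard-excluded.
public class DisfavoredNodesSketch {
    static List<String> chooseTargets(List<String> liveNodes,
                                      Set<String> disfavored, int replication) {
        List<String> chosen = new ArrayList<>();
        // Pass 1: take nodes outside the disfavored set.
        for (String n : liveNodes) {
            if (chosen.size() < replication && !disfavored.contains(n)) chosen.add(n);
        }
        // Pass 2: use disfavored nodes only if replication is still unmet.
        for (String n : liveNodes) {
            if (chosen.size() < replication && !chosen.contains(n)) chosen.add(n);
        }
        return chosen;
    }
}
```

Under this model, NO_LOCAL_WRITE would be the special case where the disfavored set contains only the writer's own DataNode.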

          stack stack added a comment -

          Arpit Agarwal You'd like to purge all of the CreateFlag parameters Arpit? CreateFlag seems to be how other filesystems do color on a particular creation and this patch was able to make use of it and save changing a bunch of method signatures. Seems kinda useful? And seems like we could get more flags on CreateFlag down the road (ASYNC?).

          What do you think of per-target block placement policies as proposed in this comment e.g. set a custom placement policy for /hbase/.logs/.

Seems like a grand idea (then and now) being able to do it for a whole class of files based off their location in HDFS. Would this be instead of this patch's decoration on CreateFlag? I'd suggest not. I like this patch. It gets us what we want nicely. No need for an admin operator to remember to set attributes on specific dirs (99% won't); we can just do the code change in hbase (and rip out the hacks we have had in place for years now that have been our workaround in the absence of this patch).

          Thanks

          stack stack added a comment -

          Tsz Wo Nicholas Sze So, the suggestion is changing public signatures to add a new parameter (Or adding a new override where there are already 6)? For a client to make effective use of disfavoredNodes, they would have to figure the exact name the NN is using and volunteer it in this disfavoredNodes list? Or could they just write 'localhost' and let NN figure it out? Do you foresee any other use for this disfavoredNodes parameter other than for the exclusion of 'localnode'? Thanks Nicolas.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          > So, the suggestion is changing public signatures to add a new parameter (Or adding a new override where there are already 6)?

          For compatibility reason, we probably have to add a new override. For better usability, we may add a Builder.

          > For a client to make effective use of disfavoredNodes, they would have to figure the exact name the NN is using and volunteer it in this disfavoredNodes list? Or could they just write 'localhost' and let NN figure it out?

          We should support 'localhost' in the API. DFSClient or NN may replace 'localhost' with the corresponding name.

          > Do you foresee any other use for this disfavoredNodes parameter other than for the exclusion of 'localnode'?

          Yes, disfavoredNodes seems useful. For example, some application may want to distribute its files uniformly in a cluster. Then, it could specify the previously allocated DNs as the disfavoredNodes.

          andrew.wang Andrew Wang added a comment -

          stack from a downstream perspective, could you comment on the usability of providing node lists to this API? This always felt hacky to me, since ultimately the NN is the one who knows about the cluster state and DN names and block location constraints. My impression was that tracking this in HBase was onerous, and is part of why favored nodes fell out of favor.

          For example, some application may want to distribute its files uniformly in a cluster....

          The main reason for skew I've seen is the local writer case, which this patch attempts to address. It'll still bias to the local rack, but I doubt that'll be an issue in practice, and if it is we can also add another flag for fully random distribution.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          > ... we can also add another flag for fully random distribution.

It seems not a good idea to keep adding flags. BTW, fully random distribution is not the same as uniform distribution.

          stack stack added a comment -

          For uniform distribution of files over a cluster, I think users would prefer that DFSClient managed it for them (a new flag on CreateFlag?) rather than do calculation figuring how to populate favoredNodes and disfavoredNodes using imperfect knowledge of the cluster, something the NN will always do better at.

          Unless you have other possible uses, disfavoredNodes seems like a more intrusive and roundabout route – with its overrides, possible builders, and global interpretation of 'localhost' string – to the clean flag this patch carries?

What do you think, Tsz Wo Nicholas Sze? Thanks Nicolas.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          > For uniform distribution of files over a cluster, I think users would prefer that DFSClient managed it for them (a new flag on CreateFlag?) ...

          How would DFSClient know which nodes are disfavored nodes? How could it enforce disfavored nodes?

          > ... disfavoredNodes seems like a more intrusive and roundabout route – with its overrides, possible builders, and global interpretation of 'localhost' string – to the clean flag this patch carries?

          I disagree. Since we already have favoredNodes, adding disfavoredNodes seems more natural than adding a flag.

          In addition, the new FileSystem CreateFlag does not look clean to me since it is too specific to HDFS. How would other FileSystems such as LocalFileSystem implement it?

          arpitagarwal Arpit Agarwal added a comment -

          No need for an admin operator to remember to set attributes on specific dirs (99% won't);

          Hi stack, the attribute could be set by an installer script or an API call at process startup. Since you agree pluggable policies are good to have eventually, CreateFlag becomes a stopgap. However this will take more time so if you think HBase needs a solution now, I'm -0. Thanks.

          AddBlockFlag should be tagged as @InterfaceAudience.Private if we proceed with the .008 patch.

          stack stack added a comment -

          How would DFSClient know which nodes are disfavored nodes? How could it enforce disfavored nodes?

You postulated an application that wanted to '....distribute its files uniformly in a cluster.' I was just trying to suggest that users would prefer that HDFS would just do it for them. HDFS would know how to do it better being the arbiter of what is happening in the cluster. An application will do a poor job by comparison. 'distribute its files uniformly...' sounds like a good feature to implement with a block placement policy.

          Since we already have favoredNodes, adding disfavoredNodes seems more natural than adding a flag.

          As noted above at 'stack added a comment - 12/Mar/16 15:20', favoredNodes is an unexercised feature that has actually been disavowed by the originators of the idea, FB, because it proved broken in practice. I'd suggest we not build more atop a feature-under-review as adding disfavoredNodes would (or at least until we hear of successful use of favoredNodes – apparently our Y! are trying it).

          In addition, the new FileSystem CreateFlag does not look clean to me since it is too specific to HDFS. How would other FileSystems such as LocalFileSystem implement it?

          The flag added by the attached patch is qualified throughout as a 'hint'. When set against LFS, it'll just be ignored. No harm done. The 'hint' didn't take.

If we went your suggested route and added disfavoredNodes, things get a bit interesting when hbase, say, passes localhost. What'll happen? Does the user now have to check the FS implementation type before they select the DFSClient method to call?

          I don't think you are objecting to the passing of flags on create, given this seems pretty standard fare in FSs.

          stack stack added a comment -

          Hi stack, the attribute could be set by an installer script or an API call at process startup

Arpit Agarwal Thanks. Yeah, vendors could ensure installers set the attribute. There is a significant set of installs where HBase shows up post-HDFS install and/or where HBase does not have sufficient permissions to set attributes on HDFS. I don't know the percentage. Would be just easier all around if it could be managed internally by HBase so no need to get scripts and/or operators involved.

          ...so if you think HBase needs a solution now, ...

          Smile. The issue was opened in July 2012 so we not holding our breath (smile). Would be cool if we could ask HDFS to not write local. Anyone doing WAL-on-HDFS will appreciate this in HDFS.

          Thanks Arpit Agarwal

          stack stack added a comment -

          ...could you comment on the usability of providing node lists to this API?

          Usually nodes and NN can agree on what they call machines but we've all seen plenty of clusters where this is not so. Both HDFS and HBase have their own means of insulating themselves against dodgy named setups. These systems are not in alignment.

          My impression was that tracking this in HBase was onerous, and is part of why favored nodes fell out of favor.

          No. It was never fully plumbed in HBase (it was plumbed into a balancer that no one used and would not swap into place because the default was featureful). Regards the FB experience, we need to get them to do us a post-mortem.

          nkeywal Nicolas Liochon added a comment -

          The issue was opened in July 2012 so we not holding our breath

          If we're not holding our breath is also because we put a hack in HBase (HBASE-6435). However, this hack is not perfect and does not help on the write path (we write and flush 3 times while two would provide the same level of safety), and we still try to do a recoverLease on a dead node when there is a server crash.

          Yeah, vendors could ensure installers set the attribute.

          imho, it's not an optional behavior for HBase (compared to favoredNode which was supposed to be a power-user configuration only): out of the box, HBase WALs should be written to 2 remote nodes by default, and never to the local node. So it would be much better to have the right behavior without requiring any extra work, scripts to run or code to deploy on the hdfs namenode (it's too easy to mess things up).

          arpitagarwal Arpit Agarwal added a comment -

          If the region server has write permissions on /hbase/.logs, which I assume it does, it should be able to set policies on that directory. The ability for administrators to do so upfront would be a nice benefit but not a must.

          devaraj Devaraj Das added a comment -

          Just FYI - the balancer work is being tracked in HBASE-8549.

          stack stack added a comment -

          If the region server has write permissions on /hbase/.logs, which I assume it does, it should be able to set policies on that directory.

Makes sense Arpit Agarwal Thanks. We can mess with this stuff when/if an accommodating block policy shows up. Meantime, you still -0 on this patch going in in the meantime?

Tsz Wo Nicholas Sze You against commit still sir? @nkeywal reminds me of the price we are currently paying by not being able to ask HDFS to avoid local replicas. Seems easy enough to revisit given the way this is implemented should favoredNodes stabilize, and then a subsequent disfavoredNodes facility. Thanks.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          Sorry, but I am still against the commit. In particular, I am very uncomfortable adding CreateFlag.NO_LOCAL_WRITE and AddBlockFlag, since we cannot remove them once they are added to the public FileSystem API.

          I can live with the "no local write" feature instead of supporting disfavoredNodes. How about adding a boolean noLocalWrite parameter to DistributedFileSystem.create(..)?

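          The difference between the two proposals is easy to see in miniature. The sketch below is illustrative only: the enum is a local stand-in (the real one is org.apache.hadoop.fs.CreateFlag, passed to create(..) in an EnumSet), but it shows why an advisory flag in a flag set is a smaller commitment than a new boolean parameter — a filesystem that does not understand the hint simply never consults it, and the call site is unchanged.

```java
import java.util.EnumSet;

public class CreateFlagSketch {
    // Local stand-in for org.apache.hadoop.fs.CreateFlag so this compiles
    // without Hadoop on the classpath.
    public enum Flag { CREATE, OVERWRITE, NO_LOCAL_WRITE }

    // A placement policy that understands the hint consults it; one that
    // does not simply ignores the extra enum value -- same call site.
    public static String placement(EnumSet<Flag> flags) {
        return flags.contains(Flag.NO_LOCAL_WRITE) ? "all-remote-preferred" : "default";
    }

    public static void main(String[] args) {
        // A WAL writer asks for remote replicas; an ordinary writer does not.
        System.out.println(placement(EnumSet.of(Flag.CREATE, Flag.NO_LOCAL_WRITE)));
        System.out.println(placement(EnumSet.of(Flag.CREATE)));
    }
}
```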
          stack stack added a comment -

          Tsz Wo Nicholas Sze

          Arpit Agarwal is -0 if....

          AddBlockFlag should be tagged as @InterfaceAudience.Private if we proceed with the .008 patch.

          ... and then what if CreateFlag.NO_LOCAL_WRITE was marked LimitedPrivate with HBase denoted as the consumer? Would that be sufficient accommodation of your concern?

          stack stack added a comment -

          I am very uncomfortable to add CreateFlag.NO_LOCAL_WRITE and AddBlockFlag since we cannot remove them once they are added to the public FileSystem API.

          The AddBlockFlag would have @InterfaceAudience.Private so it is not being added to the public API.

          The CreateFlag.NO_LOCAL_WRITE is an advisory enum. Something has to be available in the API for users like HBase to pull on. This seems to be the most minimal intrusion possible. Being a hint by nature, it could be undone.

          Thanks for your consideration Tsz Wo Nicholas Sze

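          Per the release note, the hint works by adding the local DataNode to the excluded set and falling back to the default policy when too few remote targets are available. The following is a self-contained sketch of that fallback shape, not the actual BlockPlacementPolicy code; the node names are made up:

```java
import java.util.ArrayList;
import java.util.List;

public class NoLocalWriteSketch {
    // Choose up to 'replication' targets, excluding 'local' first and
    // falling back to the full candidate list when remote nodes are scarce.
    public static List<String> chooseTargets(List<String> liveNodes,
                                             String local, int replication) {
        List<String> remote = new ArrayList<>(liveNodes);
        remote.remove(local);
        if (remote.size() >= replication) {
            return remote.subList(0, replication);  // hint honored: all remote
        }
        // Not enough remote nodes: fall back to the default policy, which
        // may place one replica on the local DataNode.
        return liveNodes.subList(0, Math.min(replication, liveNodes.size()));
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("dn1", "dn2", "dn3", "dn4");
        System.out.println(chooseTargets(nodes, "dn1", 3));              // all remote
        System.out.println(chooseTargets(List.of("dn1", "dn2"), "dn1", 2)); // falls back
    }
}
```

Because the fallback is built in, the flag can never make a write fail that would otherwise have succeeded, which is what makes it advisory rather than a new contract on the filesystem.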
          stack stack added a comment -

          And more... (getting a bit emotional)... a downstreamer has been hampered, spending unnecessary I/O and CPU for years now, and the patch is being blocked because we'd add an enum to the public API! Help us out, mighty Tsz Wo Nicholas Sze! Thanks.

          eddyxu Lei (Eddy) Xu added a comment -

          Updated the patch to mark AddBlockFlag as Private. It also marks CreateFlag.NO_LOCAL_WRITE as LimitedPrivate({"HBase"}).

          Thanks a lot for the great suggestions, stack, Andrew Wang, Nicolas Liochon, Arpit Agarwal, Tsz Wo Nicholas Sze!

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 25s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 21 new or modified test files.
          0 mvndep 1m 6s Maven dependency ordering for branch
          +1 mvninstall 11m 12s trunk passed
          +1 compile 15m 13s trunk passed with JDK v1.8.0_74
          +1 compile 11m 50s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 56s trunk passed
          +1 mvnsite 4m 0s trunk passed
          +1 mvneclipse 1m 10s trunk passed
          +1 findbugs 8m 0s trunk passed
          +1 javadoc 4m 46s trunk passed with JDK v1.8.0_74
          +1 javadoc 5m 45s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 23s Maven dependency ordering for patch
          +1 mvninstall 3m 15s the patch passed
          +1 compile 15m 22s the patch passed with JDK v1.8.0_74
          +1 cc 15m 22s the patch passed
          +1 javac 15m 22s the patch passed
          +1 compile 11m 41s the patch passed with JDK v1.7.0_95
          +1 cc 11m 41s the patch passed
          +1 javac 11m 41s the patch passed
          -1 checkstyle 1m 47s root: patch generated 9 new + 675 unchanged - 7 fixed = 684 total (was 682)
          +1 mvnsite 3m 54s the patch passed
          +1 mvneclipse 1m 9s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 8m 55s the patch passed
          +1 javadoc 4m 37s the patch passed with JDK v1.8.0_74
          +1 javadoc 5m 39s the patch passed with JDK v1.7.0_95
          -1 unit 12m 54s hadoop-common in the patch failed with JDK v1.8.0_74.
          +1 unit 1m 32s hadoop-hdfs-client in the patch passed with JDK v1.8.0_74.
          -1 unit 76m 43s hadoop-hdfs in the patch failed with JDK v1.8.0_74.
          -1 unit 10m 41s hadoop-common in the patch failed with JDK v1.7.0_95.
          +1 unit 1m 27s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 98m 41s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          -1 asflicense 0m 42s Patch generated 2 ASF License warnings.
          327m 35s



          Reason Tests
          JDK v1.8.0_74 Failed junit tests hadoop.ipc.TestRPCWaitForProxy
            hadoop.fs.shell.find.TestIname
            hadoop.fs.shell.find.TestPrint0
            hadoop.fs.shell.find.TestPrint
            hadoop.fs.shell.find.TestName
            hadoop.hdfs.server.datanode.TestDirectoryScanner
            hadoop.hdfs.server.namenode.ha.TestEditLogTailer
            hadoop.hdfs.server.datanode.TestDataNodeUUID
            hadoop.hdfs.security.TestDelegationTokenForProxyUser
            hadoop.hdfs.server.namenode.TestNameNodeMXBean
            hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
          JDK v1.8.0_74 Timed out junit tests org.apache.hadoop.util.TestNativeLibraryChecker
            org.apache.hadoop.hdfs.TestFileAppend3
            org.apache.hadoop.hdfs.TestLeaseRecovery
            org.apache.hadoop.hdfs.server.balancer.TestBalancer
            org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.server.datanode.TestDirectoryScanner
            hadoop.hdfs.TestReconstructStripedFile
            hadoop.hdfs.server.datanode.TestDataNodeUUID
            hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
            hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
          JDK v1.7.0_95 Timed out junit tests org.apache.hadoop.util.TestNativeLibraryChecker



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12794891/HDFS-3702.009.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux b065131476c0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 0bfe5a0
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_74 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14903/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/14903/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14903/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          eddyxu Lei (Eddy) Xu added a comment -

          Test failures are not related. I ran all of the failing tests locally and all of them passed.

          Hi, Arpit Agarwal, Tsz Wo Nicholas Sze, stack, Andrew Wang, Nicolas Liochon: are you guys OK with the 009 patch? Please do let me know by EOD. Thanks a lot!

          szetszwo Tsz Wo Nicholas Sze added a comment -

          stack, let's add a boolean noLocalWrite to DistributedFileSystem, or just reuse the new AddBlockFlag there. You know, once it is in FileSystem, it is forever.

          BTW, AddBlockFlag should be moved to o.a.h.hdfs package.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          Also, it seems that there are no tests using the new CreateFlag.NO_LOCAL_WRITE. Is that right?

          szetszwo Tsz Wo Nicholas Sze added a comment -

          BTW, I am at the airport now and going to be traveling these few days. Please bear with me if I cannot reply promptly.

          stack stack added a comment -

          stack, let's add a boolean noLoaclWrite to DistributedFileSystem or just reuse the new AddBlockFlag there.

          On adding a flag to DFS, taking a look, it would be 'odd' given what is there currently, and adding a public method to set a hint for a particular operation only would be tough to explain to the reader of the API ("Why flag here... when create takes flags already..."). Then there is the fact that the user has to do

          if HDFS, then

          and if we are on GPFS, an FS supported by one of our committers, then it is

           if HDFS || GPFS....

          and so on.

          I think you also mean 'and' in the above rather than 'or'. AddBlockFlag is internal to HDFS and marked Private, so it is not usable by clients... maybe you are talking about how it will be implemented? I'm not sure what you are suggesting here. Pardon me.

          You know, once it is in FileSystem, it is forever.

          I know that for the client to ask for a behavior that is not there presently, yes, FileSystem has to change. We are talking about a self-described advisory, not a required new operation of the underlying FS.

          stack stack added a comment -

          I skimmed the #9 patch. Seems good to me other than the issues Tsz Wo Nicholas Sze raises (we are using the AddBlockFlag rather than the client flag... and I think AddBlockFlag should be in hdfs as he suggests, given your remark above on the difference between the client-facing flag and the hdfs flag). Thanks.

          arpitagarwal Arpit Agarwal added a comment -

          Meantime, are you still -0 on this patch going in?

          Sorry I was out earlier this week. Still -0 on this approach.

          eddyxu Lei (Eddy) Xu added a comment - - edited

          Updated the patch to:

          • Move AddBlockFlag to o.a.h.hdfs
          • Add one end-to-end test to use CreateFlag.NO_LOCAL_WRITE directly.

          Tsz Wo Nicholas Sze Is the 010 patch sufficient to address your concerns? Thanks!

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 23s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 21 new or modified test files.
          0 mvndep 0m 22s Maven dependency ordering for branch
          +1 mvninstall 10m 8s trunk passed
          +1 compile 15m 19s trunk passed with JDK v1.8.0_74
          +1 compile 11m 46s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 53s trunk passed
          +1 mvnsite 3m 49s trunk passed
          +1 mvneclipse 1m 6s trunk passed
          +1 findbugs 7m 40s trunk passed
          +1 javadoc 4m 39s trunk passed with JDK v1.8.0_74
          +1 javadoc 5m 35s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 22s Maven dependency ordering for patch
          +1 mvninstall 3m 12s the patch passed
          +1 compile 15m 20s the patch passed with JDK v1.8.0_74
          -1 cc 18m 54s root-jdk1.8.0_74 with JDK v1.8.0_74 generated 2 new + 19 unchanged - 2 fixed = 21 total (was 21)
          +1 cc 15m 20s the patch passed
          +1 javac 15m 20s the patch passed
          +1 compile 11m 43s the patch passed with JDK v1.7.0_95
          -1 cc 30m 38s root-jdk1.7.0_95 with JDK v1.7.0_95 generated 4 new + 27 unchanged - 4 fixed = 31 total (was 31)
          +1 cc 11m 43s the patch passed
          +1 javac 11m 43s the patch passed
          -1 checkstyle 1m 49s root: patch generated 10 new + 675 unchanged - 7 fixed = 685 total (was 682)
          +1 mvnsite 3m 52s the patch passed
          +1 mvneclipse 1m 5s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 8m 50s the patch passed
          +1 javadoc 4m 34s the patch passed with JDK v1.8.0_74
          +1 javadoc 5m 43s the patch passed with JDK v1.7.0_95
          -1 unit 14m 10s hadoop-common in the patch failed with JDK v1.8.0_74.
          +1 unit 1m 53s hadoop-hdfs-client in the patch passed with JDK v1.8.0_74.
          -1 unit 127m 50s hadoop-hdfs in the patch failed with JDK v1.8.0_74.
          -1 unit 12m 12s hadoop-common in the patch failed with JDK v1.7.0_95.
          +1 unit 1m 40s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 100m 59s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          -1 asflicense 0m 42s Patch generated 2 ASF License warnings.
          381m 21s



          Reason Tests
          JDK v1.8.0_74 Failed junit tests hadoop.ipc.TestRPCWaitForProxy
            hadoop.fs.shell.find.TestIname
            hadoop.ha.TestZKFailoverController
            hadoop.fs.shell.find.TestPrint0
            hadoop.security.ssl.TestReloadingX509TrustManager
            hadoop.fs.shell.find.TestPrint
            hadoop.fs.shell.find.TestName
            hadoop.hdfs.shortcircuit.TestShortCircuitCache
            hadoop.hdfs.server.datanode.TestDirectoryScanner
            hadoop.hdfs.server.namenode.ha.TestHAAppend
            hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork
            hadoop.hdfs.qjournal.TestSecureNNWithQJM
            hadoop.hdfs.server.namenode.ha.TestEditLogTailer
            hadoop.hdfs.server.datanode.TestDataNodeUUID
            hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
            hadoop.hdfs.security.TestDelegationTokenForProxyUser
            hadoop.hdfs.TestFileAppend
            hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
          JDK v1.8.0_74 Timed out junit tests org.apache.hadoop.util.TestNativeLibraryChecker
          JDK v1.7.0_95 Failed junit tests hadoop.ipc.TestRPCWaitForProxy
            hadoop.fs.shell.find.TestIname
            hadoop.fs.shell.find.TestPrint0
            hadoop.fs.shell.find.TestName
            hadoop.hdfs.server.datanode.TestDirectoryScanner
            hadoop.hdfs.server.blockmanagement.TestBlockManager
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.TestDistributedFileSystem
          JDK v1.7.0_95 Timed out junit tests org.apache.hadoop.util.TestNativeLibraryChecker



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12795270/HDFS-3702.010.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 76f740052fca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2e1d0ff
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_74 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          cc root-jdk1.8.0_74: https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/diff-compile-cc-root-jdk1.8.0_74.txt
          cc root-jdk1.7.0_95: https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/diff-compile-cc-root-jdk1.7.0_95.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14928/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/14928/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14928/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          szetszwo Tsz Wo Nicholas Sze added a comment -
          • Please use the new AddBlockFlag.NO_LOCAL_WRITE and add a new create method to DistributedFileSystem instead of adding CreateFlag.NO_LOCAL_WRITE.
          • Please remove the "Fallback to use the default block placement." debug message since it is not generally true – it may not be a fallback case.
          • When avoidLocalNode == false, the results array is initialized twice: new ArrayList<>(chosenStorage) is called twice. We should create the new array only once in this case.

          Thanks.

          eddyxu Lei (Eddy) Xu added a comment -

          Hey, Tsz Wo Nicholas Sze

          Thanks a lot for your good suggestions.

          Please remove the "Fallback to use the default block placement." debug message

          Done

          When avoidLocalNode == false, ... We should only create the new array once in this case.

          Done

          Please use new AddBlockFlag.NO_LOCAL_WRITE and add a new create method DistributedFileSystem but not adding CreateFlag.NO_LOCAL_WRITE.

          Should we agree that AddBlockFlag is an internal flag used within HDFS? Meanwhile, CreateFlag.NO_LOCAL_WRITE is very similar to CreateFlag.LAZY_PERSIST in that:

          • Both of them are hints for block placement. The actual file system implementation can choose to support or ignore them. Neither flag needs to be HDFS-specific; i.e., other distributed file systems (Lustre, or even Tachyon) could support them as well.

          Thanks!

          szetszwo Tsz Wo Nicholas Sze added a comment -

          As mentioned before, once CreateFlag.NO_LOCAL_WRITE is added, we cannot remove it later. So let's add this flag later, which allows us to test the feature and see whether it is good enough or whether we actually need disfavoredNodes. Sound good?

          szetszwo Tsz Wo Nicholas Sze added a comment -

          BTW, we should check if excludedNodes.contains(writer) is already true; otherwise, the fallback does not help. Also, we may set results to null when results.size() < numOfReplicas so that the code can be simplified a little bit.

              boolean avoidLocalNode = addBlockFlags != null
                  && addBlockFlags.contains(AddBlockFlag.NO_LOCAL_WRITE)
                  && writer != null
                  && !excludedNodes.contains(writer);
              List<DatanodeStorageInfo> results = null;
              // Attempt to exclude local node if the client suggests so.
              if (avoidLocalNode) {
                results = new ArrayList<>(chosenStorage);
                Set<Node> excludedNodeCopy = new HashSet<>(excludedNodes);
                excludedNodeCopy.add(writer);
                localNode = chooseTarget(numOfReplicas, writer, excludedNodeCopy,
                    blocksize, maxNodesPerRack, results, avoidStaleNodes,
                    storagePolicy, EnumSet.noneOf(StorageType.class), results.isEmpty());
                if (results.size() < numOfReplicas) {
                  results = null; // not enough nodes; discard results and fall back
                }
              }
              if (results == null) {
                results = new ArrayList<>(chosenStorage);
                localNode = chooseTarget(numOfReplicas, writer, excludedNodes,
                    blocksize, maxNodesPerRack, results, avoidStaleNodes,
                    storagePolicy, EnumSet.noneOf(StorageType.class), results.isEmpty());
              }
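
          The try-then-fall-back shape of the snippet above can be illustrated with a small, runnable simulation that needs no Hadoop on the classpath. All names here (PlacementSketch, choose, place) are simplified stand-ins for illustration, not Hadoop's actual classes:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PlacementSketch {
    // Simplified stand-in for chooseTarget: pick up to n nodes not excluded.
    static List<String> choose(int n, List<String> nodes, Set<String> excluded) {
        List<String> out = new ArrayList<>();
        for (String node : nodes) {
            if (out.size() == n) break;
            if (!excluded.contains(node)) out.add(node);
        }
        return out;
    }

    // First try placing all replicas with the writer's node excluded; if too
    // few nodes remain, discard the partial result and fall back to the
    // default policy, in which the writer's node is allowed again.
    static List<String> place(int replicas, List<String> nodes,
                              Set<String> excluded, String writer,
                              boolean noLocalWrite) {
        List<String> results = null;
        boolean avoidLocal = noLocalWrite && writer != null
                && !excluded.contains(writer);
        if (avoidLocal) {
            Set<String> excludedCopy = new HashSet<>(excluded);
            excludedCopy.add(writer);
            results = choose(replicas, nodes, excludedCopy);
            if (results.size() < replicas) {
                results = null; // not enough remote nodes; fall back
            }
        }
        if (results == null) {
            results = choose(replicas, nodes, excluded);
        }
        return results;
    }

    public static void main(String[] args) {
        List<String> cluster = Arrays.asList("dn1", "dn2", "dn3", "dn4");
        // Enough remote nodes: the writer dn1 is avoided.
        System.out.println(place(3, cluster, new HashSet<>(), "dn1", true));
        // Only three nodes total: the fallback admits the writer again.
        System.out.println(place(3, Arrays.asList("dn1", "dn2", "dn3"),
                new HashSet<>(), "dn1", true));
    }
}
```

          The first call prints [dn2, dn3, dn4]; the second falls back and prints [dn1, dn2, dn3], matching the behavior described in the release note.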
          
          stack stack added a comment -

          So let add this flag later so that it allows us to test the feature and see if it is good enough or we may actually need disfavoredNodes. Sound good?

          Tsz Wo Nicholas Sze Isn't CreateFlag.NO_LOCAL_WRITE how this facility gets exposed to clients? If it is not present, how does the feature get exercised at all? Thanks.

          szetszwo Tsz Wo Nicholas Sze added a comment - - edited

          I suggest adding a new create(..) method to DistributedFileSystem, either with a new boolean or with the AddBlockFlag, in this JIRA so that the community can try out the feature. We may add CreateFlag.NO_LOCAL_WRITE once the feature has been stabilized and we have decided that it is the right API.

          As you mentioned, the favoredNodes turned out to be a not so good idea. I am glad that it was not added to the FileSystem API at that time. Thanks.

          stack stack added a comment -

          I suggest adding a new create(..) method to DistributedFileSystem, either with a new boolean or with the AddBlockFlag, in this JIRA so that the community can try out the feature. We may add the CreateFlag.NO_LOCAL_WRITE once the feature has been stabilized and we has decided that it is the right API.

          Tell me more about how this process would work, please, Tsz Wo Nicholas Sze? IIUC, a downstream project, say HBase, which already has an awful hack in place to try and simulate a poor-man's version of this feature, would, via reflection, look first for the presence of this new create override IFF the implementation is HDFS (don't look if LocalFS or S3, etc.)? If HDFS and if present, we'd drop our hack and use the new method (via reflection). Later, after it is 'proven' that the feature, one that HBase has wanted for years now, has 'merit', we would then add a new path w/ more reflection (IFF the FS implementation is HDFS) that would use NO_LOCAL_WRITE when it becomes available? (Would we remove the create override when the NO_LOCAL_WRITE FS hint gets added?) Are you suggesting that downstream projects do this?

          Regarding favoredNodes, that's an unfinished topic and of a different character from what is being suggested here, as it added overrides rather than a 'hint' flag as this patch does.

          Thanks Tsz Wo Nicholas Sze

          cmccabe Colin P. McCabe added a comment -

          So the proposal is to add a new API to DistributedFileSystem that would support NO_LOCAL_WRITE. Since we would want to continue to support all the existing options to create, it would look something like this:

            public FSDataOutputStream create(Path f,
                FsPermission permission,
                EnumSet<CreateFlag> flags,
                int bufferSize,
                short replication,
                long blockSize,
                Progressable progress,
                ChecksumOpt checksumOpt,
                boolean noLocalWrite) throws IOException {
          

          That's 9 different parameters, which seems like a definite code smell.

          Is it really worth adding this ugly API to avoid introducing a new CreateFlag? Adding a new CreateFlag seems harmless to me. If CreateFlag.NO_LOCAL_WRITE doesn't work out, we will stick a @deprecated next to it and start ignoring it. It's still better than adding a 9-argument function which requires a typecast to use. We already have a lot of CreateFlags that are HDFS-specific such as LAZY_PERSIST, SYNC_BLOCK, and APPEND_NEWBLOCK.

          Also keep in mind that once we add this ugly 9-argument API to DistributedFileSystem, we won't be able to remove it. In order for HBase to use it, it has to be public and part of an official Apache release of Hadoop. By our own API policies, it becomes permanent at that point. Why is HDFS so unkind to our downstream projects?
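
          The parameter-explosion argument generalizes: an EnumSet of hint flags keeps the method signature stable as new hints are added, and an implementation remains free to ignore any hint. A minimal, Hadoop-free Java illustration of that design choice (PlacementHint and describePlacement are hypothetical names, not Hadoop's API):

```java
import java.util.EnumSet;

public class FlagDemo {
    // Placement hints modeled as an enum: adding a new hint later does not
    // change any method signature, unlike adding another boolean parameter.
    enum PlacementHint { NO_LOCAL_WRITE, LAZY_PERSIST, SYNC_BLOCK }

    // A file system is free to honor a hint or ignore it entirely.
    static String describePlacement(EnumSet<PlacementHint> hints) {
        if (hints.contains(PlacementHint.NO_LOCAL_WRITE)) {
            return "all replicas remote (best effort)";
        }
        return "first replica local";
    }

    public static void main(String[] args) {
        System.out.println(describePlacement(EnumSet.noneOf(PlacementHint.class)));
        System.out.println(describePlacement(EnumSet.of(PlacementHint.NO_LOCAL_WRITE)));
    }
}
```

          This is the shape CreateFlag already has: callers pass EnumSet.of(...) and unsupported flags are simply ignored, which is why deprecating a bad flag is cheap compared with removing a method overload.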

          szetszwo Tsz Wo Nicholas Sze added a comment -

          I suggest HBase do this the same way it uses favoredNodes today.

          > ... Would we remove the create override when the NO_LOCAL_WRITE FS hint gets added? ...

          We will deprecate it first before removing it. I believe the method can stay for some time, and there is no urgency to remove it.

          stack stack added a comment -

          I suggest HBase should do the same way of how it today is using favoredNodes.

          Thanks Tsz Wo Nicholas Sze for the response, but you did not answer the question. The question was whether you thought the process of first staging a 'hidden' API is a fair burden to put on your favorite downstream project (not to mention the mess it makes inside HDFS – see the note above this one for graphic detail).

          Let's back up. I think it will help us make some progress here.

          You say:

          So let's add this flag later, which allows us to test the feature and see whether it is good enough or whether we actually need disfavoredNodes. Sound good?

          No. It does not sound good. There is no need to stage a feature as hidden first, one that is reasonable (see the above discussion with the opinion of many) and has an immediate need/user. If there is any concern that the feature is lacking or does not work as advertised, let's do whatever proofing of the feature is needed here as part of this issue and just get it done. If the bundled tests are unsatisfactory, or if you'd like me to try and report results of running this facility at scale, just say so... no problem. If the implementation has a bug, let's fix it in a follow-up, as we would any other feature in HDFS.

          On your concern that a new 'hint' to the create method exposes new API – an API that by definition does not put a burden on any FS implementation to implement the suggested operation, i.e., the amount of API 'surface' is minuscule – it has been suggested above that we flag it @InterfaceAudience.LimitedPrivate(HBase) for a probationary period. How about we also add @InterfaceStability.Evolving to the flag so it can be yanked at any time if, for some unforeseen reason, it is a total mistake? Would this assuage your exposure concern, Tsz Wo Nicholas Sze? Thanks for your time.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          Suppose we find that the CreateFlag.NO_LOCAL_WRITE is bad. How do we remove it, i.e. what is the procedure to remove it? I believe we cannot simply remove it since it probably will break HBase compilation.

          Another possible case: suppose that we find the disfavoredNodes feature is very useful later on. How do we add it?

          > ..., lets do whatever proofing of the feature is needed here as part of this issue and just get it done. ...

          It seems that the "whatever proofing" is to let the community try the feature for a period of time. Then, we may add it to the FileSystem API.

          stack stack added a comment -

          Suppose we find that the CreateFlag.NO_LOCAL_WRITE is bad. How do we remove it, i.e. what is the procedure to remove it? I believe we cannot simply remove it since it probably will break HBASE compilation.

          Just remove it. HBase has loads of practice dealing with stuff being moved/removed and changed under it by HDFS.

          You could also just leave the flag in place since there is no obligation that any filesystem respect the flag. It is a suggestion only (See http://linux.die.net/man/2/open / create for the long, interesting set of flags it has)

          Another possible case: suppose that we find the disfavorNodes feature is very useful later on. How do we add it?

          The same way you'd add any feature... and HBase would look for it the way it does now: peeking for the presence of the extra facility with if/else on HDFS, reflection, try/catches of NoSuchMethodException, etc. We have lots of practice doing this too. We'd keep using the NO_LOCAL_WRITE flag though, unless it is purged, since it does what we want. As I understand it, disfavoredNodes would require a lot more work of HBase to get the same functionality that NO_LOCAL_WRITE provides.
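
          The reflection-based probing described above can be sketched with a small, runnable helper. It is demonstrated against java.lang.String so it runs without Hadoop on the classpath; the probing pattern is the point, and createNoLocalWrite is a made-up method name used only to show the negative case:

```java
import java.lang.reflect.Method;

public class CapabilityProbe {
    // Return true iff clazz publicly exposes a method with the given name
    // and parameter types. This is how a downstream project can detect an
    // optional API on the file system class it was handed at runtime.
    static boolean hasMethod(Class<?> clazz, String name, Class<?>... paramTypes) {
        try {
            Method m = clazz.getMethod(name, paramTypes);
            return m != null;
        } catch (NoSuchMethodException e) {
            // Older version without the feature: use the old code path.
            return false;
        }
    }

    public static void main(String[] args) {
        // Present on every JDK:
        System.out.println(hasMethod(String.class, "substring", int.class));
        // Hypothetical method that does not exist:
        System.out.println(hasMethod(String.class, "createNoLocalWrite"));
    }
}
```

          In practice the probe result is cached and the optional method is then invoked via Method.invoke, wrapped so an older Hadoop simply takes the fallback path.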

          It seems that the "whatever proofing" is to let the community try the features for a period of time. Then, we may add it to the FileSystem API.

          Sorry. 'whatever proofing' is overly expansive. We are just adding a flag. I just meant, if the tests added here are not sufficient or you want some other proof it works, pre-commit, just say so. No problem.

          Also, the community has been running with this 'feature' for years (see HBASE-6435), so there is no need for us to take the suggested disruptive 'indirection' just to add a filesystem 'hint' with attendant mess in HDFS – extra params on create – that cannot subsequently be removed.

          Thanks Tsz Wo Nicholas Sze

          What do you think of our adding the attributes LimitedPrivate and Evolving to the flag. Would that be indicator enough for you?

          eddyxu Lei (Eddy) Xu added a comment -

          BTW, we should check if excludedNodes.contains(writer) is already true; otherwise, the fallback does not help.

          Fixed in the newly updated patch.

          Tsz Wo Nicholas Sze What do you think about Colin P. McCabe's and stack's suggestions? Would that work for you?

          Thanks!

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 22 new or modified test files.
          0 mvndep 1m 4s Maven dependency ordering for branch
          +1 mvninstall 6m 51s trunk passed
          +1 compile 5m 49s trunk passed with JDK v1.8.0_77
          +1 compile 6m 47s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 13s trunk passed
          +1 mvnsite 2m 24s trunk passed
          +1 mvneclipse 0m 42s trunk passed
          +1 findbugs 5m 12s trunk passed
          +1 javadoc 2m 15s trunk passed with JDK v1.8.0_77
          +1 javadoc 3m 16s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 58s the patch passed
          +1 compile 5m 55s the patch passed with JDK v1.8.0_77
          +1 cc 5m 55s the patch passed
          +1 javac 5m 55s the patch passed
          +1 compile 6m 50s the patch passed with JDK v1.7.0_95
          +1 cc 6m 50s the patch passed
          +1 javac 6m 50s the patch passed
          -1 checkstyle 1m 11s root: patch generated 10 new + 675 unchanged - 7 fixed = 685 total (was 682)
          +1 mvnsite 2m 21s the patch passed
          +1 mvneclipse 0m 41s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 5m 54s the patch passed
          +1 javadoc 2m 18s the patch passed with JDK v1.8.0_77
          +1 javadoc 3m 16s the patch passed with JDK v1.7.0_95
          -1 unit 6m 44s hadoop-common in the patch failed with JDK v1.8.0_77.
          +1 unit 0m 49s hadoop-hdfs-client in the patch passed with JDK v1.8.0_77.
          -1 unit 57m 42s hadoop-hdfs in the patch failed with JDK v1.8.0_77.
          +1 unit 7m 22s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 0m 59s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 54m 34s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 26s Patch does not generate ASF License warnings.
          196m 37s



          Reason Tests
          JDK v1.8.0_77 Failed junit tests hadoop.ha.TestZKFailoverController
            hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
            hadoop.hdfs.server.namenode.TestEditLog
            hadoop.hdfs.TestHFlush
            hadoop.hdfs.shortcircuit.TestShortCircuitCache
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
            hadoop.hdfs.TestHFlush



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799341/HDFS-3702.012.patch
          JIRA Issue HDFS-3702
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 72a72221abb7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4770037
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/15190/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15190/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15190/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15190/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/15190/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15190/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15190/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15190/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15190/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          eddyxu Lei (Eddy) Xu added a comment -

          None of the test failures are related to this patch. All tests pass locally except TestHFlush, whose failure was already reported in HDFS-2043 and is thus unrelated.

          If there are no further objections, I will commit this by EOD. Thanks.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          After a second thought, I agree that it is fine to add CreateFlag.NO_LOCAL_WRITE as LimitedPrivate to HBase. Thanks.

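          For context, a minimal sketch (not part of the patch itself) of how an HBase-style WAL writer could request remote-only block placement through the new flag. The file path, buffer size, and replication values here are hypothetical; the `FileSystem#create` overload taking an `EnumSet<CreateFlag>` is the one the flag is intended for.

          ```java
          import java.util.EnumSet;

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.CreateFlag;
          import org.apache.hadoop.fs.FSDataOutputStream;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.permission.FsPermission;

          public class WalWriterSketch {
            public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              FileSystem fs = FileSystem.get(conf);

              // NO_LOCAL_WRITE asks the NameNode to add the client's local
              // DataNode to the excluded nodes when choosing replica targets;
              // if not enough remote replicas can be allocated, placement
              // falls back to the default policy (one replica local).
              EnumSet<CreateFlag> flags = EnumSet.of(
                  CreateFlag.CREATE, CreateFlag.OVERWRITE,
                  CreateFlag.NO_LOCAL_WRITE);

              try (FSDataOutputStream out = fs.create(
                  new Path("/hbase/wal/example.wal"),  // hypothetical path
                  FsPermission.getFileDefault(),
                  flags,
                  conf.getInt("io.file.buffer.size", 4096),
                  (short) 3,                           // replication factor
                  fs.getDefaultBlockSize(),
                  null /* progress */)) {
                out.write("wal entry".getBytes("UTF-8"));
                out.hflush();  // persist to the (remote) pipeline
              }
            }
          }
          ```

          With this flag set, a regionserver host failure leaves all three replicas of the WAL intact on surviving nodes, instead of losing the local copy along with the writer.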
          eddyxu Lei (Eddy) Xu added a comment - - edited

          Committed to trunk and branch-2.

          Thanks a lot for the detailed suggestions and kind reviews from Andrew Wang, Nicolas Liochon, stack, Colin P. McCabe, Arpit Agarwal and Tsz Wo Nicholas Sze!

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9685 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9685/)
          HDFS-3702. Add an option for NOT writing the blocks locally if there is (lei: rev 0a152103f19a3e8e1b7f33aeb9dd115ba231d7b7)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeleteRace.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/AddBlockFlag.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBlockPlacementPolicyRackFaultTolerant.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicationWork.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestAvailableSpaceBlockPlacementPolicy.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOpenFilesWithSnapshot.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyConsiderLoad.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestUpgradeDomainBlockPlacementPolicy.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ErasureCodingWork.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BaseReplicationPolicyTest.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithUpgradeDomain.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9686 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9686/)
          HDFS-3702. Fix missing imports from HDFS-3702 trunk patch. (lei: rev 8bd0bca0b1ea524132f564b3b8332506421f64b9)

          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          stack stack added a comment -

          Any chance of getting this on 2.8 branch? Thanks.

          eddyxu Lei (Eddy) Xu added a comment -

          Thanks, stack. I backported it into branch-2.8.


            People

            • Assignee:
              eddyxu Lei (Eddy) Xu
              Reporter:
              nkeywal Nicolas Liochon
            • Votes:
              0
              Watchers:
              29

              Dates

              • Created:
                Updated:
                Resolved:

                Development