Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.7.0
    • Component/s: datanode, namenode
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      1. Introduced quota by storage type as a hard limit on the amount of space usage allowed for different storage types (SSD, DISK, ARCHIVE) under the target directory.
      2. Added the {{SetQuotaByStorageType}} API and a {{-storagetype}} option for the {{hdfs dfsadmin -setSpaceQuota/-clrSpaceQuota}} commands to allow setting/clearing quota by storage type under the target directory.
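      For example, a hypothetical invocation (the directory path and quota size below are illustrative only):
      {code}
      hdfs dfsadmin -setSpaceQuota 10g -storagetype SSD /ssd1
      hdfs dfsadmin -clrSpaceQuota -storagetype SSD /ssd1
      {code}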

      Description

      Phase II of the Heterogeneous Storage feature was completed by HDFS-6584. This JIRA is opened to enable quota support for different storage types in terms of storage space usage. This is especially important for storage types such as SSD, which are scarce and more performant.

      As described in the design doc of HDFS-5682, we plan to add a new quotaByStorageType command and a new NameNode RPC protocol for it. The quota-by-storage-type feature applies at the HDFS directory level, similar to the traditional HDFS space quota.

      1. editsStored
        6 kB
        Xiaoyu Yao
      2. HDFS-7584.0.patch
        231 kB
        Xiaoyu Yao
      3. HDFS-7584.1.patch
        263 kB
        Xiaoyu Yao
      4. HDFS-7584.2.patch
        264 kB
        Xiaoyu Yao
      5. HDFS-7584.3.patch
        270 kB
        Xiaoyu Yao
      6. HDFS-7584.4.patch
        273 kB
        Xiaoyu Yao
      7. HDFS-7584.5.patch
        247 kB
        Xiaoyu Yao
      8. HDFS-7584.6.patch
        278 kB
        Xiaoyu Yao
      9. HDFS-7584.7.patch
        277 kB
        Xiaoyu Yao
      10. HDFS-7584.8.patch
        277 kB
        Xiaoyu Yao
      11. HDFS-7584.9.patch
        272 kB
        Xiaoyu Yao
      12. HDFS-7584.9a.patch
        273 kB
        Xiaoyu Yao
      13. HDFS-7584.9b.patch
        273 kB
        Xiaoyu Yao
      14. HDFS-7584.9c.patch
        273 kB
        Xiaoyu Yao
      15. HDFS-7584 Quota by Storage Type - 01202015.pdf
        133 kB
        Xiaoyu Yao

        Issue Links

          Activity

          zhz Zhe Zhang added a comment -

          This is a very useful feature and thanks Xiaoyu Yao for initiating the work. A few comments:

          1. For example, when SSD is not available but quota of SSD is available, the write will fall back to DISK and the storage usage is deducted from both SSD and the cumulative disk space quota of the directory, even though no SSD space is being consumed.

            This is an interesting scenario and is worth more discussion. It is a conservative and safe policy to deduct from both SSD and DISK quotas. However, it doesn't fully comply with the principle of quota based on intended usage, which might make it appear counter-intuitive to users (e.g. why am I double charged?). As an extreme example, what if the user doesn't have any DISK quota?

          2. How about calculating quota truly based on intended usage? The charged quota might be different from the usage, but so is the case with existing quota logic. What are the other disadvantages?
          3. If we do want to charge by actual usage (5.2), maybe we should allow different "quota currencies" to be exchanged? Something like 1GB of SSD = 2GB of DISK = 4GB of ARCHIVAL. Or at least allow a user with only 1GB SSD quota to use 1GB DISK space.
          xyao Xiaoyu Yao added a comment -

          Thanks Zhe Zhang for the feedback.

          1. This is an interesting scenario and is worth more discussion. It is a conservative and safe policy to deduct from both SSD and DISK quotas. However it doesn't fully comply with the principle of quota based on intended usage, which might make it appear counter-intuitive to users (e.g. why am I double charged?). As an extreme example, what if the user doesn't have any DISK quota?

          Agree. "the cumulative disk space quota" in the spec means traditional space quota not quota of DISK type. I will update it to avoid confusion.
          We are actually calculating quota based on intended usage by not deducting DISK quotas due to policy fallback.

          For example, we have a directory /ssd1 with the ONE_SSD policy enabled. The directory currently has 2 blocks of SSD quota and 3 blocks of DISK quota remaining.
          If we want to create a file of 1 block with a replication factor of 3 under /ssd1 but the actual available SSD storage is 0, the creation falls back to DISK.
          After that, the remaining SSD and DISK quota should be 1 block and 1 block respectively (charged per the intended ONE_SSD placement of one SSD replica plus two DISK replicas), instead of 2 blocks and 0 blocks.

          If the fallback still can't be satisfied because no DISK quota is available, the user will get a QuotaByStorageTypeExceeded exception.
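          A minimal worked breakdown of the charging above, assuming ONE_SSD intends one replica on SSD and the remaining replicas on DISK:
          {code}
          Intended placement for 1 block, replication factor 3, under ONE_SSD:
            SSD:  1 replica   ->  SSD quota:  2 - 1 = 1 block remaining
            DISK: 2 replicas  ->  DISK quota: 3 - 2 = 1 block remaining
          The same charge applies even though the SSD replica physically fell back to DISK.
          {code}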

          2. How about calculating quota truly based on intended usage? The charged quota might be different than the usage, but so is the case with existing quota logic. What are other disadvantages?

          Quota calculation based on intended usage saves the replication monitor from updating traditional space/namespace quota usage for under-/over-replicated blocks. For quota by storage type, it can similarly save the Mover from updating quota when blocks are moved across storage tiers to meet the policy requirement.

          3. If we do want to charge by actual usage (5.2), maybe we should allow different "quota currencies" to be exchanged? Something like 1GB of SSD = 2GB of DISK = 4GB of ARCHIVAL. Or at least allow a user with only 1GB SSD quota to use 1GB DISK space.

          As mentioned above, we prefer charging by intended usage for its simplicity and consistency. Correlating quotas of different storage types looks interesting, but it may require additional tuning to get appropriate currency rates between storage types for different user scenarios.

          zhz Zhe Zhang added a comment -

          "the cumulative disk space quota" in the spec means traditional space quota not quota of DISK type

          Thanks for clarifying. Then section 5.1 is a purely intended-usage-based policy, and makes more sense now. But then how should we coordinate type-based quotas and the "traditional space quota"? Is the latter the summation of quotas from all types? If I set my SSD quota as 1GB and DISK quota as 2GB, do I need to separately set my overall space quota to be 3GB?

          If the fallback still can't be satisfied due to DISK quota unavailable, the user will get QuotaByStorageTypeExceeded exception.

          This is where it would be helpful to exchange type quotas. Revising the example a little bit, let's assume I have 3 blocks of SSD quota and no DISK quota, and the cluster has no SSD storage and plenty of DISK storage. When trying to create a 1-block file with a replication factor of 3 under the ONE_SSD policy, the user would expect success because he has 3 blocks of quota in the most expensive type. I understand setting "exchange rates" needs more discussion. Can we at least allow more expensive quotas to be used on less expensive storages?

          xyao Xiaoyu Yao added a comment -

          Attached a preliminary patch to demo the proposal. I will open separate JIRAs for:

          1. Add an option to display quota-by-storage-type info in the Hadoop shell command "hadoop fs -count -q"; this requires moving StorageType.java from hadoop-hdfs to hadoop-common.
          2. Update the editsStored for TestOfflineEditsViewer for the new SetQuotaByStorageType op.
          3. Add more unit tests for quota by storage type, more specifically on snapshot and checkpoint.
          xyao Xiaoyu Yao added a comment -

          But then how should we coordinate type-based quotas and the "traditional space quota"? Is the latter the summation of quotas from all types? If I set my SSD quota as 1GB and DISK quota as 2GB, do I need to separately set my overall space quota to be 3GB?

          We propose not to correlate storage type quota with the legacy space quota. The legacy space quota lacks fine-grained control of storage usage and is left as-is for backward compatibility. If you have 1GB SSD quota and 2GB DISK quota, your maximum overall space usage is limited to 3GB. You don't need to set the legacy quota in this case.

          This is where it would be helpful to exchange type quotas. Revising the example a little bit, let's assume I have 3 blocks of SSD quota and no DISK quota, and the cluster has no SSD storage and plenty of DISK storage. When trying to create a 1 block file with replication factor of 3, under ONE_SSD policy, the user would expect a success because he has 3 blocks of quota in the most expensive type. I understand setting "exchange rates" needs more discussion. Can we at least allow more expensive quotas to be used on less expensive storages?

          The example above looks like a configuration that needs the admin to either add more SSD storage or increase the DISK quota. By default, quota should be simple and strict. The exchange rate actually could be a guideline for admins setting up quotas of different storage types. If automatic correlation between quotas of different storage types is desired, it should be explicitly set through policies.

          xyao Xiaoyu Yao added a comment -

          Update patch with the editsStored and some minor fixes.

          zhz Zhe Zhang added a comment -

          If you have 1GB SSD quota and 2GB DISK quota, your maximum overall space usage is limited to 3GB. You don't need to set the legacy quota in this case.

          This makes sense. But what happens if the user did configure a legacy quota, say 2GB in this case? Will it be enforced together with the 1GB SSD + 2GB DISK quotas?

          The exchange rate actually could be a guideline for admins setting up quotas of different storage types. If automatic correlation between quotas of different storage types is desired, it should be explicitly set through policies.

          Sounds good to me. We can revisit it as a follow up.

          xyao Xiaoyu Yao added a comment -

          But what happens if the user did configure a legacy quota, say 2GB in this case? Will it be enforced together with the 1GB SSD + 2GB DISK quotas?

          Good question. Yes, they will be enforced together. When both are configured, whichever quota is exceeded first throws a QuotaExceeded exception.
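          A minimal sketch of the combined enforcement (illustrative only, not the actual NameNode verifyQuota code; the method name, parameters, and exception types are simplified assumptions):
          {code}
          // Both limits are checked on every allocation; whichever is exceeded
          // first fails the write. By quota convention, -1 means "no quota set".
          static void checkQuotas(long legacySpaceQuota, long usedSpace, long spaceDelta,
                                  long ssdQuota, long usedSsd, long ssdDelta) {
            if (legacySpaceQuota >= 0 && usedSpace + spaceDelta > legacySpaceQuota) {
              throw new IllegalStateException("space quota exceeded");   // ~QuotaExceeded
            }
            if (ssdQuota >= 0 && usedSsd + ssdDelta > ssdQuota) {
              throw new IllegalStateException("SSD quota exceeded");     // ~QuotaByStorageTypeExceeded
            }
          }
          {code}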

          xyao Xiaoyu Yao added a comment -

          Include the editsStored binary in the patch.

          xyao Xiaoyu Yao added a comment -

          Update patch with refactoring and setReplication handling.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12694377/HDFS-7584.4.patch
          against trunk revision 8f26d5a.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9323//console

          This message is automatically generated.

          zhz Zhe Zhang added a comment -

          Xiaoyu Yao, will the legacy quota become deprecated after this change? If so we should mention it in the documentation. Otherwise we should add some guidelines on how to set both legacy and type-aware quotas. The above example, where the supposed overall quota is smaller than the aggregate type quotas, isn't so easy to understand.

          Otherwise the patch looks good to me. Thanks again for the good work!

          arpitagarwal Arpit Agarwal added a comment -

          Hi Xiaoyu Yao, thanks for picking up this non-trivial task. Here is some initial feedback on the patch:

          1. StorageType: You don't need a new constructor parameter for supportQuotaByStorageType. You can just use !isTransient. We can add the flexibility in the future if needed.
          2. StorageType.isStorageTypeChosen can be removed. It looks out of place in its public interface.
          3. StorageType.getQuotableTypes is oddly named. Perhaps typesSupportingQuota is better?
          4. You don't need the StorageTypeProto.NONE or the default value for it in SetQuotaRequestProto. Since it is an optional field we can check for its presence with req.hasStorageType in ClientNamenodeProtocolPB.setQuota.
          5. Similarly ClientNamenodeProtocolTranslatorPB.setQuota should not invoke setStorageType if the type parameter is null.
          6. The loop in StorageType.parseStorageType(String) can be replaced with StorageType.valueOf(s.toUpperCase()), right?
          7. The loop in StorageType.parseStorageType(int) is not necessary. You can just use StorageType.values()[i] instead. The StorageType.values() array can be cached in a static field to avoid allocating it repeatedly. The 'return null' fallback will be lost and we will bubble up an exception instead. I checked the callers and think that will be fine. (A sketch of 6 and 7 follows this list.)
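          A minimal sketch of the simplification in items 6 and 7 (assuming the StorageType enum under review; this is not the committed code):
          {code}
          // Cache values() once; each call to values() otherwise allocates a new array.
          private static final StorageType[] VALUES = StorageType.values();

          static StorageType parseStorageType(String s) {
            // valueOf throws IllegalArgumentException on unknown input instead of returning null.
            return StorageType.valueOf(s.toUpperCase());
          }

          static StorageType parseStorageType(int i) {
            // An out-of-range ordinal now bubbles up ArrayIndexOutOfBoundsException.
            return VALUES[i];
          }
          {code}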

          More feedback to follow.

          arpitagarwal Arpit Agarwal added a comment -

          A few more comments to wrap up feedback on the API and protocol.

          1. Continuing from previous, you also don't need the null check in PBHelper.convertStorageType.
          2. DFSClient.java:3064 - Fix formatting.
          3. Just an observation, and you don't really need to fix it. I realize you are trying to avoid a new RPC call by modifying the existing ClientProtocol.setQuota call. But it does create a confusing difference between that and DFSClient.setQuota, which has two overloads, and the matching overload of DFSClient.setQuota behaves differently (throws an exception on null). Perhaps it is better to add a new ClientProtocol.setQuota RPC call. Either is fine though.
          4. Do we need the new config key DFS_QUOTA_BY_STORAGETYPE_ENABLED_KEY? The administrator can already choose to avoid configuring per-type quotas, so I am not sure the new configuration is useful.
          5. DistributedFileSystem.setQuotaByStorageType - the Javadoc ("one or more [Storage Type, space Quota] pairs") does not match the signature.

          Looking at the NN changes next.

          arpitagarwal Arpit Agarwal added a comment - edited

          Xiaoyu Yao, considering the size of the patch, do you think we should split it into at least two smaller patches? I can think of at least one natural split:

          1. Part 1 is the API, protocol and tool changes.
          2. Part 2 is the NameNode implementation.

          In any case I started looking at the NN changes. Some initial feedback below, mostly nitpicks. I will look at the rest later this week:

          1. I did not understand the todo in INode.java:464. If it is something that would be broken and is not too hard to fix perhaps we should include it in the same checkin? This is perhaps another argument for splitting into two patches at a higher level.
          2. QuotaCounts: It has four telescoping constructors, all private. It is a little confusing. Can we simplify the constructors? e.g. the default constructor can be replaced with initializers.
          3. QuotaCounts: typeSpaces and typeCounts are used interchangeably. We should probably name them consistently.
          4. NameNodeLayoutVersion: description of the new layout is too terse, probably unintentional.
          5. Could you please add a short comment for ONE_NAMESPACE? I realize it was even more confusing before, thanks for adding the static initializer.
          6. INode.getQuotaCounts - don't need local variable qc.
          7. Nitpick, optional: EnumCounters.allLessOrEqual and .anyGreatOrEqual - can we use foreach loop?
          8. DFSAdmin.java: The space quota is set onto storage type should be The storage type specific quota is set when ...
          9. Unintentional whitespace changes in Quota.java?
          xyao Xiaoyu Yao added a comment -

          Thanks Arpit for the review and feedback. I will provide a new patch with the update soon.

          I can think of at least one natural split:

          Part 1 is the API, protocol and tool changes.
          Part 2 is the NameNode implementation.

          That sounds good to me. I will try splitting it up after finishing the first round of review.

          QuotaCounts: It has four telescoping constructors, all private. It is a little confusing. Can we simplify the constructors? e.g. the default constructor can be replaced with initializers.

          Agree. That's something I plan to refactor as well. I tried the builder pattern, which could help maintain this class when we need to add more counters in the future that need to be initialized.
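          A minimal sketch of that builder shape (an assumed direction for the refactor, not the committed QuotaCounts code):
          {code}
          public class QuotaCounts {
            private final long nameSpace;
            private final long storageSpace;
            private final long[] typeSpaces;  // one slot per quotable StorageType

            public static class Builder {
              // -1 follows the existing "quota unset" convention.
              private long nameSpace = -1;
              private long storageSpace = -1;
              private long[] typeSpaces = new long[0];

              public Builder nameSpace(long v)    { this.nameSpace = v; return this; }
              public Builder storageSpace(long v) { this.storageSpace = v; return this; }
              public Builder typeSpaces(long[] v) { this.typeSpaces = v.clone(); return this; }
              public QuotaCounts build()          { return new QuotaCounts(this); }
            }

            private QuotaCounts(Builder b) {
              this.nameSpace = b.nameSpace;
              this.storageSpace = b.storageSpace;
              this.typeSpaces = b.typeSpaces;
            }
          }
          {code}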

          I did not understand the todo in INode.java:464. If it is something that would be broken and is not too hard to fix perhaps we should include it in the same checkin? This is perhaps another argument for splitting into two patches at a higher level.

          This is for the "hadoop fs -count -q" command, where the majority of the change will not be under hadoop-hdfs-project. It also needs StorageType.java moved from hadoop-hdfs to hadoop-common, which I prefer to do after the initial check-in.

          QuotaCounts: typeSpaces and typeCounts are used interchangeably. We should probably name them consistently.

          The method parameters use typeSpaces to be consistent with the other parameters (namespace, diskspace). The class member variable is named typeCounts.

          Nitpick, optional: EnumCounters.allLessOrEqual and .anyGreatOrEqual - can we use foreach loop?

          Agree that foreach will make it syntactically neat.
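          For example (illustrative only; counters stands in for the long[] array backing EnumCounters):
          {code}
          boolean allLessOrEqual(final long[] counters, final long val) {
            for (long c : counters) {  // foreach instead of an indexed loop
              if (c > val) {
                return false;
              }
            }
            return true;
          }
          {code}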

          xyao Xiaoyu Yao added a comment -

          Thanks Arpit again for the feedback. I've fixed your comments 1, 2, and 5.

          Just an observation, and you don't really need to fix it. I realize you are trying to avoid a new RPC call by modifying the existing ClientProtocol.setQuota call. But it does create a confusing difference between that and DFSClient.setQuota which has two overloads and the matching overload of DFSClient.setQuota behaves differently (throws exception on null). Perhaps it is better to add a new ClientProtocol.setQuota RPC call. Either is fine though.

          The decision to compatibly reuse the existing setQuota RPC call instead of adding a new one is based on feedback from the design review. The API that throws an exception on a null storage type is the new API DFSClient#setQuotaByStorageType, which has a different signature. We keep the original DFSClient#setQuota(dsQuota, nsQuota) as-is so that it won't break existing clients. With a single setQuota RPC message, I think it should be fine.

          Do we need the new config key DFS_QUOTA_BY_STORAGETYPE_ENABLED_KEY? The administrator can already choose to avoid configuring per-type quotas so I am not sure the new configuration is useful.

          Adding the key allows us to completely disable the feature. Without the key, an admin could accidentally configure and enable this feature. I can remove it if this is not needed.

          xyao Xiaoyu Yao added a comment -

          Rebase to match trunk and address the review feedback.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12695054/HDFS-7584.5.patch
          against trunk revision 9850e15.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 8 new or modified test files.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9359//console

          This message is automatically generated.

          xyao Xiaoyu Yao added a comment -

          Update the patch with the missing new file.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12695086/HDFS-7584.6.patch
          against trunk revision caf7298.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9360//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12695090/HDFS-7584.7.patch
          against trunk revision caf7298.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9361//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12695191/HDFS-7584.8.patch
          against trunk revision 5a0051f.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9369//console

          This message is automatically generated.

          xyao Xiaoyu Yao added a comment -

          Removed the editsStored binary diff from the patch; otherwise, patch V9 is the same as patch V8.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12695195/HDFS-7584.9.patch
          against trunk revision 7882bc0.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified test files.

          -1 javac. The applied patch generated 1193 javac compiler warnings (more than the trunk's current 1191 warnings).

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 3 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9370//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9370//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Javac warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9370//artifact/patchprocess/diffJavacWarnings.txt
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9370//console

          This message is automatically generated.

          xyao Xiaoyu Yao added a comment -

          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer needs the new binary editsStored in V8; I will upload a separate editsStored file.
          org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream needs the number of editOps bumped from 49 to 50.

          I will post a new patch that fixes core tests and javac/findbugs shortly.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12695300/editsStored
          against trunk revision fe2188a.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9372//console

          This message is automatically generated.

          xyao Xiaoyu Yao added a comment -

          Reorder the patch name for Jenkins to pick up the latest.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12695305/HDFS-7584.9a.patch
          against trunk revision 57b8950.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 10 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 3 new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestDecommission
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
          org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9373//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9373//artifact/patchprocess/patchReleaseAuditProblems.txt
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9373//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9373//console

          This message is automatically generated.

          xyao Xiaoyu Yao added a comment -

          Fix the findbugs issue.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12695373/HDFS-7584.9b.patch
          against trunk revision ad55083.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 10 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 2 new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9375//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9375//artifact/patchprocess/patchReleaseAuditProblems.txt
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9375//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9375//console

          This message is automatically generated.

          xyao Xiaoyu Yao added a comment -

          Refactor addDirectoryWithQuotaFeature.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12695445/HDFS-7584.9c.patch
          against trunk revision f2c9109.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 10 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
          org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9378//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9378//artifact/patchprocess/patchReleaseAuditProblems.txt
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/9378//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9378//console

          This message is automatically generated.

          arpitagarwal Arpit Agarwal added a comment -

          Resolving as the bulk of the feature has been merged to branch-2.

          The remaining work is more unit tests, updated tooling, documentation and minor post-merge refactoring.

          Thanks for the significant contribution, Xiaoyu Yao! Could you please update this JIRA with a short release note?

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #7092 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7092/)
          HDFS-7584. Update CHANGES.txt (arp: rev 9e33c9944cbcb96f9aab74eafce20fe50fe7c9e8)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          xyao Xiaoyu Yao added a comment -

          Thanks all for the review and feedback. I will update the release note for this feature under HDFS-7584.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #103 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/103/)
          HDFS-7584. Update CHANGES.txt (arp: rev 9e33c9944cbcb96f9aab74eafce20fe50fe7c9e8)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #837 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/837/)
          HDFS-7584. Update CHANGES.txt (arp: rev 9e33c9944cbcb96f9aab74eafce20fe50fe7c9e8)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #2035 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2035/)
          HDFS-7584. Update CHANGES.txt (arp: rev 9e33c9944cbcb96f9aab74eafce20fe50fe7c9e8)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #104 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/104/)
          HDFS-7584. Update CHANGES.txt (arp: rev 9e33c9944cbcb96f9aab74eafce20fe50fe7c9e8)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2054 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2054/)
          HDFS-7584. Update CHANGES.txt (arp: rev 9e33c9944cbcb96f9aab74eafce20fe50fe7c9e8)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/)
          HDFS-7584. Update CHANGES.txt (arp: rev 9e33c9944cbcb96f9aab74eafce20fe50fe7c9e8)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

            People

            • Assignee:
              xyao Xiaoyu Yao
             • Reporter:
              xyao Xiaoyu Yao
            • Votes:
               0
             • Watchers:
               11

              Dates

              • Created:
              • Updated:
              • Resolved:

                Development