Hadoop HDFS / HDFS-1751

Intrinsic limits for HDFS files, directories

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.22.0
    • Fix Version/s: 0.23.0
    • Component/s: datanode
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Enforce a configurable limit on:
      • the length of a path component
      • the number of names in a directory

      The intention is to prevent a too-long name or a too-full directory. This is not about RPC buffers, the length of command lines, etc. There may be good reasons for those kinds of limits, but that is not the intended scope of this feature. Consequently, a reasonable implementation might be to extend the existing quota checker so that it faults the creation of a name that violates the limits. This strategy of faulting new creation evades the problem of existing names or directories that violate the limits.
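
      For illustration only (not part of the original description): assuming the property names the discussion below settles on (dfs.namenode.fs-limits.max-component-length and dfs.namenode.fs-limits.max-directory-items), and assuming a value of 0 means "off" (the limits default to off), an operator might enable both limits in hdfs-site.xml roughly like this; the values are made up:

        <property>
          <name>dfs.namenode.fs-limits.max-component-length</name>
          <value>255</value>
          <description>Maximum length of any single path component; 0 disables the check.</description>
        </property>
        <property>
          <name>dfs.namenode.fs-limits.max-directory-items</name>
          <value>1048576</value>
          <description>Maximum number of items in any single directory; 0 disables the check.</description>
        </property>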

      Attachments

      1. HDFS-1751-6.patch
        16 kB
        Daryn Sharp
      2. HDFS-1751-5.patch
        16 kB
        Daryn Sharp
      3. HDFS-1751-4.patch
        16 kB
        Daryn Sharp
      4. HDFS-1751-3.patch
        16 kB
        Daryn Sharp
      5. HDFS-1751-2.patch
        21 kB
        Daryn Sharp
      6. HDFS-1751.patch
        20 kB
        Daryn Sharp

          Activity

          Daryn Sharp added a comment -

          Add min/max component length, max items per directory. Full set of unit tests.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12473571/HDFS-1751.patch
          against trunk revision 1080836.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.TestFileConcurrentReader

          -1 contrib tests. The patch failed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/257//testReport/
          Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/257//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/257//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          The following tests that failed are unrelated to this change, and have been failing for at least a week.

          org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer
          org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit
          org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

          The "TestFileConcurrentReader" issue is caused by the build server running out of file descriptors, and appears to be another long standing failure.

          Please approve since the unit test failures appear unrelated.

          Boris Shkolnik added a comment -

           Quick question - how is the 'limit on the number of names in a directory' different from the directory name quota?

          Daryn Sharp added a comment -

          As far as I can tell, there is an existing quota for all the items in the entire tree beneath a directory, but this is a quota for the number of allowed items in each specific directory in the filesystem. Ie. you may have a current quota of 15 items for a directory tree, but the new quota only allows each directory to have 5 items.

          Boris Shkolnik added a comment -

           Non-code questions:
           1. Why do we need a limit on minimum component length?

           Some code notes:
           1. Please use @Override when overriding methods.
           2. verifyFSLimits should throw FSLimitException (the most specific exception).
           Also update the javadoc for this method.
           3. Do we really need such specific exceptions? Can't we just use FSLimitException with different messages?
           4. Please use JUnit 4 for the tests.

          Daryn Sharp added a comment -

          Minimum component length isn't strictly needed. I only added it for symmetrical completeness. I'll remove if it's deemed completely superfluous.

          Added @Override annotations.

          Converted to junit 4.

          I'd prefer to keep specific exception types to make it easier for client code to differentiate between errors (parsing strings is fragile). As currently implemented, clients can generically catch FSLimitException, or a specific limit exception. The other quota exceptions are modeled this way. Keep in mind that Pig wants to be able to differentiate errors.

          Thoughts?
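
           A minimal sketch of the exception layout described above (the two concrete subclass names are illustrative placeholders, not necessarily the ones used in the patch; only FSLimitException extending the existing quota exception is stated in this thread):

             /** Base class for fs-limit violations; extends the existing quota exception. */
             public class FSLimitException extends QuotaExceededException {
               protected FSLimitException(String msg) { super(msg); }

               /** Thrown when a single path component exceeds the configured maximum length. */
               public static class PathComponentTooLongException extends FSLimitException {
                 public PathComponentTooLongException(String msg) { super(msg); }
               }

               /** Thrown when a directory already holds the configured maximum number of items. */
               public static class MaxDirItemsExceededException extends FSLimitException {
                 public MaxDirItemsExceededException(String msg) { super(msg); }
               }
             }

             // A client such as Pig can catch FSLimitException generically, or one of
             // the specific subclasses when it needs to tell the two errors apart.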

          Jakob Homan added a comment -

          As far as I can tell, there is an existing quota for all the items in the entire tree beneath a directory, but this is a quota for the number of allowed items in each specific directory in the filesystem. Ie. you may have a current quota of 15 items for a directory tree, but the new quota only allows each directory to have 5 items.

          I share Boris' concern over this. This feature seems like a refinement to the namespace quota, but is being implemented completely separately from it. This may well lead to confusion on the users' part and annoyance for Ops. Let's discuss this further before this is committed.

          dhruba borthakur added a comment -

           Is there a way to integrate this into the directory-quota mechanism? That would be nice.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12473710/HDFS-1751-2.patch
          against trunk revision 1081580.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/260//testReport/
          Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/260//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/260//console

          This message is automatically generated.

          Daryn Sharp added a comment -

           I believe this is a feature requested by our internal ops. The limits default to off, so it shouldn't be an annoyance to ops because they would be the ones defining it.

           Please help me understand the more desirable implementation. As posted, it does throw a quota exception. However, the checks are not in the actual quota method itself because it's too deep. Too many other calls funnel through that quota method with the quota flag enabled. This would cause those non-directory-manipulating commands to fail if pre-existing paths exceed the limits. By popping it up one level in the call stack, it's ensured that the checks will only apply to directory-level operations.

          Would it be preferable that I derive the exceptions from NSQuotaExceededException instead of QuotaExceededException?

          Boris Shkolnik added a comment -

           Ok, so my understanding is that these changes allow imposing cluster-wide limitations on the length of names and the number of items in a directory.
           Unlike the quota system - which is per directory.

           I still cannot imagine a case where we need a minimum name length check.

           A couple more comments:
           1. Could you see if you can use mock() for FSNamesystem instead of changing the constructor for FSDirectory?
           2. In your tests, it seems like the testMinComponentsAndMaxDirContents() test covers all the cases. Do we really need the others (testNoLimits, testMinComponentLength, testMaxComponentLength and testMaxDirContents)?

          Jakob Homan added a comment -

           My main concern with this patch centers around the maximum-files-per-directory quota that is being added. This is a type of quota (a subset of the namespace quota) but is being treated completely differently from that quota. For instance, the maximum number of files is specified via a config param: dfs.namenode.fs-limits.max-directory-contents, whereas none of the other quotas are settable via config. The config name itself is problematic: dfs.namenode.fs-limits.max-directory-contents. "Contents" is ambiguous: bytes, namespace, etc.? It should be something along the lines of per-directory-nsquota.

          Moreover, this quota is not settable via the standard quotable commands, nor is it documented at all in the forrest documents.

          I'm actually a fan of the specific types of implementations. We'll get a big win from strongly typed exceptions instead of string-ly typed ones.

          If we're going to introduce a new quota, it should follow the patterns of the previous ones.

          Boris Shkolnik added a comment -

           This seems to be a somewhat different kind of limitation. It is a cluster-wide setting and it is separate from quota. Quotas are per specific entity - user, directory and so on. If the operator of a cluster needs to impose such a cluster-wide limitation, he/she may want to use this new configuration. And that's why it needs to be done through config. So we shouldn't view this setting as a quota at all.

           I agree with Jakob's other comment - regarding the name of the config setting.
           If you guys think using specific types of exceptions is preferable - sure, go ahead.

          Daryn Sharp added a comment -

          Removed minimum component length. Switched to using a mock object. Changed option to "max-directory-items". Please let me know if the name is acceptable.

          What do I need to do for forrest docs? I can see the settings documented in the build/docs/hdfs-default.html.

          I would prefer to keep the tests that individually enable the limits in addition to the test that enables both. This will make it much easier to know what is broken if the test fails. I've never been told I had too many tests, but I will remove if I must.

          Daryn Sharp added a comment -

          Add comment regarding how rename interacts with quota and fs limit checks.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12473850/HDFS-1751-3.patch
          against trunk revision 1082263.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.TestFileConcurrentReader
          org.apache.hadoop.hdfs.TestSafeMode
          org.apache.hadoop.hdfs.TestWriteConfigurationToDFS

          -1 contrib tests. The patch failed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/263//testReport/
          Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/263//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/263//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12473855/HDFS-1751-4.patch
          against trunk revision 1082263.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.server.namenode.TestNodeCount

          -1 contrib tests. The patch failed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/265//testReport/
          Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/265//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/265//console

          This message is automatically generated.

          dhruba borthakur added a comment -

           Can somebody please explain the use-case for limiting the maximum number of files in a specific directory? Unlike other filesystems (ext3, vxfs, etc.), HDFS does not have the concept of indirect blocks, thus there is no overhead from having all the files in one directory versus having the same number of files spread out in different directories.

           The only resource limitation I can think of is that getListStatus() on the directory could return a large number of files, but since this is now handled via an iterative getListStatus RPC, this should not be a problem.

          Daryn Sharp added a comment -

          Although the fs itself has no limitations, there is still value in having the flexibility to establish limits. I believe the primary use-case is to simply prevent users/scripts from inadvertently or maliciously creating extremely long paths and/or extremely full directories. It's intended as an optional setting that will default to off. I will gather more details if you continue to feel that the feature is dubious.

          Robert Chansler added a comment -

          This feature was suggested after a couple of incidents where user applications exhausted some resource by behaving in a way that was deeply wrong and (probably) unintended. Can HDFS fault bad jobs cheaply? Whereas quotas are about playing well with others (sharing the commons), these (global) limits are intended to defend against reckless accidents.

          And not only is the file system protected, but these rules might benefit a user community that shares a quota, and the job system that has its own sensitivities to accidental behavior.

          Jakob Homan added a comment -

          A quota by any other name would still limit the number of objects that can be created in the namespace just as sweetly.

          The following code from the patch is very telling:

             private <T extends INode> T addChild(INode[] pathComponents, int pos,
                 T child, long childDiskspace, boolean inheritPermission,
                 boolean checkQuota) throws QuotaExceededException {
          +	// The filesystem limits are not really quotas, so this check may appear
          +	// odd.  It's because a rename operation deletes the src, tries to add
          +	// to the dest, if that fails, re-adds the src from whence it came.
          +	// The rename code disables the quota when it's restoring to the
          +	// original location becase a quota violation would cause the the item
          +	// to go "poof".  The fs limits must be disabled for the same reason.
          

          Essentially it's saying we're not doing quota checking, except that we're throwing a FSLimitException (which extends and is therefore a QuotaExceededException) and we have to do the exact same workaround that the quota check has to do in this situation - but we're still not a quota check.

          after a couple of incidents where user applications exhausted some resource by behaving in a way that was deeply wrong and (probably) unintended.

          Could these incidents have been prevented by judicious use of the existing namespace quota?

          My concern remains that this quota is implemented separately and in parallel from the main quota checking mechanism, adding more state, more paths and more opportunity for bugs. Could we accomplish the same thing by defining a default NSQuota for new directories and allowing this to be specified via configuration file or command line (as with the other quotas)?

          I'm -1 on this patch as it stands.

          Tsz Wo Nicholas Sze added a comment -

          Could these incidents have been prevented by judicious use of the existing namespace quota?

          My concern remains that this quota is implemented separately and in parallel from the main quota checking mechanism, adding more state, more paths and more opportunity for bugs. Could we accomplish the same thing by defining a default NSQuota for new directories and allowing this to be specified via configuration file or command line (as with the other quotas)?

           NSQuota on a directory limits the total number of items under that directory. So the default NSQuota proposal does not seem to work.

          Jakob Homan added a comment -

          In a directory with no subdirectories, this quota and NSQuota would be equivalent. Without an accompanying NSQuota limit, the system would still be vulnerable to a user creating 10 million directories, which would hurt the system just as much.

          Jakob Homan added a comment -

          I should mention that I'm vetoing the idea of this, just the implementation. A NSPerDirQuota, handled just like any other quota plus the ability to have a default specified via configs, would be a reasonable alternative.

          Allen Wittenauer added a comment -

          > This feature was suggested after a couple of incidents where user applications exhausted
          > some resource by behaving in a way that was deeply wrong and (probably) unintended.
          > Can HDFS fault bad jobs cheaply?

          Artificial limits such as these are very simple to defeat though. Never underestimate a determined user. I can easily see the end result being that the user will just create X directories, and then create Y files under those directories using a hash structure. Whatever resource was being exhausted will likely continue to be exhausted.

          (The only resource I can imagine being a problem is if a job has been given so many files as part of its input path that it blows the heap. But this is the wrong place to implement that type of fix...)

          Jakob Homan added a comment -

          s/I'm vetoing/I'm not vetoing/g. Sigh.

          Tsz Wo Nicholas Sze added a comment -

          Artificial limits such as these are very simple to defeat though. Never underestimate a determined user. I can easily see the end result being that the user will just create X directories, and then create Y files under those directories using a hash structure. Whatever resource was being exhausted will likely continue to be exhausted.

          Allen, in this case, NSQuota is already good enough.

           Intrinsic limits are for preventing bugs in user code. We observe this problem from time to time.

          Daryn Sharp added a comment -

          (I'll avoid the design dispute, and explain why the implementation is the way it is)

          The main quota method, updateCount(), is too low in the call stack and is designed to handle disk and inode changes. Allowing updateCount() to perform the limit checks will cause issues because too many other operations call it, and updateCount() can't discern why it's been invoked. A few examples of issues that would occur:
          1) adding or removing a block from a file will fail if the directory item limit has been reached
          2) changing the replication factor on a pre-existing file that exceeds either of the limits will fail
          3) updating the disk quota counts via updateSpaceConsumed() will fail if either of the limits are reached
          ...etc...

           To address these issues, at a minimum, a boolean will need to be passed to updateCount() to indicate if a filesystem directory update is occurring (i.e., only addChild() will pass true). The new INode will also need to be passed to updateCount() to check the component length. This would be a more complex change that places an undue burden on the callers of updateCount() to pass the right args, just to avoid having addChild() perform the fs limit checks.

          Please let me know if I'm overlooking anything in my analysis.
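
           Putting the two patch fragments quoted in this thread together, the shape of the change is roughly the following (a sketch, not the exact patch; the surrounding quota accounting is elided):

             private <T extends INode> T addChild(INode[] pathComponents, int pos,
                 T child, long childDiskspace, boolean inheritPermission,
                 boolean checkQuota) throws QuotaExceededException {
               // checkQuota is false when rename() is re-adding the source after a
               // failed move, so the fs limits are skipped for the same reason the
               // quota check is: failing here would make the item disappear.
               if (checkQuota) {
                 verifyFsLimits(pathComponents, pos, child);  // component length + items per directory
               }
               // ... existing quota accounting (updateCount(), etc.) and the actual
               // insertion into the parent directory follow, unchanged ...
             }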

          Boris Shkolnik added a comment -

          Using two different mechanisms makes sense to me. It is like 'ulimit' and quota on a unix system. They solve different problems.
           Regarding the quote mentioned by Jakob, it looks like that comment is related only to the 'reusing' of the boolean flag.

          Daryn Sharp added a comment -

           Yes, the comment is regarding the "reuse" of the checkQuota flag. The lines that immediately follow are:

           + if (checkQuota) {
           +   verifyFsLimits(pathComponents, pos, child);
           + }
          Jakob Homan added a comment -

          First off, this JIRA is named incorrectly. The intrinsic limits on HDFS files and directories are a function of namenode memory and our ability to address inodes. What's being proposed is an optional setting, so by definition, it's not intrinsic. Instead, what's being suggested is a quota (synonym: limit) on the number of files created in a single directory, which translated to class-speak would be NSPerDirectoryFileQuota. A case can be made that this would be a good quota to have, in addition to the existing NSQuota. Nestled cozily amongst the other quotas, it would make sense. But implemented with a different name and separated in the code from its fellows, it's error prone and confusing.

          (I'll avoid the design dispute, and explain why the implementation is the way it is)

          The actual implementation is much less important than how this is exposed to the user. Adding an extra parameter to updateCount is fine if it is in aid of getting the correct implementation of this feature.

          It is like 'ulimit' and quota on a unix system. They solve different problems.

           Not really. I was fully expecting another JIRA to implement ulimit-like functionality to prevent users from opening millions of files at the same time, which may be reasonable but is orthogonal to this.

          Robert Chansler added a comment -

           First, I must ‘fess up that “intrinsic” was my suggestion. I understand Jakob’s point, but defend my choice as well representing the intention that the limit be independent of any particular directory or user. The whole point is to avoid exhausting resources by foolish/mistaken activity. And quota is a resource, which is why this facility needs to be independent. Testing the limits (“intrinsic” or otherwise) at the same place as the quotas are tested makes sense, as the same logic applies if the test fails. (The information returned to the client should distinguish the faults.) But the intent would be defeated if there were anything like per-directory configuration. Not to mention that the implementation would be more complex, extra space would be consumed, and the complexity presented to users and administrators would be increased.

          Tsz Wo Nicholas Sze added a comment -

           Jakob, any response to Robert's last comment?

          Jakob Homan added a comment -

          Nicholas - I had sent you, Rob and Daryn an email letting you know I'd be out of town this last week and apologizing for disappearing in the middle of the discussion. I do hope you got it. Still catching up.

          I think the disagreement at this point is between per-dir or globally defined. Per-dir would be more expensive, but would provide more utility than globally configured (I also think this functionality may be subject to feature creep that per-dir would preempt). A compromise would be to go with globally configured now and if this does creep, to move its implementation to per-dir later on (shoehorning in backwards compatibility). I would be fine with that.

          However, no one has responded to my concern about code duplication and we're-really-not-a-quota-even-though-we-are concerns. If globally configured can be done with a correct design of no code duplication and cleanly integrated with the regular quota code (to avoid duplication and get better code coverage), I'll be happy. This is also more future proof if the need for per-dir arises.

          Tsz Wo Nicholas Sze added a comment -

          > Nicholas - I had sent you, Rob and Daryn an email letting you know ...

           Sure, I got your emails. I believe you have already been back for a while. It is inefficient for a community to wait for one person. Also, I recall that you seem to prefer communication on JIRA or the public mailing list. No?

          Anyway, I am glad that you are back. Hope you could spend time on this.

          Daryn Sharp added a comment -

           I'd like to request clarification on the code duplication concern. That usually implies copy-and-paste coding, but that's not the case here. This is a fs limit, so the check is highly specific to addChild(...) and the code is very trivial when invoked from that method.

           I could change the limit exception to not derive from the base quota exception, but that will cause a ripple of method signature changes and may require the addition of catch blocks up the callers' chains. OTOH, having the limit exceptions derive from a quota exception will make it much easier if/when per-directory limits are added.

          Jakob Homan added a comment -

          I'm good with the fslimit deriving from quota exception (again, because this is a quota check). If we're going to go with global limits for now, this is probably the best solution. -1 withdrawn. No +1 given; someone needs to do a full review of the latest patch.

          Daryn Sharp added a comment -

           Removed the conf option for min path length since that code was withdrawn. Standardized use of "content" vs. "item".

          John George added a comment -

           The code looks pretty good to me.

           "name" in "dfs.namenode.fs-limits.max-component-length" is added multiple times in hdfs-default.xml.

           Like we discussed offline, I had a comment as to whether pathComponents[pos-1] in verifyFsLimits can ever be the root inode, but like you said - since this is in the "addChild" routine, it always has at least one parent and so "pos-1" is valid.

           As a whole, the code looks like it's doing what you describe it should do.

          Daryn Sharp added a comment -

          Good catch. Removed duplicate name tag.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12476025/HDFS-1751-5.patch
          against trunk revision 1091131.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.server.datanode.TestBlockReport

          -1 contrib tests. The patch failed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/341//testReport/
          Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/341//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/341//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          The tests that failed are not related to this bug and have a history of failing.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12476035/HDFS-1751-6.patch
          against trunk revision 1091131.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.TestFileAppend4
          org.apache.hadoop.hdfs.TestLargeBlock
          org.apache.hadoop.hdfs.TestWriteConfigurationToDFS

          -1 contrib tests. The patch failed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/342//testReport/
          Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/342//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/342//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          May I please have a review/commit of this aging feature?

          Boris Shkolnik added a comment -

          +1.

          Boris Shkolnik added a comment -

          Committed to trunk. Thanks Daryn.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #600 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/600/)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #644 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/644/)
          HDFS-1751. Intrinsic limits for HDFS files, directories


            People

            • Assignee: Daryn Sharp
            • Reporter: Daryn Sharp
            • Votes: 0
            • Watchers: 12
