HDFS-3755: Creating an already-open-for-write file with overwrite=true fails

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: 3.0.0, 2.0.2-alpha
    • Component/s: namenode
    • Labels: None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed
    • Release Note:
      This is an incompatible change: Before this change, if a file is already open for write by one client, and another client calls fs.create() with overwrite=true, an AlreadyBeingCreatedException is thrown. After this change, the file will be deleted and the new file will be created successfully.

      Description

      If a file is already open for write by one client, and another client calls fs.create() with overwrite=true, the file should be deleted and the new file successfully created. Instead, it is currently throwing AlreadyBeingCreatedException.

      This is a regression since branch-1.
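
      For illustration, a minimal sketch of the reported scenario; the path, cluster configuration, and class name below are placeholders and are not taken from the patch or its test.

      // Minimal sketch of the reported scenario (illustrative only).
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class OverwriteOpenFileRepro {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          Path file = new Path("/testfile");

          // Client A creates the file and holds it open for write (it holds the lease).
          FileSystem clientA = FileSystem.newInstance(conf);
          FSDataOutputStream out = clientA.create(file, false);
          out.write(1);
          out.hflush();

          // Client B asks to overwrite the same path.
          // Expected (branch-1 behavior, restored by this patch): the open file is
          // deleted and the create succeeds.
          // Observed in 2.0.0-alpha: AlreadyBeingCreatedException is thrown.
          FileSystem clientB = FileSystem.newInstance(conf);
          FSDataOutputStream out2 = clientB.create(file, true /* overwrite */);
          out2.close();
        }
      }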

      Attachments

      1. hdfs-3755.txt (4 kB, Todd Lipcon)
      2. hdfs-3755.txt (5 kB, Todd Lipcon)

        Activity

        Todd Lipcon added a comment -

        Attached patch fixes the issue.

        The behavior with this patch is still a little strange: if you call create(overwrite=false) on a file open for write but with expired soft-lease, you will trigger lease recovery of that file because we still call recoverLeaseInternal even for overwrite=false. Fixing this would involve a bit more surgery of the FSNamesystem code, so I wanted to fix the behavioral regression here and leave further improvements for another JIRA.

        Suresh Srinivas added a comment -

        Todd, I am not sure if this is the right behavior. Perhaps branch-1 behavior itself is incorrect. When another client is holding the lease, I do not think we should allow create with overwrite=true to delete the file.

        Nicholas, Konstantin or Hairong any comments on this?

        Todd Lipcon added a comment -

        Todd, I am not sure if this is the right behavior. Perhaps branch-1 behavior itself is incorrect. When another client is holding the lease, I do not think we should allow create with overwrite=true to delete the file.

        Why not? Doesn't "overwrite" imply that you wish to delete prior to creating a new file? In the same way that we allow deleting an open file, I think we should allow the atomic delete-and-recreate that overwrite implies.

        Suresh Srinivas added a comment -

        Why not? Doesn't "overwrite" imply that you wish to delete prior to creating a new file?

        That is a valid point. That is sort of what I was thinking after I posted my comment.

        Lease ensures a single writer to a file in HDFS. By that token I agree that deletion should be allowed. With that behavior, we may have two writers writing the same file name/path. One of the writers would eventually fail.

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12538953/hdfs-3755.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2940//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2940//console

        This message is automatically generated.

        Todd Lipcon added a comment -

        Comparing this code to branch-1, it seems like the difference is actually client-side, not in the NameNode handling of create:

        In branch-1's DFSClient, we specifically set up a retry policy for create() when it throws AlreadyBeingCreatedException. The policy is to retry once every LEASE_SOFTLIMIT_PERIOD, up to 5 times (i.e., 5 minutes). Presumably this is so that, if a writer fails, another writer can call append() and take over from it, even if it makes this call before the old writer's soft limit has expired. This behavior goes way back to HADOOP-1263, circa 2007. We removed it when we moved a lot of the retry functionality in DFSClient into the IPC layer during the HA development.

        I think the right thing to do here is:
        a) commit this patch, since, as described above, it doesn't make sense to have to recover the lease when you're going to delete the file anyway for overwrite.
        b) think about whether the above transparent-retry behavior for create() is actually what we want to expose to developers. I personally would rather the caller be responsible for retrying if it expects to take over a lease from a prior writer - otherwise a call which should be fast could retry for up to 5 minutes. Since I can imagine there might be disagreement on this point, I'd propose taking it to a separate JIRA.
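
        For reference, a hedged sketch of the kind of per-method retry setup described above, built with the generic org.apache.hadoop.io.retry utilities. The class name, constant, and hard-coded soft-limit value are illustrative; this is not the actual branch-1 DFSClient code.

        // Sketch of a branch-1-style create() retry policy (illustrative only).
        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.TimeUnit;

        import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException;
        import org.apache.hadoop.io.retry.RetryPolicies;
        import org.apache.hadoop.io.retry.RetryPolicy;

        public class CreateRetryPolicySketch {
          // The HDFS soft-lease period is 60 seconds; hard-coded here for illustration.
          private static final long LEASE_SOFTLIMIT_PERIOD_MS = 60 * 1000;

          public static RetryPolicy policyForCreate() {
            // Retry once per soft-lease period, up to 5 times (~5 minutes total),
            // when the NameNode reports the file is already being created.
            RetryPolicy createPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
                5, LEASE_SOFTLIMIT_PERIOD_MS, TimeUnit.MILLISECONDS);

            Map<Class<? extends Exception>, RetryPolicy> exceptionToPolicy =
                new HashMap<Class<? extends Exception>, RetryPolicy>();
            exceptionToPolicy.put(AlreadyBeingCreatedException.class, createPolicy);

            // Any other remote exception falls through to "try once, then fail".
            return RetryPolicies.retryByRemoteException(
                RetryPolicies.TRY_ONCE_THEN_FAIL, exceptionToPolicy);
          }
        }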

        Tsz Wo Nicholas Sze added a comment -

        I think it is correct that fs.create() with overwrite=true could overwrite a file that is being written. However, users may expect to get an AlreadyBeingCreatedException, so let's mark this as an incompatible change.

        Hairong Kuang added a comment -

        We do have a use case where a later-running task may need to overwrite a file that is being written because the old writer may die. The way we do it today is to delete the file first and then create a new one with the same path.

        With the "overwritten" semantics that Todd introduced, we could cut one RPC call to NameNode. This sounds a good semantics to have. We need to mark this change incompatible, though.

        Aaron T. Myers added a comment -

        Note that this change doesn't introduce any new semantics per se. This patch just changes the user-visible behavior back to what it was in branch-1. It's fixing a bug that was inadvertently introduced in branch-2.

        Given that, I don't think that we need to mark this change as being incompatible, but I don't feel super strongly about this.

        Todd Lipcon added a comment -

        This patch just changes the user-visible behavior back to what it was in branch-1. It's fixing a bug that was inadvertently introduced in branch-2.

        True, this fixes a behavior regression in the case that the old writer had in fact lost its lease.

        BUT: the branch-1 behavior, if the old writer is in fact active and renewing its lease, would be that the "overwriter" client would fail with AlreadyBeingCreatedException after 5 minutes. So, this patch does change behavior in this case, but I believe in the correct direction.

        So, I'm in favor of marking it incompatible, so it shows up in the release notes, but putting it in branch-2 nonetheless.

        The question is whether we should also change this behavior in branch-1 itself. I think given the stability level of that branch, we should leave it alone.

        Aaron T. Myers added a comment -

        So, I'm in favor of marking it incompatible, so it shows up in the release notes, but putting it in branch-2 nonetheless.

        Good point. Makes sense.

        Aaron T. Myers added a comment -

        Patch looks pretty good to me. One small comment: I think you should change the exception text you assert in the second assertExceptionContains call to something more specific, e.g. "No lease on /testfile".

        +1 once this is addressed.
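
        For context, a hedged sketch of the assertion pattern being discussed; this is not the actual test in the patch, just an illustration of asserting on a lease-specific message with GenericTestUtils after the original writer's file has been overwritten.

        // Illustrative only: after another client re-creates the file with overwrite=true,
        // the original writer should fail with a lease error on its next operation.
        import java.io.IOException;

        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.test.GenericTestUtils;

        public class LeaseAssertionSketch {
          static void expectOldWriterFails(FSDataOutputStream oldWriterStream) throws Exception {
            try {
              oldWriterStream.close();
              throw new AssertionError("expected the old writer to fail after the overwrite");
            } catch (IOException ioe) {
              // Assert on a message specific to the lease error, per the review comment.
              GenericTestUtils.assertExceptionContains("No lease on /testfile", ioe);
            }
          }
        }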

        Todd Lipcon added a comment -

        Woops, sorry about that bad assert... new patch addresses that. I'll commit if it comes back clean from Jenkins.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12539724/hdfs-3755.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.TestFileConcurrentReader
        org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
        org.apache.hadoop.hdfs.TestDFSClientRetries
        org.apache.hadoop.hdfs.TestDatanodeBlockScanner

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2967//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2967//console

        This message is automatically generated.

        Todd Lipcon added a comment -

        I don't think the above failures are related - all of them have been flaky of late for various other reasons.

        Aaron T. Myers added a comment -

        +1, the latest patch looks good to me.

        Todd Lipcon added a comment -

        Committed to branch-2 and trunk. Thanks for reviews.

        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk-Commit #2630 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2630/)
        HDFS-3755. Creating an already-open-for-write file with overwrite=true fails. Contributed by Todd Lipcon. (Revision 1370937)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370937
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        Hudson added a comment -

        Integrated in Hadoop-Common-trunk-Commit #2565 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2565/)
        HDFS-3755. Creating an already-open-for-write file with overwrite=true fails. Contributed by Todd Lipcon. (Revision 1370937)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370937
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk-Commit #2585 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2585/)
        HDFS-3755. Creating an already-open-for-write file with overwrite=true fails. Contributed by Todd Lipcon. (Revision 1370937)

        Result = FAILURE
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370937
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk #1130 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1130/)
        HDFS-3755. Creating an already-open-for-write file with overwrite=true fails. Contributed by Todd Lipcon. (Revision 1370937)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370937
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #1162 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1162/)
        HDFS-3755. Creating an already-open-for-write file with overwrite=true fails. Contributed by Todd Lipcon. (Revision 1370937)

        Result = FAILURE
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370937
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        Tsz Wo Nicholas Sze added a comment -

        Added release note.

        Suresh Srinivas added a comment (edited) -

        Given a regression from branch-1 was fixed in this Jira, why is it incompatible?


          People

          • Assignee: Todd Lipcon
          • Reporter: Todd Lipcon
          • Votes: 0
          • Watchers: 14
