Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.22.0
    • Fix Version/s: 0.22.0
    • Component/s: build, test
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      The MR Hudson job is failing; it looks like a test is chmod'ing a build directory so the checkout can't clean the build dir.

      https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/549/console

      Building remotely on hadoop7
      hudson.util.IOException2: remote file operation failed: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk at hudson.remoting.Channel@2545938c:hadoop7
      at hudson.FilePath.act(FilePath.java:749)
      at hudson.FilePath.act(FilePath.java:735)
      at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:589)
      at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:537)
      at hudson.model.AbstractProject.checkout(AbstractProject.java:1116)
      at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:479)
      at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:411)
      at hudson.model.Run.run(Run.java:1324)
      at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
      at hudson.model.ResourceController.execute(ResourceController.java:88)
      at hudson.model.Executor.run(Executor.java:139)
      Caused by: java.io.IOException: Unable to delete /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/logs/userlogs/job_20101230131139886_0001/attempt_20101230131139886_0001_m_000000_0

      Attachments

      1. mapreduce-2238.txt
        4 kB
        Todd Lipcon
      2. mapreduce-2238.txt
        12 kB
        Todd Lipcon
      3. mapreduce-2238.txt
        12 kB
        Todd Lipcon

        Issue Links

          Activity

          Eli Collins created issue -
          Nigel Daley added a comment -

          Hudson slave runs as the user 'hudson'. Here is the directory listing that is failing. Why would the permissions on the dir be changed to 311?

          root@h7:/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/logs/userlogs/job_20101230131139886_0001# ls -al
          total 36
          drwx------ 9 hudson hudson 4096 2010-12-30 13:14 .
          drwxr-xr-x 3 hudson hudson 4096 2010-12-30 13:14 ..
          d-wx--x--x 2 hudson hudson 4096 2010-12-30 13:11 attempt_20101230131139886_0001_m_000000_0
          drwx------ 2 hudson hudson 4096 2010-12-30 13:12 attempt_20101230131139886_0001_m_000000_1
          drwx------ 2 hudson hudson 4096 2010-12-30 13:11 attempt_20101230131139886_0001_m_000001_0
          drwx------ 2 hudson hudson 4096 2010-12-30 13:12 attempt_20101230131139886_0001_m_000003_1
          drwx------ 2 hudson hudson 4096 2010-12-30 13:11 attempt_20101230131139886_0001_m_000005_0
          drwx------ 2 hudson hudson 4096 2010-12-30 13:12 attempt_20101230131139886_0001_m_000006_0
          drwx------ 2 hudson hudson 4096 2010-12-30 13:11 attempt_20101230131139886_0001_m_000007_0
          root@h7:/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/logs/userlogs/job_20101230131139886_0001# ls -al attempt_20101230131139886_0001_m_000000_0
          total 16
          d-wx--x--x 2 hudson hudson 4096 2010-12-30 13:11 .
          drwx------ 9 hudson hudson 4096 2010-12-30 13:14 ..
          -rw-r--r-- 1 hudson hudson  205 2010-12-30 13:11 log.index
          -rw-r--r-- 1 hudson hudson    0 2010-12-30 13:11 stderr
          -rw-r--r-- 1 hudson hudson 1687 2010-12-30 13:11 stdout
          
          Todd Lipcon added a comment -

          My hunch is that this is caused by some kind of bug in the new Localizer.PermissionsHandler.setPermissions originally introduced by MAPREDUCE-842.

          Greg Roelofs added a comment -

          Race condition/lack of locking?

          FWIW, I've seen this exact same bug several times when running "ant test" on an NFS tree. (IIRC, it screws up all subsequent tests, too, until the bogus directory is manually removed or chmod'd.)

          If the bug proves too elusive (and it's definitely intermittent and not at all frequent), a hackish workaround would be to just catch the IOException, attempt a chmod 700, and reattempt the deletion.
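          For illustration, a minimal sketch of that catch-and-retry workaround in plain Java; the fullyDelete helper is a hypothetical stand-in for whatever recursive delete the harness uses, and this is not code from any attached patch:

          // Sketch of the workaround: on a failed delete, restore owner rwx
          // ("chmod 700") on the offending directory and try once more.
          import java.io.File;
          import java.io.IOException;

          public class ChmodRetryDelete {
            static void fullyDelete(File f) throws IOException {
              deleteChildren(f);
              if (f.delete()) {
                return;
              }
              // An unreadable dir (e.g. mode 0311) can't be listed, so its
              // children were never removed; chmod 700 and retry the delete.
              f.setReadable(true, true);
              f.setWritable(true, true);
              f.setExecutable(true, true);
              deleteChildren(f);
              if (!f.delete()) {
                throw new IOException("Unable to delete " + f);
              }
            }

            private static void deleteChildren(File f) throws IOException {
              File[] children = f.listFiles(); // null if f is not a readable directory
              if (children != null) {
                for (File c : children) {
                  fullyDelete(c);
                }
              }
            }
          }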

          Todd Lipcon added a comment -

          I guess it could be a test timing out right as a setPermissions is done, interrupting in the middle... but seems pretty unlikely, don't you think?

          I agree we could work around it for the tests, but I'm nervous whether we will see this issue crop up in production. Have you guys at Yahoo seen this on any clusters running secure YDH?

          Todd Lipcon made changes -
          Link This issue is related to MAPREDUCE-1972 [ MAPREDUCE-1972 ]
          Todd Lipcon added a comment -

          Linked MAPREDUCE-1972, which has some logs that might be related.

          Jeff Hammerbacher made changes -
          Link This issue is related to MAPREDUCE-842 [ MAPREDUCE-842 ]
          Greg Roelofs added a comment -

          I guess it could be a test timing out right as a setPermissions is done, interrupting in the middle... but seems pretty unlikely, don't you think?

          Yes. I'm guessing it's more subtle than that and lies within the core MR code or the JVM. The fact that I see it semi-frequently on NFS (that is, more frequent than Hudson or production) suggests either timing (NFS is slow), perhaps via an erroneous assumption of synchronous behavior, or else an erroneous assumption of an infallible system call. It could be other things as well, of course, but those seem to me like the most probable candidates.

          I agree we could work around it for the tests, but I'm nervous whether we will see this issue crop up in production. Have you guys at Yahoo seen this on any clusters running secure YDH?

          To clarify, I was suggesting working around it in the MR code itself, not realizing that the Hudson backtrace wasn't using MR code at all. (Well, apparently.) So I'm not sure where that leaves us, other than trying to fix the actual set-permissions problem. Seems like no one's basic deleteRecursive() implementation includes an option to attempt a chmod() before failing on bad permissions?

          Anyway, yes, I think we've seen it in production with 0.20S or later, but it wasn't while I was on call, so I might be remembering a different issue with similar symptoms. Sorry...there are lots of interesting failure modes in Hadoop, and my memory is finite.

          Todd Lipcon added a comment -

          I think we should do the following:

          • change the implementation of setPermissions to use the old-style system("chmod") approach rather than the Java APIs (a rough sketch of this follows below)
          • implement a proper chmod call in libhadoop (JNI) to avoid the fork on production systems where the fork is too expensive
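          As a purely illustrative sketch of the first bullet (not the eventual patch): shelling out to chmod applies the whole mode in a single chmod(2) call from the child process instead of several java.io.File bit flips.

          // Illustrative only: apply a mode by exec'ing chmod in a child process.
          import java.io.IOException;

          public class ExecChmod {
            static void chmod(String path, String mode) throws IOException {
              ProcessBuilder pb = new ProcessBuilder("chmod", mode, path);
              pb.redirectErrorStream(true);
              Process p = pb.start();
              try {
                int rc = p.waitFor();
                if (rc != 0) {
                  throw new IOException("chmod " + mode + " " + path + " exited with " + rc);
                }
              } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while waiting for chmod", ie);
              }
            }
          }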

          Sorry...there are lots of interesting failure modes in Hadoop, and my memory is finite

          But that's why Hadoop's so much fun to work on! If it just worked all the time we'd be bored.

          Greg Roelofs added a comment -

          But that's why Hadoop's so much fun to work on! If it just worked all the time we'd be bored.

          We are entirely in agreement.

          Todd Lipcon added a comment -

          Saw this again on a build here. This time the undeletable userlog directory was created by TestMiniMRWithDFSWithDistinctUsers.

          Todd Lipcon added a comment -

          I don't know that this is the issue, but the new setPermissions code is definitely prone to races. If two threads tried to setPermissions on the same directory at once, it could definitely end up with an incorrect result.

          This patch makes setPermissions threadsafe at least against other invocations of the same method. Worth a shot to apply this and see if the problems go away?
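          For context, a guard like the following is roughly what "threadsafe against other invocations of the same method" means; this is only a sketch of the idea (the actual change is the attached mapreduce-2238.txt), and it cannot help when another process races on the same directory:

          // Sketch: serialize permission changes within one JVM so two threads
          // cannot interleave their per-bit updates on the same path.
          import java.io.File;

          public class PermissionsGuard {
            private static final Object LOCK = new Object();

            static void setOwnerOnly(File f, boolean r, boolean w, boolean x) {
              synchronized (LOCK) {
                f.setReadable(r, true);
                f.setWritable(w, true);
                f.setExecutable(x, true);
              }
            }
          }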

          Todd Lipcon made changes -
          Attachment mapreduce-2238.txt [ 12468307 ]
          Todd Lipcon added a comment -

          Bummer, I had this patch in a branch on our internal Hudson and the problem happened anyway... so it doesn't look like it's an issue with thread safety on setPermissions

          Todd Lipcon added a comment -

          Another clue on this one:

          root@ubuntu64-build01:/home/hudson/production/workspace/CDH3-todd-security/build/test# stat logs/userlogs/job_20110116195813351_0001/attempt_20110116195813351_0001_m_000001_0/
          File: `logs/userlogs/job_20110116195813351_0001/attempt_20110116195813351_0001_m_000001_0/'
          Size: 4096 Blocks: 8 IO Block: 4096 directory
          Device: 801h/2049d Inode: 4376820 Links: 2
          Access: (0311/d-wx--x--x) Uid: ( 1065/ hudson) Gid: ( 1065/ hudson)
          Access: 2011-01-17 15:10:22.000000000 -0800
          Modify: 2011-01-16 19:58:49.000000000 -0800
          Change: 2011-01-16 19:58:49.000000000 -0800

          Note the change time. Now looking at what tests ran during that change time:

          root@ubuntu64-build01:/home/hudson/production/workspace/CDH3-todd-security/build/test# grep -l '2011-01-16 20:58:4' *
          TEST-org.apache.hadoop.mapred.TestTaskLogsTruncater.xml

          Seems like TestTaskLogsTruncater is probably the culprit. Also suspicious: that same test failed a run a few builds prior to this.

          Todd Lipcon added a comment -

          Oops scratch that, it's 19:58, not 20:58... so not TestTaskLogsTruncater.

          Interestingly, the tests running around that area are:
          TestJobStatusPersistency: Last log message at 19:58:09
          TestJobTrackerInstrumentation: First log msg at 19:59:03

          So we have a gap between those two.

          According to the test log, this test ran in between them:
          [junit] Running org.apache.hadoop.mapred.TestJobSysDirWithDFS
          [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 50.818 sec

          So I think that was the test that made the undeletable dir.

          Todd Lipcon added a comment -

          Spent some time adding logging and looping the tests to figure out this problem. I think I have it cracked.

          The issue is not multiple threads calling setPermission() within the same process, but rather one thread calling setPermission on the parent directory of a path on which another thread (actually another process entirely) is calling setPermission.

          In particular, these two invocations race:

          2011-01-18 09:00:40,958 INFO tasktracker.Localizer (Localizer.java:setPermissions(129)) - Thread[TaskLauncher for MAP tasks,5,main]: About to set permissions on /data/1/todd/cdh/repos/cdh3/hadoop-0.20/build/test/logs/userlogs/job_20110118090037816_0001
          java.lang.Exception
          at org.apache.hadoop.mapreduce.server.tasktracker.Localizer$PermissionsHandler.setPermissions(Localizer.java:129)
          at org.apache.hadoop.mapreduce.server.tasktracker.Localizer.initializeJobLogDir(Localizer.java:429)
          at org.apache.hadoop.mapred.TaskTracker.initializeJobLogDir(TaskTracker.java:1072)
          at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:969)
          at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2209)
          2011-01-18 09:00:40,985 INFO tasktracker.Localizer (Localizer.java:setPermissions(129)) - Thread[Thread-213,5,main]: About to set permissions on /data/1/todd/cdh/repos/cdh3/hadoop-0.20/build/test/logs/userlogs/job_20110118090037816_0001/attempt_20110118090037816_0001_m_000005_0
          java.lang.Exception
          at org.apache.hadoop.mapreduce.server.tasktracker.Localizer$PermissionsHandler.setPermissions(Localizer.java:129)
          at org.apache.hadoop.mapred.TaskRunner.prepareLogFiles(TaskRunner.java:285)
          at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:198)

          The above traces are from an 0.20 branch but I imagine it's the same deal on trunk.

          The issue is that the top invocation flips the job_<id> directory to 000 momentarily. During that time, the stat/chmod calls for the attempt directory fail with EACCES, which can leave the attempt directory with the wrong permissions. I have strace output which shows this as well.
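          To make the window concrete, here is my reconstruction of the clear-then-set pattern (an assumption about the shape of the code, not a quote of Localizer): while the parent sits at 000 the owner has no execute bit on it, so any stat/chmod of a child path from another thread or process fails with EACCES.

          // Reconstruction for illustration: permissions are cleared and then
          // re-added bit by bit, so the directory passes through mode 000.
          import java.io.File;

          public class RacyParentChmod {
            static void setOwnerRwx(File jobLogDir) {
              jobLogDir.setReadable(false, false);   // clearing bits...
              jobLogDir.setWritable(false, false);
              jobLogDir.setExecutable(false, false); // ...dir is momentarily 000
              // <-- window: a chmod of jobLogDir/attempt_* from another thread
              //     or process fails with EACCES right here
              jobLogDir.setReadable(true, true);
              jobLogDir.setWritable(true, true);
              jobLogDir.setExecutable(true, true);
            }
          }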

          I think we should do away with this Java API nonsense altogether, link in a normal chmod call, and use fork by default when native isn't available.

          Todd Lipcon made changes -
          Link This issue is blocked by HADOOP-7110 [ HADOOP-7110 ]
          Todd Lipcon made changes -
          Assignee Todd Lipcon [ tlipcon ]
          Todd Lipcon added a comment -

          Here's a patch which gets rid of the racy PermissionsHandler code and replaces it with calls to LocalFileSystem.setPermission. When combined with HADOOP-7110 this will actually be more efficient and also avoid the bug described in this JIRA.
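          Roughly the shape of the replacement calls (a sketch assuming a default Configuration and an example path; the real call sites are in the attached patch):

          // Sketch: apply the whole mode in one call through the local FileSystem,
          // which with HADOOP-7110 can use a native chmod instead of flipping
          // individual java.io.File permission bits.
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.permission.FsPermission;

          public class SetPermissionSketch {
            public static void main(String[] args) throws Exception {
              FileSystem localFs = FileSystem.getLocal(new Configuration());
              Path attemptLogDir = new Path(args[0]); // e.g. a userlogs attempt dir
              localFs.setPermission(attemptLogDir, new FsPermission((short) 0700));
            }
          }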

          Todd Lipcon made changes -
          Attachment mapreduce-2238.txt [ 12468671 ]
          Todd Lipcon added a comment -

          Changing to critical for 0.22, since the bug is now understood, and it ends up killing Hudson on a regular basis

          Todd Lipcon made changes -
          Fix Version/s 0.22.0 [ 12314184 ]
          Affects Version/s 0.22.0 [ 12314184 ]
          Affects Version/s 0.23.0 [ 12315570 ]
          Priority Major [ 3 ] Critical [ 2 ]
          Eli Collins added a comment -

          +1 latest patch looks good.

          Todd Lipcon added a comment -

          The previous patch had a slight change where the job ACL file ended up 600 instead of 700. It's not clear why it should be 700 (it's not executable!), but that shouldn't be fixed as part of this JIRA (the change caused some localization tests to fail).

          Will resubmit this new patch for testing and open a new JIRA to make those permissions more sensible.

          Todd Lipcon made changes -
          Attachment mapreduce-2238.txt [ 12468729 ]
          Todd Lipcon made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Todd Lipcon added a comment -

          Unit tests pass except known failures

          Todd Lipcon added a comment -

          Committed to trunk and branch. Thanks for the review, Eli, and thanks to Greg for helping brainstorm.

          Todd Lipcon made changes -
          Status Patch Available [ 10002 ] Resolved [ 5 ]
          Hadoop Flags [Reviewed]
          Resolution Fixed [ 1 ]
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #583 (See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/583/)
          MAPREDUCE-2238. Fix permissions handling to avoid leaving undeletable directories in local dirs. Contributed by Todd Lipcon

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-22-branch #33 (See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/33/)

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #643 (See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/643/)

          Kihwal Lee added a comment -

          We've seen this in 0.20-security builds. Maybe we should put HADOOP-7110 and this fix into 0.20-security. I see they are in CDH3.

          Konstantin Shvachko made changes -
          Status Resolved [ 5 ] Closed [ 6 ]
          Transition | Time In Source Status | Execution Times | Last Executer | Last Execution Date
          Open → Patch Available | 15d 7h 24m | 1 | Todd Lipcon | 19/Jan/11 06:21
          Patch Available → Resolved | 5d 18h 49m | 1 | Todd Lipcon | 25/Jan/11 01:11
          Resolved → Closed | 321d 5h 8m | 1 | Konstantin Shvachko | 12/Dec/11 06:19

            People

            • Assignee:
              Todd Lipcon
              Reporter:
              Eli Collins
             • Votes:
               0
               Watchers:
               5

              Dates

              • Created:
                Updated:
                Resolved:

                Development