Hadoop Map/Reduce
MAPREDUCE-2243

Close all file streams properly in a finally block to avoid leaking them.

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.22.0, 0.23.0
    • Fix Version/s: 0.23.0
    • Component/s: jobtracker, tasktracker
    • Labels:
      None
    • Environment:

      NA

    • Hadoop Flags:
      Reviewed

      Description

      In the following classes, streams should be closed in a finally block so that they are not leaked on exceptional paths.

      CompletedJobStatusStore.java
      ------------------------------------------
      dataOut.writeInt(events.length);
      for (TaskCompletionEvent event : events) {
        event.write(dataOut);
      }
      dataOut.close();

      EventWriter.java
      ----------------------
      encoder.flush();
      out.close();

      MapTask.java
      -------------------
      splitMetaInfo.write(out);
      out.close();

      TaskLog
      ------------
      1) str = fis.readLine();
      fis.close();

      2) dos.writeBytes(Long.toString(new File(logLocation, LogName.SYSLOG
      .toString()).length() - prevLogLength) + "\n");
      dos.close();

      TotalOrderPartitioner.java
      -----------------------------------
      while (reader.next(key, value)) {
        parts.add(key);
        key = ReflectionUtils.newInstance(keyClass, conf);
      }
      reader.close();
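The leak pattern in these snippets can be reproduced in miniature. The sketch below is not Hadoop code: `FakeStream` is a hypothetical stand-in that records whether `close()` ran. It shows that a `close()` placed after the last write is skipped when the write throws, while a `finally` block always runs:

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical stand-in stream that records whether close() was called.
public class FinallyCloseDemo {
    static class FakeStream implements Closeable {
        boolean closed = false;
        void write() throws IOException { throw new IOException("disk full"); }
        @Override public void close() { closed = true; }
    }

    // Pattern from the snippets above: close() after the last write is
    // skipped entirely when write() throws, leaking the stream.
    // (The catch here only swallows the exception so the demo can return.)
    static FakeStream leaky() {
        FakeStream s = new FakeStream();
        try {
            s.write();
            s.close();
        } catch (IOException ignored) { }
        return s;
    }

    // The fix this issue proposes: close in a finally block so the stream
    // is released on every exit path. Real code would also guard against a
    // stream that was never successfully opened.
    static FakeStream safe() {
        FakeStream s = new FakeStream();
        try {
            s.write();
        } catch (IOException ignored) {
        } finally {
            if (s != null) {
                s.close();
            }
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println("leaky closed=" + leaky().closed); // false
        System.out.println("safe closed=" + safe().closed);   // true
    }
}
```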

      1. MAPREDUCE-2243-4.patch
        10 kB
        Devaraj K
      2. MAPREDUCE-2243-3.patch
        10 kB
        Devaraj K
      3. MAPREDUCE-2243-2.patch
        9 kB
        Devaraj K
      4. MAPREDUCE-2243-1.patch
        10 kB
        Devaraj K
      5. MAPREDUCE-2243.patch
        10 kB
        Devaraj K

        Activity

        Owen O'Malley added a comment -

        Finally blocks have some very bad properties for exceptions. In particular, they tend to mask errors.

        bad example
        try {
          f = fs.open(...);
          f.write(...);
        } finally {
          f.close();
        }
        

        because if an exception is thrown in the close, it will mask any exception thrown in the main body.

        To make the problem concrete, if you don't have permission to write the file, it will throw an IOException saying so. But then the finally block will get a NullPointerException on the close and the user will get that exception without the original exception.
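That masking scenario can be reproduced in a self-contained sketch (hypothetical `open()` method, not the FileSystem API): the `finally` block's NullPointerException replaces the original IOException before the caller ever sees it.

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch of the masking described above: open() fails, f stays
// null, and the finally block's f.close() throws NullPointerException,
// which supersedes the original IOException per Java's finally semantics.
public class MaskingDemo {
    static Closeable open() throws IOException {
        throw new IOException("Permission denied");
    }

    static void badPattern() throws IOException {
        Closeable f = null;
        try {
            f = open(); // throws IOException
        } finally {
            f.close(); // f is still null: NPE masks the IOException
        }
    }

    public static void main(String[] args) {
        try {
            badPattern();
        } catch (IOException ioe) {
            System.out.println("never reached");
        } catch (NullPointerException npe) {
            System.out.println("caller sees NPE; the original IOException is lost");
        }
    }
}
```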

        The preferred style is to do:

        good example
        try {
          f = fs.open(...);
          f.write(...);
          f.close();
        } catch (IOException ioe) {
          IOUtils.cleanup(LOG, f);
          throw ioe;
        }
        
        Bhallamudi Venkata Siva Kamesh added a comment -

        Hi Owen,
        The example above is great and clearly describes when to close the streams, but it can still miss closing the stream, as shown below.

        If the try block throws any exception other than IOException, the stream will not be closed.

        Given style of closing streams
        try {
          f = fs.open(...);
          f.write(...); 
          // If any exception other than IOException thrown from here will lead to missing the stream close
          f.close();
        } catch (IOException ioe) {
          IOUtils.cleanup(LOG, f);
          throw ioe;
        }
        

        So consider the following approach.

        Proposed style of closing streams
        try {
          f = fs.open(...);
          f.write(...);
          f.close();
          f = null; // set to null so the close in the finally block does not run again, avoiding any double-close issue
        } catch (IOException ioe) {
          throw ioe;
        } finally {
          IOUtils.cleanup(LOG, f); // does nothing if the try block completed successfully
        }
        

        That said, I feel we need to check whether the try block can actually throw exception types other than the ones we are catching.
        If all possible exceptions are already handled, then I feel there is no need for the finally block.
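The behaviour of this proposed pattern can be checked with a small stand-alone sketch. `CountingStream` and `closeQuietly` are hypothetical; `closeQuietly` stands in for Hadoop's `IOUtils.cleanup(LOG, f)`. On both the success path and the failure path, `close()` runs exactly once:

```java
import java.io.Closeable;
import java.io.IOException;

// Sketch of the try / null-out / finally-cleanup pattern proposed above,
// with a counting stream so each path can be checked for exactly one close.
public class TryNullFinallyDemo {
    static class CountingStream implements Closeable {
        int closes = 0;
        final boolean failOnWrite;
        CountingStream(boolean failOnWrite) { this.failOnWrite = failOnWrite; }
        void write() { if (failOnWrite) throw new RuntimeException("boom"); }
        @Override public void close() { closes++; }
    }

    // Stand-in for IOUtils.cleanup(LOG, c): close, ignoring any IOException.
    static void closeQuietly(Closeable c) {
        if (c != null) {
            try { c.close(); } catch (IOException ignored) { }
        }
    }

    static CountingStream run(boolean failOnWrite) {
        CountingStream result = new CountingStream(failOnWrite);
        CountingStream f = result;
        try {
            f.write();
            f.close();
            f = null; // successful close: stop the finally from closing again
        } catch (RuntimeException e) {
            // swallowed here only so the demo can inspect the stream afterwards
        } finally {
            closeQuietly(f); // no-op when the try block completed
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println("success closes=" + run(false).closes); // 1
        System.out.println("failure closes=" + run(true).closes);  // 1
    }
}
```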

        Owen O'Malley added a comment -

        In most cases, the exceptions outside of IOException don't matter much because they will bring down
        the service. In cases where the system will stay up, I'd suggest using:

        try {
          f = fs.open(...);
          f.write(...);
          f.close();
        } catch (IOException ie) {
          IOUtils.cleanup(LOG, f);
          throw ie;
        } catch (RuntimeException re) {
          IOUtils.cleanup(LOG, f);
          throw re;
        } catch (Error err) {
          IOUtils.cleanup(LOG, f);
          throw err;
        }
        

        This leaves the nominal case simple. Note that the Error case is the worst case: if we get an Error, every system in Hadoop
        should shut down. There is no point in continuing, and worrying about lost file handles at that point is too extreme. Also
        note that with Java's garbage collector, this is far less critical, even for a server, than in C.
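As an aside not raised in the thread: Java 7's try-with-resources, brand new at the time of this discussion, addresses the masking problem directly without the triple catch-and-rethrow. The body's exception stays primary and the `close()` failure is attached via `Throwable.getSuppressed()`. A minimal sketch with a hypothetical `FailingStream`:

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical stream whose write() and close() both fail, to show that
// try-with-resources keeps the body's exception primary and records the
// close() exception as suppressed instead of letting it mask the original.
public class SuppressedDemo {
    static class FailingStream implements Closeable {
        void write() throws IOException { throw new IOException("write failed"); }
        @Override public void close() throws IOException { throw new IOException("close failed"); }
    }

    public static void main(String[] args) {
        try (FailingStream f = new FailingStream()) {
            f.write();
        } catch (IOException e) {
            // The original failure is what the caller sees; the close()
            // failure rides along on the suppressed list.
            System.out.println("primary: " + e.getMessage());       // write failed
            System.out.println("suppressed: " + e.getSuppressed()[0].getMessage()); // close failed
        }
    }
}
```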

        Laxman added a comment -

        @Owen

        In most cases, the exceptions outside of IOException don't matter much because they will bring down the service.

        this leaves the nominal case simple. Note that this is the worst case, if we get an Error every system in Hadoop should shutdown.

        There is no point in continuing and worrying about lost file handles at that point is too extreme.

        Yes, I agree with your point for the Error scenarios. But what about a runtime exception that need not be handled in the positive flow?

        Handling unexpected generic exceptions and errors results in a catch-and-rethrow pattern, so I prefer to handle the stream closure in the try block as well as in the finally block.

        As per your initial comments, Kamesh has corrected the code to close the streams in the try block as well as in the finally block.
        Do you still see an issue with this approach?
        How is handling the stream close in the catch block better than handling it in the try and finally blocks?

        My opinion on this issue is that handling stream closure in the try and finally blocks is foolproof and avoids some code duplication.

        Devaraj K added a comment -

        Provided patch for trunk as per the above comments.

        Eli Collins added a comment -

        fyi HADOOP-7428 is a case where the RTE is relevant.

        Todd Lipcon added a comment -

        this is useless code, shows up 5x:
        + } catch (IOException ioe) {
        + throw ioe;

        otherwise seems reasonable

        Devaraj K added a comment -

        Thanks Todd for reviewing. Updated the patch with the review comment fixes.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12485733/MAPREDUCE-2243-1.patch
        against trunk revision 1144097.

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these core unit tests:
        org.apache.hadoop.cli.TestMRCLI
        org.apache.hadoop.fs.TestFileSystem
        org.apache.hadoop.mapred.TestIsolationRunner
        org.apache.hadoop.mapred.TestMiniMRWithDFS
        org.apache.hadoop.mapred.TestSeveral
        org.apache.hadoop.security.authorize.TestServiceLevelAuthorization

        -1 contrib tests. The patch failed contrib unit tests.

        +1 system test framework. The patch passed system test framework compile.

        Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/447//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/447//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/447//console

        This message is automatically generated.

        Devaraj K added a comment -

        Updated the patch based on the latest trunk changes.

        Show
        Devaraj K added a comment - Updated the patch based on the trunk latest changes.
        Hide
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12486739/MAPREDUCE-2243-2.patch
        against trunk revision 1146517.

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these core unit tests:
        org.apache.hadoop.cli.TestMRCLI
        org.apache.hadoop.fs.TestFileSystem

        -1 contrib tests. The patch failed contrib unit tests.

        +1 system test framework. The patch passed system test framework compile.

        Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/474//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/474//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/474//console

        This message is automatically generated.

        Devaraj K added a comment -

        The lines of code below are flagged as the cause of the Findbugs issue; they were already present and were not added by the patch.

        TaskLog.java
            if (localFS == null) {// set localFS once
              localFS = FileSystem.getLocal(new Configuration());
            }
        

        These changes only concern stream closure and were verified manually; test cases are not needed for this.

        Tsz Wo Nicholas Sze added a comment -

        In TaskLog.getLogFileDetail(..), should we use a single try-finally for everything?

        Devaraj K added a comment -

        Thanks Nicholas for reviewing.
        A single try for everything is not required, because the block of code (given below) that sits between the two try blocks does not need to be inside a try block.

        if (str == null) { //the file doesn't have anything
          throw new IOException("Index file for the log of " + taskid + " doesn't exist.");
        }
        l.location = str.substring(str.indexOf(LogFileDetail.LOCATION) +
            LogFileDetail.LOCATION.length());
        //special cases are the debugout and profile.out files. They are guaranteed
        //to be associated with each task attempt since jvm reuse is disabled
        //when profiling/debugging is enabled
        if (filter.equals(LogName.DEBUGOUT) || filter.equals(LogName.PROFILE)) {
          l.length = new File(l.location, filter.toString()).length();
          l.start = 0;
          fis.close();
          return l;
        }
        

        Here, only fis.close() deals with streams. If we put fis.close() inside a try with a finally, the try block attempts the close first; if that throws an IOException, the finally block attempts to close again and the exception is thrown either way. So it makes no difference whether this line is inside or outside the try block.

        Tsz Wo Nicholas Sze added a comment -

        Hi Devaraj, isn't it the case that if str == null, then fis won't be closed? Similarly, if str.indexOf(LogFileDetail.LOCATION) + LogFileDetail.LOCATION.length() is out of range, then substring(..) will throw an exception and fis won't be closed.

        Devaraj K added a comment -

        Yes Nicholas, good catch; I missed it.
        I will update the patch with a single try, which will handle the above cases as well.
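For illustration, a hypothetical miniature of that single-try shape (not the actual TaskLog.getLogFileDetail(..) code): the early return and the line-parsing both sit inside one try, so the reader is closed on every exit path, including the IOException for a null line and any out-of-range substring:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Hypothetical sketch of the single try-finally shape: every exit from the
// method, normal or exceptional, passes through the finally and closes fis.
public class SingleTryDemo {
    static String firstLine(String content) throws IOException {
        BufferedReader fis = new BufferedReader(new StringReader(content));
        try {
            String str = fis.readLine();
            if (str == null) { // the file doesn't have anything
                throw new IOException("Index file is empty.");
            }
            if (str.startsWith("LOG_DIR:")) {
                // early return: fis is still closed by the finally below
                return str.substring("LOG_DIR:".length());
            }
            return str;
        } finally {
            fis.close(); // runs on return and on every exception
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLine("LOG_DIR:/tmp/logs")); // /tmp/logs
    }
}
```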

        Devaraj K added a comment -

        Updated the patch as per the above comments.

        Tsz Wo Nicholas Sze added a comment -

        All Hudson machines are down. Could you run "ant test" and "ant test-patch" manually?

        Tsz Wo Nicholas Sze added a comment -

        +1 patch looks good.

        Devaraj K added a comment -

        Please find the "ant test-patch" result from my system.

         
        
           [exec] 
             [exec] -1 overall.  
             [exec] 
             [exec]     +1 @author.  The patch does not contain any @author tags.
             [exec] 
             [exec]     -1 tests included.  The patch doesn't appear to include any new or modified tests.
             [exec]                         Please justify why no new tests are needed for this patch.
             [exec]                         Also please list what manual steps were performed to verify this patch.
             [exec] 
             [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
             [exec] 
             [exec]     +1 javac. The applied patch does not increase the total number of javac compiler warnings.
             [exec] 
             [exec]     +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
             [exec] 
             [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
             [exec] 
             [exec]     +1 system test framework. The patch passed system test framework compile.
             [exec] 
             [exec] 
        
        Devaraj K added a comment -

        I have verified the tests manually by running "ant test".

        These changes only concern stream closure and were verified manually; new tests are not needed for this.

        Tsz Wo Nicholas Sze added a comment -

        Hi Devaraj, I was trying to commit the patch but there was an additional encoder.flush() in EventWriter.close(). Could you fix it?

        Devaraj K added a comment -

        I fixed and updated the patch. Thanks Nicholas.

        Tsz Wo Nicholas Sze added a comment -

        +1 patch looks good

        I have committed this. Thanks, Devaraj!

        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk-Commit #760 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/760/)
        MAPREDUCE-2243. Close streams propely in a finally-block to avoid leakage in CompletedJobStatusStore, TaskLog, EventWriter and TotalOrderPartitioner. Contributed by Devaraj K

        szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1152787
        Files :

        • /hadoop/common/trunk/mapreduce/CHANGES.txt
        • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/TotalOrderPartitioner.java
        • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/TaskLog.java
        • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/CompletedJobStatusStore.java
        • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/EventWriter.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #751 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/751/)
        MAPREDUCE-2243. Close streams propely in a finally-block to avoid leakage in CompletedJobStatusStore, TaskLog, EventWriter and TotalOrderPartitioner. Contributed by Devaraj K

        szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1152787
        Files :

        • /hadoop/common/trunk/mapreduce/CHANGES.txt
        • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/TotalOrderPartitioner.java
        • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/TaskLog.java
        • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/CompletedJobStatusStore.java
        • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/EventWriter.java

          People

          • Assignee:
            Devaraj K
            Reporter:
            Bhallamudi Venkata Siva Kamesh
          • Votes:
            0
          • Watchers:
            9

            Dates

            • Created:
              Updated:
              Resolved:

              Time Tracking

              Estimated:
              Original Estimate - 72h
              72h
              Remaining:
              Remaining Estimate - 72h
              72h
              Logged:
              Time Spent - Not Specified
              Not Specified

                Development