Bug 41735 - RollingFileAppender may delete files during rollover
Status: RESOLVED FIXED
Product: Log4j
Classification: Unclassified
Component: Appender
Version: 1.2
Hardware/OS: Other / other
Importance: P2 normal
Assigned To: log4j-dev
Reported: 2007-02-28 12:39 UTC by Ian Reilly
Modified: 2007-10-29 11:26 UTC



Attachments
patch (2.39 KB, patch)
2007-02-28 12:39 UTC, Ian Reilly
Minimal patch for rename failure (2.15 KB, patch)
2007-02-28 16:35 UTC, Curt Arnold

Description Ian Reilly 2007-02-28 12:39:10 UTC
The RollingFileAppender tries to rename the old files during rollover.
However, it does not check whether the rename succeeded and, as a result, it may
end up deleting information. In particular, on Windows, a file rename can fail if
another process has the file open.

This problem was found in 1.2.14 and seems to still exist in 1.2.15 rc1.

The following changes address the issue.
120a121,122
>     boolean failedRename = false;
> 
127,131c129,139
<     if(maxBackupIndex > 0) {
<       // Delete the oldest file, to keep Windows happy.
<       file = new File(fileName + '.' + maxBackupIndex);
<       if (file.exists())
<        file.delete();
---
>     // if the .1 file doesn't exist then we don't need to rename either
>     if ((maxBackupIndex > 0)) {
>       target = new File(fileName + "." + 1);
>       if (target.exists()) {
> 	 // Delete the oldest file, to keep Windows happy.
> 	 file = new File(fileName + '.' + maxBackupIndex);
> 	 if (file.exists()) {
> 	   if (! file.delete()) {
> 	      failedRename = true;
> 	   }
> 	 }
133,140c141,154
<       // Map {(maxBackupIndex - 1), ..., 2, 1} to {maxBackupIndex, ..., 3, 2}
<       for (int i = maxBackupIndex - 1; i >= 1; i--) {
< 	file = new File(fileName + "." + i);
< 	if (file.exists()) {
< 	  target = new File(fileName + '.' + (i + 1));
< 	  LogLog.debug("Renaming file " + file + " to " + target);
< 	  file.renameTo(target);
< 	}
---
> 	 // Map {(maxBackupIndex - 1), ..., 2, 1} to {maxBackupIndex, ..., 3, 2}
> 	 if (! failedRename) {
> 	    for (int i = maxBackupIndex - 1; i >= 1; i--) {
> 	      file = new File(fileName + "." + i);
> 	      if (file.exists()) {
> 		target = new File(fileName + '.' + (i + 1));
> 		LogLog.debug("Renaming file " + file + " to " + target);
> 		if (! file.renameTo(target)) {
> 		   failedRename = true;
> 		   break;
> 		}
> 	      }
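For readers outside the diff context, the patched shift logic can be condensed into a standalone sketch (the class and method names below are illustrative, not log4j's actual internals): it skips the shift entirely when no .1 backup exists, and aborts on the first failed delete or rename instead of silently discarding files.

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class RolloverSketch {
    /** Shift fileName.1 .. fileName.(max-1) up by one index, as the patch
     *  does, returning false as soon as any delete or rename fails. */
    static boolean rollBackups(String fileName, int maxBackupIndex) {
        if (maxBackupIndex <= 0) return true;
        // If fileName.1 does not exist, there is nothing to shift.
        if (!new File(fileName + ".1").exists()) return true;
        // Delete the oldest backup first, "to keep Windows happy".
        File oldest = new File(fileName + "." + maxBackupIndex);
        if (oldest.exists() && !oldest.delete()) return false;
        // Map {(max-1), ..., 2, 1} onto {max, ..., 3, 2}; stop on first failure.
        for (int i = maxBackupIndex - 1; i >= 1; i--) {
            File file = new File(fileName + "." + i);
            if (file.exists()) {
                File target = new File(fileName + "." + (i + 1));
                if (!file.renameTo(target)) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        String base = new File(dir, "app.log").getPath();
        // Create app.log.1 and app.log.2 so the shift has work to do.
        for (int i = 1; i <= 2; i++) {
            try (FileWriter w = new FileWriter(base + "." + i)) { w.write("log " + i); }
        }
        System.out.println("rolled=" + rollBackups(base, 3));
        System.out.println("app.log.3 exists=" + new File(base + ".3").exists());
        for (int i = 1; i <= 3; i++) new File(base + "." + i).delete();
    }
}
```

The key difference from the pre-patch code is simply that every `delete()` and `renameTo()` return value is checked, so a locked backup file halts the shift rather than letting later renames clobber data.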
Comment 1 Ian Reilly 2007-02-28 12:39:57 UTC
Created attachment 19651 [details]
patch
Comment 2 Curt Arnold 2007-02-28 16:35:13 UTC
Looks like a pretty good fix to get in before 1.2.15 comes out.  I've got a
cleaner version of the patch (I think it is equivalent) which I'll attach. 
Definitely need to add some test cases around it before committing it and
creating a second release candidate.
Comment 3 Curt Arnold 2007-02-28 16:35:53 UTC
Created attachment 19653 [details]
Minimal patch for rename failure
Comment 4 Curt Arnold 2007-02-28 16:50:27 UTC
I don't have time to research this at the moment, but we need to make sure that the patch does not cause a
rollover to be attempted on every subsequent log request after a rename fails.  My recollection is that RFA
would be susceptible to that: locking the .1 file would very quickly bring performance to its
knees and delete all the older files.  If the existing log file can't be renamed, then the next rollover should
not be attempted until roughly when the following rollover would normally happen.
Comment 5 Curt Arnold 2007-02-28 22:30:40 UTC
On 2nd or 3rd thought, it is likely too significant to try to rush into a release at the last moment.  I'm still 
going to try to get some tests written and get something in the SVN for a longer period of testing before a 
1.2.16.  This is a long-standing issue with RFA, not a regression, so it isn't a critical time-sensitive issue. 
Comment 6 Curt Arnold 2007-03-01 23:07:30 UTC
Committed fix and test cases in rev 513638.

Original patch would result in attempted rollover on every subsequent log request after a failure which 
would result in log files above the locked file quickly being discarded.  The eventual patch will only retry 
rollover after the log file grows another maxFileSize.
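That retry-throttling idea can be sketched in isolation (the field and method names here are made up for illustration; this is not log4j's actual code): on a failed rollover, the next attempt is pushed out by another maxFileSize worth of growth, rather than being retried on every append.

```java
public class RolloverRetryDemo {
    static final long MAX_FILE_SIZE = 100;     // rollover threshold (bytes)
    static long nextRollover = MAX_FILE_SIZE;  // file size at which to try next
    static boolean renameWorks = false;        // simulate a locked .1 file

    // Stub standing in for the real rename-and-reopen sequence.
    static boolean rollOver() { return renameWorks; }

    /** Returns true if a rollover was attempted at this file size. */
    static boolean maybeRoll(long currentFileSize) {
        if (currentFileSize < nextRollover) return false;
        if (rollOver()) {
            nextRollover = MAX_FILE_SIZE;                    // fresh, empty file
        } else {
            nextRollover = currentFileSize + MAX_FILE_SIZE;  // defer the retry
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(maybeRoll(100)); // true: first attempt (it fails)
        System.out.println(maybeRoll(150)); // false: no retry storm
        System.out.println(maybeRoll(200)); // true: retried after more growth
    }
}
```

The point of the deferral is exactly the failure mode described in comment 4: without it, a locked backup file turns every log call into a failed rollover attempt.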

The test cases keep a .1 and a .3 file open, respectively, then perform several rollover attempts that will fail
on OSes that do not allow renaming open files, followed by a successful rollover after the files are closed.
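The OS-dependence those tests rely on can be demonstrated with a small self-contained program (not taken from the committed test cases): it holds a stream open on a file, attempts the rename, and checks that however `renameTo` behaves on the current platform, the data ends up in exactly one of the two locations.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class RenameWhileOpenDemo {
    /** Rename a file while a stream holds it open; report whether the
     *  contents survive in exactly one of the two locations. */
    static boolean renameSurvives() {
        try {
            File src = File.createTempFile("rfa-41735-", ".log");
            File dst = new File(src.getParent(), src.getName() + ".1");
            boolean survives;
            try (FileInputStream in = new FileInputStream(src)) {
                // On Windows this rename typically fails while `in` is open;
                // on POSIX systems it usually succeeds.
                boolean renamed = src.renameTo(dst);
                System.out.println("renamed=" + renamed);
                // Exactly one of the two paths should hold the file.
                survives = src.exists() ^ dst.exists();
            }
            src.delete();
            dst.delete();
            return survives;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("survives=" + renameSurvives());
    }
}
```

Whether `renamed` prints true or false depends on the platform, which is why the committed tests exercise both the failing and the succeeding paths.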
Comment 7 Prabhat Jain 2007-07-10 19:16:27 UTC
If the rollover fails, all logging continues in the same file; as a result, if
the file stays locked for a long time, all the historical logs keep
getting overwritten. So an eventual release of this patch in a newer
version is very important.
Comment 8 Curt Arnold 2007-10-29 11:26:05 UTC
The fix committed in rev 513638 was published in log4j 1.2.15.  Not sure whether Prabhat Jain's comment is
indicating some other issue or just urging a release.  If there are remaining issues with the log4j 1.2.15
release, a new bug report should be filed, as this one is marked as complete in log4j 1.2.15.