Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Incompatible change, Reviewed
Description
I've seen several critical production issues arise because logs are never automatically removed and accumulate without bound. Changing Hadoop's default log4j file appender would help with this.
I recommend we move to an appender which:
1) caps the max file size (configurable)
2) caps the max number of files to keep (configurable)
3) uses RollingFileAppender rather than DailyRollingFileAppender (DRFA); see the warning in the javadoc:
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html
Specifically: "DailyRollingFileAppender has been observed to exhibit synchronization issues and data loss."
Relative to the default log4j configuration we'd lose the daily rolling aspect, but we'd gain reliability. A sketch of such a configuration is below.
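As a rough sketch, assuming log4j 1.2's RollingFileAppender (the hadoop.log.maxfilesize and hadoop.log.maxbackupindex property names are illustrative placeholders for the two caps, not settled names):

# Illustrative only: RollingFileAppender with both caps configurable.
# hadoop.log.maxfilesize and hadoop.log.maxbackupindex are hypothetical
# knobs that could be overridden via system properties.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
# 1) cap the max file size
log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
# 2) cap the max number of files to keep
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

For example, a 256MB size cap with 20 backups would bound each log at roughly 5GB of disk, worst case.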
Issue Links
- is related to:
  - HADOOP-8216 address log4j.properties inconsistencies btw main and template dirs (Resolved)
  - HBASE-5655 Cap space usage of default log4j rolling policy (Closed)
- is required by:
  - HDFS-3066 cap space usage of default log4j rolling policy (hdfs specific changes) (Resolved)
  - MAPREDUCE-3989 cap space usage of default log4j rolling policy (mr specific changes) (Closed)
- relates to:
  - ZOOKEEPER-1435 cap space usage of default log4j rolling policy (Resolved)
  - HADOOP-8224 Don't hardcode hdfs.audit.logger in the scripts (Closed)
  - FLUME-1073 Default Log4j configuration file should have a rolling policy (Resolved)