Details
- Type: Sub-task
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Versions: 0.23.10, 2.4.0
- Fix Versions: None
- Components: None
Description
Multiple issues arose when AppLogAggregatorImpl hit an IOException in AppLogAggregatorImpl#uploadLogsForContainer while aggregating yarn-logs for an application with very large (>150G each) error logs:
- An IOException was thrown during the LogWriter#append call, and a message was logged, but no stacktrace was included. Message: "ERROR: Couldn't upload logs for container_nnnnnnnnnnnnn_nnnnnnn_nn_nnnnnn. Skipping this container."
- After the IOException, the TFile is in a bad state, so subsequent calls to LogWriter#append fail with the following stacktrace:
2014-04-16 13:29:09,772 LogAggregationService #17907 ERROR org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[LogAggregationService #17907,5,main] threw an Exception.
java.lang.IllegalStateException: Incorrect state to start a new key: IN_VALUE
at org.apache.hadoop.io.file.tfile.TFile$Writer.prepareAppendKey(TFile.java:528)
at org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.append(AggregatedLogFormat.java:262)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainer(AppLogAggregatorImpl.java:128)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:164)
...
- At this point, the yarn-logs cleaner still thinks the thread is aggregating, so the huge yarn-logs never get cleaned up for that application.
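The second symptom can be reproduced in miniature. The sketch below is illustrative only: the class, states, and method are simplified stand-ins for TFile's internal writer state machine, not actual Hadoop code. It shows how an IOException mid-append leaves the writer stuck in IN_VALUE, so the next append for the following container fails with IllegalStateException instead, and why catching and logging the first exception with its stacktrace matters for diagnosis.

```java
import java.io.IOException;

public class TFileStateSketch {
    // Simplified stand-in for TFile$Writer's internal state machine.
    enum State { READY, IN_VALUE }

    static class FlakyWriter {
        State state = State.READY;

        // Mimics TFile.Writer.prepareAppendKey: refuses to start a new
        // key/value pair unless the previous one completed cleanly.
        void append(byte[] value, boolean failMidWrite) throws IOException {
            if (state != State.READY) {
                throw new IllegalStateException(
                        "Incorrect state to start a new key: " + state);
            }
            state = State.IN_VALUE;
            if (failMidWrite) {
                // e.g. the underlying stream dies while streaming a >150G log
                throw new IOException("stream closed mid-write");
            }
            state = State.READY; // only reached on a clean write
        }
    }

    public static void main(String[] args) {
        FlakyWriter writer = new FlakyWriter();
        try {
            writer.append(new byte[0], true); // first container: IOException
        } catch (IOException e) {
            // Log WITH the stacktrace, not just a one-line message,
            // so the root cause is visible in the NodeManager log.
            e.printStackTrace();
        }
        try {
            writer.append(new byte[0], false); // next container: writer stuck
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A defensive fix along these lines would treat the writer as poisoned after the first IOException (stop calling append on it) and signal the aggregation loop to finish, so cleanup of the on-disk yarn-logs still runs.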