commit 02920d20639d5aadbabeb09a31b5542fb80cd72b
Author: Andrew Sherman
Date:   Wed Nov 29 11:36:18 2017 -0800

    HIVE-17870: Provide a replacement for NoDeleteRollingFileAppender

    In log4j2, functionality that was previously the responsibility of the
    Appender is now split into multiple interfaces (RolloverStrategy,
    AbstractManager, Appender). To replace NoDeleteRollingFileAppender we
    provide a RolloverStrategy called SlidingFilenameRolloverStrategy which
    does the following:
    (1) creates log file names by appending a millisecond timestamp to a
        template
    (2) when log file rollover occurs, the old files are not renamed;
        instead, logging switches to a file with a new timestamp.

    Change log4j2.version to 2.8.2, as LOG4J2-907 was needed to make the
    change work. (The log4j2 version is now the same as
    standalone-metastore's; this is not a requirement, but nice to have.)

    Delete the obsolete log4j-based NoDeleteRollingFileAppender.

diff --git pom.xml pom.xml
index 1682f47d32b4cec4b3544df37d6fc9622007bb43..6d8ab5e3cb5c010fe83af79e7442e87d117d43f3 100644
--- pom.xml
+++ pom.xml
@@ -180,7 +180,7 @@
     3.0.3
     0.9.3
     0.9.3
-    2.6.2
+    2.8.2
     2.3
     1.4.1
     1.10.19
diff --git ql/src/java/org/apache/hadoop/hive/ql/log/HushableRandomAccessFileAppender.java ql/src/java/org/apache/hadoop/hive/ql/log/HushableRandomAccessFileAppender.java
index 639d1d8eab6068b99b5681e32a717780d5bb0407..0ff66df441301d09efa00ecd414d85281a302cc9 100644
--- ql/src/java/org/apache/hadoop/hive/ql/log/HushableRandomAccessFileAppender.java
+++ ql/src/java/org/apache/hadoop/hive/ql/log/HushableRandomAccessFileAppender.java
@@ -176,8 +176,7 @@ public static HushableRandomAccessFileAppender createAppender(
       layout = PatternLayout.createDefaultLayout();
     }
     final RandomAccessFileManager manager = RandomAccessFileManager.getFileManager(
-        fileName, isAppend, isFlush, bufferSize, advertiseURI, layout
-        // , config -- needed in later log4j versions
+        fileName, isAppend, isFlush, bufferSize, advertiseURI, layout, config
     );
     if (manager == null) {
       return null;
diff --git ql/src/java/org/apache/hadoop/hive/ql/log/NoDeleteRollingFileAppender.java ql/src/java/org/apache/hadoop/hive/ql/log/NoDeleteRollingFileAppender.java
deleted file mode 100644
index be32f06e43a821e2e06dd0b26ec34e7f517d4ceb..0000000000000000000000000000000000000000
--- ql/src/java/org/apache/hadoop/hive/ql/log/NoDeleteRollingFileAppender.java
+++ /dev/null
@@ -1,176 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hive.ql.log;
-
-import java.io.File;
-import java.io.IOException;
-import java.io.InterruptedIOException;
-import java.io.Writer;
-
-import org.apache.log4j.FileAppender;
-import org.apache.log4j.Layout;
-import org.apache.log4j.helpers.CountingQuietWriter;
-import org.apache.log4j.helpers.LogLog;
-import org.apache.log4j.helpers.OptionConverter;
-import org.apache.log4j.spi.LoggingEvent;
-
-public class NoDeleteRollingFileAppender extends FileAppender {
-  /**
-   * The default maximum file size is 10MB.
-   */
-  protected long maxFileSize = 10 * 1024 * 1024;
-
-  private long nextRollover = 0;
-
-  /**
-   * The default constructor simply calls its {@link FileAppender#FileAppender
-   * parents constructor}.
-   */
-  public NoDeleteRollingFileAppender() {
-  }
-
-  /**
-   * Instantiate a RollingFileAppender and open the file designated by
-   * filename. The opened filename will become the output
-   * destination for this appender.
-   *
-   * If the append parameter is true, the file will be appended to.
-   * Otherwise, the file designated by filename will be truncated
-   * before being opened.
-   */
-  public NoDeleteRollingFileAppender(Layout layout, String filename,
-      boolean append) throws IOException {
-    super(layout, filename, append);
-  }
-
-  /**
-   * Instantiate a FileAppender and open the file designated by
-   * filename. The opened filename will become the output
-   * destination for this appender.
-   *
-   * The file will be appended to.
-   */
-  public NoDeleteRollingFileAppender(Layout layout, String filename)
-      throws IOException {
-    super(layout, filename);
-  }
-
-  /**
-   * Get the maximum size that the output file is allowed to reach before being
-   * rolled over to backup files.
-   */
-  public long getMaximumFileSize() {
-    return maxFileSize;
-  }
-
-  /**
-   * Implements the usual roll over behavior.
-   *
-   * File is renamed File.yyyyMMddHHmmss and closed. A
-   * new File is created to receive further log output.
-   */
-  // synchronization not necessary since doAppend is already synced
-  public void rollOver() {
-    if (qw != null) {
-      long size = ((CountingQuietWriter) qw).getCount();
-      LogLog.debug("rolling over count=" + size);
-      // if operation fails, do not roll again until
-      // maxFileSize more bytes are written
-      nextRollover = size + maxFileSize;
-    }
-
-    this.closeFile(); // keep windows happy.
-
-    int p = fileName.lastIndexOf(".");
-    String file = p > 0 ? fileName.substring(0, p) : fileName;
-    try {
-      // This will also close the file. This is OK since multiple
-      // close operations are safe.
-      this.setFile(file, false, bufferedIO, bufferSize);
-      nextRollover = 0;
-    } catch (IOException e) {
-      if (e instanceof InterruptedIOException) {
-        Thread.currentThread().interrupt();
-      }
-      LogLog.error("setFile(" + file + ", false) call failed.", e);
-    }
-  }
-
-  public synchronized void setFile(String fileName, boolean append,
-      boolean bufferedIO, int bufferSize) throws IOException {
-    String newFileName = getLogFileName(fileName);
-    super.setFile(newFileName, append, this.bufferedIO, this.bufferSize);
-    if (append) {
-      File f = new File(newFileName);
-      ((CountingQuietWriter) qw).setCount(f.length());
-    }
-  }
-
-  /**
-   * Set the maximum size that the output file is allowed to reach before being
-   * rolled over to backup files.
-   *
-   * This method is equivalent to {@link #setMaxFileSize} except that it is
-   * required for differentiating the setter taking a long argument
-   * from the setter taking a String argument by the JavaBeans
-   * {@link java.beans.Introspector Introspector}.
-   *
-   * @see #setMaxFileSize(String)
-   */
-  public void setMaximumFileSize(long maxFileSize) {
-    this.maxFileSize = maxFileSize;
-  }
-
-  /**
-   * Set the maximum size that the output file is allowed to reach before being
-   * rolled over to backup files.
-   *
-   * In configuration files, the MaxFileSize option takes an long integer
-   * in the range 0 - 2^63. You can specify the value with the suffixes "KB",
-   * "MB" or "GB" so that the integer is interpreted being expressed
-   * respectively in kilobytes, megabytes or gigabytes. For example, the value
-   * "10KB" will be interpreted as 10240.
-   */
-  public void setMaxFileSize(String value) {
-    maxFileSize = OptionConverter.toFileSize(value, maxFileSize + 1);
-  }
-
-  protected void setQWForFiles(Writer writer) {
-    this.qw = new CountingQuietWriter(writer, errorHandler);
-  }
-
-  /**
-   * This method differentiates RollingFileAppender from its super class.
-   */
-  protected void subAppend(LoggingEvent event) {
-    super.subAppend(event);
-
-    if (fileName != null && qw != null) {
-      long size = ((CountingQuietWriter) qw).getCount();
-      if (size >= maxFileSize && size >= nextRollover) {
-        rollOver();
-      }
-    }
-  }
-
-  // Mangled file name. Append the current timestamp
-  private static String getLogFileName(String oldFileName) {
-    return oldFileName + "." + Long.toString(System.currentTimeMillis());
-  }
-}
diff --git ql/src/java/org/apache/hadoop/hive/ql/log/SlidingFilenameRolloverStrategy.java ql/src/java/org/apache/hadoop/hive/ql/log/SlidingFilenameRolloverStrategy.java
new file mode 100644
index 0000000000000000000000000000000000000000..92198e15d3dcd72d96ef4cccad7526e5404d26f3
--- /dev/null
+++ ql/src/java/org/apache/hadoop/hive/ql/log/SlidingFilenameRolloverStrategy.java
@@ -0,0 +1,81 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache license, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the license for the specific language governing permissions and
+ * limitations under the license.
+ */
+
+package org.apache.hadoop.hive.ql.log;
+
+import java.io.IOException;
+
+import org.apache.logging.log4j.core.appender.rolling.DirectFileRolloverStrategy;
+import org.apache.logging.log4j.core.appender.rolling.RollingFileManager;
+import org.apache.logging.log4j.core.appender.rolling.RolloverDescription;
+import org.apache.logging.log4j.core.appender.rolling.RolloverDescriptionImpl;
+import org.apache.logging.log4j.core.appender.rolling.RolloverStrategy;
+import org.apache.logging.log4j.core.appender.rolling.action.AbstractAction;
+import org.apache.logging.log4j.core.appender.rolling.action.Action;
+import org.apache.logging.log4j.core.config.Configuration;
+import org.apache.logging.log4j.core.config.plugins.Plugin;
+import org.apache.logging.log4j.core.config.plugins.PluginConfiguration;
+import org.apache.logging.log4j.core.config.plugins.PluginFactory;
+
+/**
+ * A RolloverStrategy that does not rename files and
+ * uses file names that are based on a millisecond timestamp.
+ */
+@Plugin(name = "SlidingFilenameRolloverStrategy",
+    category = "Core",
+    printObject = true)
+public class SlidingFilenameRolloverStrategy
+    implements RolloverStrategy, DirectFileRolloverStrategy {
+
+  @PluginFactory
+  public static SlidingFilenameRolloverStrategy createStrategy(
+      @PluginConfiguration Configuration config) {
+    return new SlidingFilenameRolloverStrategy();
+  }
+
+  /**
+   * Do rollover with no renaming.
+   */
+  @Override
+  public RolloverDescription rollover(RollingFileManager manager)
+      throws SecurityException {
+    Action shiftToNextActiveFile = new AbstractAction() {
+      @Override
+      public boolean execute() throws IOException {
+        return true;
+      }
+    };
+    return new RolloverDescriptionImpl("ignored", false, shiftToNextActiveFile,
+        null);
+  }
+
+  /**
+   * Get a new filename
+   */
+  @Override
+  public String getCurrentFileName(RollingFileManager rollingFileManager) {
+    String pattern = rollingFileManager.getPatternProcessor().getPattern();
+    return getLogFileName(pattern);
+  }
+
+  /**
+   * @return Mangled file name formed by appending the current timestamp
+   */
+  private static String getLogFileName(String oldFileName) {
+    return oldFileName + "." + Long.toString(System.currentTimeMillis());
+  }
+}
\ No newline at end of file
diff --git ql/src/test/org/apache/hadoop/hive/ql/log/TestSlidingFilenameRolloverStrategy.java ql/src/test/org/apache/hadoop/hive/ql/log/TestSlidingFilenameRolloverStrategy.java
new file mode 100644
index 0000000000000000000000000000000000000000..ea5dec006633098d2ebdc35b7b214dced7d42001
--- /dev/null
+++ ql/src/test/org/apache/hadoop/hive/ql/log/TestSlidingFilenameRolloverStrategy.java
@@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.log;
+
+import java.io.IOException;
+import java.nio.file.DirectoryStream;
+import java.nio.file.FileAlreadyExistsException;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.hive.ql.hooks.LineageLogger;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.apache.logging.log4j.core.Appender;
+import org.apache.logging.log4j.core.config.LoggerConfig;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Test configuration and use of SlidingFilenameRolloverStrategy
+ * @see SlidingFilenameRolloverStrategy
+ */
+public class TestSlidingFilenameRolloverStrategy {
+
+  // properties file used to configure log4j2
+  private static final String PROPERTIES_FILE =
+      "log4j2_test_sliding_rollover.properties";
+
+  // file pattern that is set in PROPERTIES_FILE
+  private static final String FILE_PATTERN = "./target/tmp/log/slidingTest.log";
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+    System.setProperty("log4j.configurationFile", PROPERTIES_FILE);
+  }
+
+  @AfterClass
+  public static void tearDown() {
+    System.clearProperty("log4j.configurationFile");
+    LogManager.shutdown();
+  }
+
+  @Test
+  public void testSlidingLogFiles() throws Exception {
+    assertEquals("bad props file", PROPERTIES_FILE,
+        System.getProperty("log4j.configurationFile"));
+
+    // Where the log files will be written
+    Path logTemplate = FileSystems.getDefault().getPath(FILE_PATTERN);
+    String fileName = logTemplate.getFileName().toString();
+    Path parent = logTemplate.getParent();
+    try {
+      Files.createDirectory(parent);
+    } catch (FileAlreadyExistsException e) {
+      // OK, fall through.
+    }
+
+    // Delete any stale log files left around from previous failed tests
+    deleteLogFiles(parent, fileName);
+
+    Logger logger = LogManager.getLogger(LineageLogger.class);
+
+    // Does the logger config look correct?
+    org.apache.logging.log4j.core.Logger coreLogger =
+        (org.apache.logging.log4j.core.Logger) logger;
+    LoggerConfig loggerConfig = coreLogger.get();
+    Map<String, Appender> appenders = loggerConfig.getAppenders();
+    assertNotNull("sliding appender is missing", appenders.get("sliding"));
+
+    // Do some logging and force log rollover
+    int NUM_LOGS = 7;
+    logger.debug("Debug Message Logged !!!");
+    logger.info("Info Message Logged !!!");
+
+    String errorString = "Error Message Logged ";
+    for (int i = 0; i < NUM_LOGS; i++) {
+      TimeUnit.MILLISECONDS.sleep(100);
+      // log an exception - this produces enough text to force a new logfile
+      // (as appender.sliding.policies.size.size=1KB)
+      logger.error(errorString + i,
+          new RuntimeException("part of a test"));
+    }
+
+    // Check log files look OK
+    DirectoryStream<Path> stream =
+        Files.newDirectoryStream(parent, fileName + ".*");
+    int count = 0;
+    for (Path path : stream) {
+      count++;
+      String contents = new String(Files.readAllBytes(path), "UTF-8");
+      // There should be one exception message per file
+      assertTrue("File " + path + " did not have expected content",
+          contents.contains(errorString));
+      String suffix = StringUtils.substringAfterLast(path.toString(), ".");
+      // suffix should be a timestamp
+      try {
+        long timestamp = Long.parseLong(suffix);
+      } catch (NumberFormatException e) {
+        fail("Suffix " + suffix + " is not a long");
+      }
+    }
+    assertEquals("bad count of log files", NUM_LOGS, count);
+
+    // Check there is no log file without the suffix
+    assertFalse("file should not exist:" + logTemplate,
+        Files.exists(logTemplate));
+
+    // Clean up
+    deleteLogFiles(parent, fileName);
+  }
+
+  private void deleteLogFiles(Path parent, String fileName) throws IOException {
+    DirectoryStream<Path> stream =
+        Files.newDirectoryStream(parent, fileName + ".*");
+    for (Path path : stream) {
+      Files.delete(path);
+    }
+  }
+}
diff --git ql/src/test/resources/log4j2_test_sliding_rollover.properties ql/src/test/resources/log4j2_test_sliding_rollover.properties
new file mode 100644
index 0000000000000000000000000000000000000000..b88b79af0490eed51c76dd813ef380e6d2937405
--- /dev/null
+++ ql/src/test/resources/log4j2_test_sliding_rollover.properties
@@ -0,0 +1,69 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Log4j2 config that is used by TestSlidingFilenameRolloverStrategy
+
+status = INFO
+name = Slider
+packages = org.apache.hadoop.hive.ql.log
+
+# list of properties
+property.hive.log.level = DEBUG
+property.hive.root.logger = console
+property.hive.log.dir = ${sys:test.tmp.dir}/log
+property.hive.log.file = hive.log
+property.hive.test.console.log.level = INFO
+
+# list of all appenders
+appenders = console, sliding
+
+# list of loggers that are not root
+loggers = lineage
+
+# console appender
+appender.console.type = Console
+appender.console.name = console
+appender.console.target = SYSTEM_ERR
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d{ISO8601} %5p [%t] %c{2}: %m%n
+
+# root logger
+rootLogger.level = ${sys:hive.log.level}
+rootLogger.appenderRefs = root, console
+rootLogger.appenderRef.root.ref = ${sys:hive.root.logger}
+rootLogger.appenderRef.console.ref = console
+rootLogger.appenderRef.console.level = ${sys:hive.test.console.log.level}
+
+# sliding appender
+appender.sliding.type = RollingFile
+appender.sliding.name = sliding
+appender.sliding.filePattern = ./target/tmp/log/slidingTest.log
+appender.sliding.layout.type = PatternLayout
+appender.sliding.layout.pattern = %m%n
+appender.sliding.layout.charset = UTF-8
+appender.sliding.policies.type = Policies
+appender.sliding.policies.size.type = SizeBasedTriggeringPolicy
+appender.sliding.policies.size.size=1KB
+appender.sliding.strategy.type = SlidingFilenameRolloverStrategy
+
+# lineage logger
+logger.lineage.name = org.apache.hadoop.hive.ql.hooks.LineageLogger
+logger.lineage.level = debug
+logger.lineage.appenderRefs = sliding
+logger.lineage.appenderRef.file.ref = sliding
diff --git testutils/ptest2/pom.xml testutils/ptest2/pom.xml
index 8563b54ca2d92659c0d5bccec2a4c6b7672761b5..e364817d9fd87b712d84dfa72e2ec250aa9a5359 100644
--- testutils/ptest2/pom.xml
+++ testutils/ptest2/pom.xml
@@ -26,7 +26,7 @@ limitations under the License.
   hive-ptest
   UTF-8
-  2.6.2
+  2.8.2
   3.2.16.RELEASE
   2.0.0
   ${basedir}/../../checkstyle/
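
For reviewers, the rollover naming scheme this patch introduces can be illustrated standalone. The sketch below (a hypothetical SlidingFilenameDemo class, not part of the patch) mirrors the private getLogFileName helper that both the old appender and the new strategy share: each rollover appends a fresh millisecond timestamp to the configured pattern, so logging simply slides to a new file and nothing is renamed or deleted.

```java
// Hypothetical demo class, not part of the patch: shows the filename
// mangling used by SlidingFilenameRolloverStrategy on each rollover.
public class SlidingFilenameDemo {

  /** Mirror of the strategy's private getLogFileName helper. */
  public static String getLogFileName(String oldFileName) {
    // Append a millisecond timestamp to the configured file pattern.
    return oldFileName + "." + Long.toString(System.currentTimeMillis());
  }

  public static void main(String[] args) throws InterruptedException {
    String pattern = "./target/tmp/log/slidingTest.log";
    String first = getLogFileName(pattern);
    Thread.sleep(5); // ensure the clock advances between rollovers
    String second = getLogFileName(pattern);
    // Each rollover switches to a strictly newer file name, so old
    // files are left untouched; prints e.g. slidingTest.log.15118...
    System.out.println(first);
    System.out.println(second);
  }
}
```

This is why the test above can assert one timestamp-suffixed file per forced rollover and no file at the bare pattern path.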