Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 1.2.0
- Component/s: None
- Environment: RHEL 6.2 64-bit
Description
If I use HDFSEventSink and set the codec to snappy, the sink writes files to HDFS with the ".snappy" extension, but the content of those files is not in snappy format when the snappy native libraries aren't found. The log files mention this:
2012-05-11 19:38:49,868 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2012-05-11 19:38:49,868 WARN snappy.LoadSnappy: Snappy native library not loaded
...and I think this should be an error rather than a warning: the sink shouldn't write any data to HDFS if it isn't in the format the config file asks for (i.e., not compressed with snappy). The config file I used is:
agent.channels = c1
agent.sources = r1
agent.sinks = k1
#
agent.channels.c1.type = MEMORY
#
agent.sources.r1.channels = c1
agent.sources.r1.type = SEQ
#
agent.sinks.k1.channel = c1
agent.sinks.k1.type = HDFS
agent.sinks.k1.hdfs.path = hdfs://<host>:<port>/<path>
agent.sinks.k1.hdfs.fileType = DataStream
agent.sinks.k1.hdfs.codeC = SnappyCodec
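For comparison, the Flume User Guide documents compressed output by pairing hdfs.codeC with hdfs.fileType = CompressedStream (rather than DataStream), with "snappy" among the documented codec values. A minimal sketch of the HDFS sink section under that setting, keeping the placeholders from the config above:

agent.sinks.k1.channel = c1
agent.sinks.k1.type = HDFS
agent.sinks.k1.hdfs.path = hdfs://<host>:<port>/<path>
agent.sinks.k1.hdfs.fileType = CompressedStream
agent.sinks.k1.hdfs.codeC = snappy

Even with this setting, the sink would still need the Hadoop native snappy libraries at runtime, so the fail-fast behavior requested above (error out instead of warning and writing uncompressed data) still applies.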