Flume / FLUME-2019

(SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:460)]

    Details

    • Type: Question
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: v1.3.1
    • Fix Version/s: v1.3.1
    • Component/s: Sinks+Sources
    • Labels:
    • Environment:

      Ubuntu 12.04

    • Release Note:
      Build related issue

      Description

      I am getting the error below when I try to upload a file into Hadoop HDFS.

      2013-04-23 12:06:39,141 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:460)] process failed
      java.lang.NoSuchMethodError: com.google.common.cache.CacheBuilder.build()Lcom/google/common/cache/Cache;

      Flume.conf is as below
      ----------------------

      a1.sources = r1
      a1.sinks = k1
      a1.channels = c1

      a1.sources.r1.type = netcat
      a1.sources.r1.bind = localhost
      a1.sources.r1.port = 44444

      a1.sinks.k1.type = hdfs
      a1.sinks.k1.hdfs.path = hdfs://localhost:8020/projects
      a1.sinks.k1.hdfs.maxOpenFiles = 10000

      a1.channels.c1.type = memory
      a1.channels.c1.capacity = 100000
      a1.channels.c1.transactionCapacity = 100

      a1.sources.r1.channels = c1
      a1.sinks.k1.channel = c1

        Activity

        Israel Ekpo added a comment -

        From the exception message, it looks like the JAR file you are using is not the same version as the one the library was compiled with.

        You might want to check your lib directory to make sure there are no JARs in your classpath that are overriding this dependency.

        Also, I think you should close this issue and send an email to the user mailing list (user@flume.apache.org) instead.
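        This check can be scripted. A minimal sketch: it uses a throwaway demo directory with simulated JAR names to show the technique; in a real check, point `LIB_DIR` at your Flume installation's `lib/` directory (the path is an assumption, e.g. `$FLUME_HOME/lib`):

```shell
# Detect duplicate Guava JARs, the usual cause of this NoSuchMethodError.
# LIB_DIR here is a throwaway demo dir with simulated JAR files; in a real
# check, set it to your Flume installation's lib directory instead.
LIB_DIR="$(mktemp -d)"
touch "$LIB_DIR/guava-10.0.1.jar" "$LIB_DIR/guava-14.0.jar"  # simulated conflict

# More than one Guava JAR on the classpath usually means a version conflict.
count=$(find "$LIB_DIR" -name 'guava-*.jar' | wc -l)
if [ "$count" -gt 1 ]; then
  echo "possible conflict: $count Guava JARs found"
  find "$LIB_DIR" -name 'guava-*.jar' | sort
fi
rm -rf "$LIB_DIR"
```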

        Kanikkannan added a comment -

        I will rebuild with the appropriate JAR files and test. Closing the issue.

        adarsh added a comment -

        Hi,

        I am getting the same error with Flume 1.5.2:

        2015-01-08 05:15:21,941 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:463)] HDFS IO error
        org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hdfs/gy/lpn1/FlumeData.1420710148522.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
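        This second stack trace points at HDFS itself rather than Flume: the NameNode could not place the block on any DataNode, which commonly means the DataNode is out of disk space, not heartbeating, or unreachable from the client. A hedged first-pass diagnostic, using the standard Hadoop CLI and guarded so the script is safe to run anywhere:

```shell
# First checks for "could only be replicated to 0 nodes": confirm the
# DataNode is live and has free space, and look for block-level problems.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfsadmin -report || true   # live DataNodes, capacity, remaining space
  hdfs fsck / -blocks || true     # under-replicated / corrupt block summary
  checked="cluster"
else
  echo "hdfs CLI not on PATH; run these commands on a cluster node"
  checked="local"
fi
```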

        Conf file:

        agent.sources = Monitor
        agent.channels = memoryChannel
        agent.sinks = hdfsSink

        # The channel can be defined as follows.
        agent.sources.Monitor.channels = memoryChannel
        agent.sources.Monitor.type = exec
        agent.sources.Monitor.command = cat /apps/scope/alerts/logs/monitor.log

        # Each sink's type must be defined.
        agent.sinks.hdfsSink.type = hdfs
        agent.sinks.hdfsSink.hdfs.path = hdfs://11.120.93.20:8020/user/hdfs/abc
        agent.sinks.hdfsSink.channel = memoryChannel
        agent.sinks.hdfsSink.rollCount = 6000
        agent.sinks.hdfsSink.rollInterval = 15
        agent.sinks.hdfsSink.rollSize = 209715200
        agent.sinks.hdfsSink.batchSize = 1000
        agent.sinks.hdfsSink.fileType = DataStream
        agent.sinks.hdfsSink.callTimeout = 3600000

        # Each channel's type is defined.
        agent.channels.memoryChannel.type = memory

        # Other config values specific to each type of channel (sink or source)
        # can be defined as well. In this case, it specifies the capacity of
        # the memory channel.
        agent.channels.memoryChannel.capacity = 100
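        One side observation on the config above (not a confirmed fix for the replication error): the Flume HDFS sink reads its roll and batch settings from the `hdfs.`-prefixed namespace, so unprefixed lines such as `agent.sinks.hdfsSink.rollCount` are silently ignored and the defaults apply. The prefixed form would be:

```
agent.sinks.hdfsSink.hdfs.rollCount = 6000
agent.sinks.hdfsSink.hdfs.rollInterval = 15
agent.sinks.hdfsSink.hdfs.rollSize = 209715200
agent.sinks.hdfsSink.hdfs.batchSize = 1000
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.callTimeout = 3600000
```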

        Please suggest a solution.

        Thanks.

        adarsh added a comment -

        I checked all Hadoop-related JARs. Still the same error.

        Thanks

        adarsh added a comment -

        Hi Kanikkannan, did you get any solution for this issue? I am facing the same problem and have tried multiple times with different JARs.

        Thanks

        Hari Shreedharan added a comment -

        It looks like Guava is missing from your classpath. Can you run:
        flume-ng -f <config-file> -c <conf-dir> -d


          People

          • Assignee: Unassigned
          • Reporter: Kanikkannan
          • Votes: 0
          • Watchers: 4

            Dates

            • Created:
            • Updated:
            • Resolved:
