FLUME-2132

Exception while syncing from Flume to HDFS


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 1.3.0
    • Fix Version/s: 1.7.0
    • Component/s: Sinks+Sources
    • Environment: Flume 1.3.0, Hadoop 1.2.0, 8 GB RAM, Intel Pentium Core 2 Duo

    Description

      I'm running Hadoop 1.2.0 and Flume 1.3.0. Everything works fine when each is run on its own. When I start my Tomcat, I get the exception below after some time.

      2013-07-17 12:40:35,640 (ResponseProcessor for block blk_5249456272858461891_436734) [WARN - org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:3015)] DFSOutputStream ResponseProcessor exception for block blk_5249456272858461891_436734
      java.net.SocketTimeoutException: 63000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/127.0.0.1:24433 remote=/127.0.0.1:50010]
      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
      at java.io.DataInputStream.readFully(DataInputStream.java:195)
      at java.io.DataInputStream.readLong(DataInputStream.java:416)
      at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:124)
      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2967)

      2013-07-17 12:40:35,800 (hdfs-hdfs-write-roll-timer-0) [WARN - org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:277)] failed to close() HDFSWriter for file (hdfs://localhost:9000/flume/Broadsoft_App2/20130717/jboss/Broadsoft_App2.1374044838498.tmp). Exception follows.
      java.io.IOException: All datanodes 127.0.0.1:50010 are bad. Aborting...
      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3096)
      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2100(DFSClient.java:2589)
      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2793)
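
      The 63000 ms figure matches the Hadoop 1.x default dfs.socket.timeout of 60000 ms plus the per-datanode read-timeout extension, so the ResponseProcessor is giving up waiting for the pipeline ack from the single local datanode. A minimal mitigation sketch, assuming the datanode is merely slow rather than dead (the property names are the standard Hadoop 1.x client settings; the 120000 ms values are illustrative, not recommendations):

      import org.apache.hadoop.conf.Configuration;

      // Sketch only: raise the DFSClient socket timeouts involved in the
      // pipeline ack read above; values are illustrative.
      Configuration conf = new Configuration();
      conf.set("dfs.socket.timeout", "120000");                // read/ack timeout (ms)
      conf.set("dfs.datanode.socket.write.timeout", "120000"); // pipeline write timeout (ms)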

      Java snippet for the Configuration

      configuration.set("fs.default.name", "hdfs://localhost:9000");
      configuration.set("mapred.job.tracker", "hdfs://localhost:9000");

      I'm using a single datanode. My Java program just reads the files that Flume wrote to HDFS and prints them to the screen, nothing more.
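
      A minimal sketch of such a reader, assuming the Configuration shown above (the class name HdfsCat and the file path are illustrative):

      import java.io.InputStream;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.IOUtils;

      public class HdfsCat {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              conf.set("fs.default.name", "hdfs://localhost:9000");
              FileSystem fs = FileSystem.get(conf);
              // Illustrative path; use a file Flume has already rolled and closed (no .tmp suffix).
              Path path = new Path("/flume/Broadsoft_App2/20130717/jboss/Broadsoft_App2.1374044838498");
              InputStream in = null;
              try {
                  in = fs.open(path);
                  IOUtils.copyBytes(in, System.out, 4096, false); // print file contents to stdout
              } finally {
                  IOUtils.closeStream(in);
              }
          }
      }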


          People

            Assignee: Unassigned
            Reporter: Divya R (avyakrita)
            Votes: 0
            Watchers: 2
