Hadoop HDFS / HDFS-17268

When a SocketTimeoutException happens, overwrite mode can delete old data and make the file empty


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Resolved
    • Affects Version/s: 3.2.2
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      Recently, I used fs.create(path, true /* createOrOverwrite */) to write data into a Parquet file. A SocketTimeoutException happened during the create call:

      "org.apache.hadoop.io.retry.RetryInvocationHandler [] - java.net.SocketTimeoutException: Call From xxx to namenodexxx:8888 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/node:33416 remote=namenode:8888]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.create over namenode:8888. Trying to failover immediately."

      Afterwards I found that the size of the file was zero, and reading it failed with the error "file a is not a parquet file". The HDFS audit log showed two create calls from two different routers, so I think overwrite is not safe in some situations: the client failed over and retried the create, and the second create with overwrite=true appears to have re-truncated the file.
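      A common client-side mitigation (not part of this report; the class and method names below are illustrative) is to avoid creating the destination with overwrite=true and instead write to a temporary sibling path, then rename it over the destination. The rename is a single namenode operation, so a timed-out and retried create can only affect the temporary file, never the existing data. A minimal sketch using the public FileSystem/FileContext APIs:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileContext;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Options;
      import org.apache.hadoop.fs.Path;

      public class SafeOverwrite {
          /**
           * Writes data to a temporary sibling of dst, then renames it over
           * dst. If the create RPC times out and is retried through another
           * router, only the temporary file is re-truncated; the old data at
           * dst stays intact until the rename succeeds.
           */
          public static void writeSafely(Configuration conf, Path dst, byte[] data)
                  throws Exception {
              FileSystem fs = dst.getFileSystem(conf);
              Path tmp = new Path(dst.getParent(), "." + dst.getName() + ".tmp");
              try (FSDataOutputStream out = fs.create(tmp, true /* overwrite tmp only */)) {
                  out.write(data);
              }
              // FileContext.rename with OVERWRITE replaces dst in a single
              // namenode operation, so readers never observe a zero-length file.
              FileContext fc = FileContext.getFileContext(conf);
              fc.rename(tmp, dst, Options.Rename.OVERWRITE);
          }
      }

      Note that under Router-Based Federation the rename is only safe if tmp and dst resolve to the same subcluster, which is why the temporary file is placed in the same directory as the destination.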


            People

              Assignee: Unassigned
              Reporter: katty0924 (katty he)
