Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Resolved
- Affects Version/s: 3.2.2
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
Recently I used fs.create(path, true /* createOrOverwrite */) to write data into a Parquet file A. When a SocketTimeoutException occurred, such as:

"org.apache.hadoop.io.retry.RetryInvocationHandler [] - java.net.SocketTimeoutException: Call From xxx to namenodexxx:8888 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/node:33416 remote=namenode:8888]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.create over namenode:8888. Trying to failover immediately."

I then found that file A had a size of zero and could not be read ("file a is not a parquet file"). The HDFS audit log showed two create calls arriving via two different Routers: the client's retry after the timeout apparently issued a second create with overwrite=true, truncating the file that the first call had already written. So I think overwrite is not safe in some situations.
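The data-loss window described above comes from calling create with overwrite=true directly on the final path, so a retried create can truncate a file that an earlier call already wrote. A common client-side mitigation is to write to a temporary path and then atomically rename it into place, so the destination path is never observed as a zero-length file. Below is a minimal sketch of that pattern using java.nio.file on the local filesystem as a stand-in for the HDFS FileSystem API (class and method names here are illustrative, not part of Hadoop):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeOverwrite {
    // Write bytes to a sibling temp file, then atomically move it over the
    // target. A reader of the target path sees either the old complete file
    // or the new complete file -- never a truncated, zero-length one. A
    // retried write simply replaces the temp file and renames again.
    static void writeAtomically(Path target, byte[] data) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, data); // create/truncate affects only the temp file
        Files.move(tmp, target,
                StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("safe-overwrite");
        Path target = dir.resolve("a.parquet");
        writeAtomically(target, "v1".getBytes(StandardCharsets.UTF_8));
        // Simulate a retry: the second write replaces the file atomically
        // instead of truncating the destination in place.
        writeAtomically(target, "v2".getBytes(StandardCharsets.UTF_8));
        System.out.println(
                new String(Files.readAllBytes(target), StandardCharsets.UTF_8));
    }
}
```

On HDFS the analogous calls would be FileSystem.create on a temporary path followed by rename; note that with Router-Based Federation a retried RPC can still land on a different Router, which is the scenario the linked HDFS-15079 fix addresses on the server side.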
Issue Links
- is fixed by HDFS-15079 RBF: Client maybe get an unexpected result with network anomaly (Resolved)