Details
Description
When the driver fails over, it reads the WAL from HDFS by calling WriteAheadLogBackedBlockRDD.getBlockFromWriteAheadLog(). That method requires a dummy local path to satisfy its parameter requirements, but on Windows the path contains a drive letter and colon, which is not valid in a Hadoop path. I removed the potential drive letter and colon.
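The fix can be sketched roughly as follows. This is a minimal illustration, not Spark's actual patch: the class and method names are hypothetical, and it assumes the problem is exactly a leading Windows drive-letter prefix (e.g. "C:") that Hadoop rejects because colons are not allowed inside path components.

```java
// Hedged sketch: strip a leading Windows drive letter ("C:", "D:", ...)
// from a dummy local path before handing it to Hadoop, since a colon is
// not valid inside a Hadoop path component. Names here are illustrative.
public class DummyPathFix {
    static String stripDriveLetter(String path) {
        // Drops a leading "<letter>:" prefix, e.g. "C:\tmp\wal" -> "\tmp\wal".
        // Paths without a drive letter are returned unchanged.
        return path.replaceFirst("^[A-Za-z]:", "");
    }

    public static void main(String[] args) {
        System.out.println(stripDriveLetter("C:\\tmp\\nonExistentDir"));
        System.out.println(stripDriveLetter("/tmp/nonExistentDir"));
    }
}
```

On Linux the input has no drive letter, so the transformation is a no-op and behavior is unchanged there.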
I found one email on the spark-user mailing list that discussed this bug: https://www.mail-archive.com/user@spark.apache.org/msg55030.html
Attachments
Issue Links
- duplicates
- SPARK-25778 WriteAheadLogBackedBlockRDD in YARN Cluster Mode Fails due lack of access to tmpDir from $PWD to HDFS (Resolved)