Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- 2.4.0
- None
Description
[info] - download list of files to local (10 milliseconds)
[info]   "http:///work/apache/spark/target/tmp/spark-a17bc160-641b-41e1-95be-a2e31b175e09/testJar3393247632492201277.jar" did not start with substring "file:" (SparkSubmitSuite.scala:1022)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.MatchersHelper$.indicateFailure(MatchersHelper.scala:340)
[info]   at org.scalatest.Matchers$ShouldMethodHelper$.shouldMatcher(Matchers.scala:6668)
[info]   at org.scalatest.Matchers$AnyShouldWrapper.should(Matchers.scala:6704)
[info]   at org.apache.spark.deploy.SparkSubmitSuite.org$apache$spark$deploy$SparkSubmitSuite$$testRemoteResources(SparkSubmitSuite.scala:1022)
[info]   at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$18.apply$mcV$sp(SparkSubmitSuite.scala:962)
[info]   at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$18.apply(SparkSubmitSuite.scala:962)
[info]   at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$18.apply(SparkSubmitSuite.scala:962)
The test fails because Hadoop 2.9 supports http as a file system scheme, while the test assumes the Hadoop libraries do not, and therefore expects the http resource to be downloaded to a local "file:" path. I also found a couple of other bugs in the test itself (the code for the feature is fine).
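The failure mode can be illustrated with a small self-contained sketch (all names here are hypothetical, not Spark's actual API): whether a remote URI ends up as a local "file:" path depends on whether the linked Hadoop libraries claim a FileSystem for its scheme, so upgrading to Hadoop 2.9 changes the outcome for "http" URIs without any change to Spark's code.

```scala
// Illustrative sketch only; `resolve` and its scheme sets are made up to
// mirror the behavior described above, not copied from SparkSubmit.
object ResourceFetchSketch {
  // If Hadoop can open the scheme directly, the URI is left as-is for
  // Hadoop to read; otherwise it is downloaded to a local file: path.
  def resolve(uri: String, hadoopSchemes: Set[String]): String = {
    val scheme = uri.takeWhile(_ != ':')
    if (hadoopSchemes.contains(scheme)) uri
    else s"file:/tmp/downloaded/${uri.split('/').last}"
  }

  def main(args: Array[String]): Unit = {
    val pre29  = Set("hdfs")          // Hadoop < 2.9: no http file system
    val post29 = Set("hdfs", "http")  // Hadoop >= 2.9: http is supported
    // Pre-2.9 the result starts with "file:", matching the test's assertion;
    // post-2.9 the URI stays "http:", which is what makes the test fail.
    println(resolve("http://host/a.jar", pre29))
    println(resolve("http://host/a.jar", post29))
  }
}
```

The sketch shows why the assertion `should startWith ("file:")` is wrong once the Hadoop version is variable: the expected prefix depends on the environment, not on Spark.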