Add a test to verify that userinfo data is (correctly) used to differentiate entries in the FS cache, so that URIs differing only in userinfo are treated as different filesystems.
- This is critical for wasb, which uses the username part of the URI to identify the container, in a path like wasb:email@example.com. This works in Hadoop, but SPARK-22587 shows that it may not be followed everywhere (and given there's no documentation, who can fault them?)
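As a sketch of what the cache test should assert, the snippet below uses a hypothetical `cacheKey` helper (Hadoop's actual `FileSystem.Cache.Key` also factors in the UGI, which is omitted here): keying on the full authority rather than the host keeps the userinfo in the key, so two containers on the same account map to different cache entries.

```java
import java.net.URI;

public class UserInfoCacheKey {
    // Hypothetical sketch of a cache key in the spirit of FileSystem.Cache.Key:
    // scheme + authority, where authority includes the userinfo component.
    static String cacheKey(URI uri) {
        return uri.getScheme() + "://" + uri.getAuthority();
    }

    public static void main(String[] args) {
        URI a = URI.create("wasb://container1@account.blob.core.windows.net/");
        URI b = URI.create("wasb://container2@account.blob.core.windows.net/");
        // Same host for both URIs...
        System.out.println(a.getHost().equals(b.getHost()));
        // ...but the authorities (and so the cache keys) differ.
        System.out.println(cacheKey(a).equals(cacheKey(b)));
    }
}
```

A key built from `getHost()` alone would collapse both URIs onto one cache entry, handing callers the wrong container's filesystem.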
- AbstractFileSystem.checkPath looks suspicious: its path validation appears to check only the host, not the full authority. That needs a test too.
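To make the suspected gap concrete, here is a minimal illustration (the method names `sameHost`/`sameAuthority` are hypothetical, not Hadoop's): a host-only comparison accepts a path whose userinfo differs from the filesystem's, while comparing the whole authority rejects it.

```java
import java.net.URI;

public class CheckPathAuthority {
    // Host-only comparison, resembling the suspected checkPath behaviour.
    static boolean sameHost(URI fsUri, URI path) {
        return fsUri.getHost().equalsIgnoreCase(path.getHost());
    }

    // Full-authority comparison, which also covers the userinfo component.
    static boolean sameAuthority(URI fsUri, URI path) {
        return fsUri.getAuthority().equalsIgnoreCase(path.getAuthority());
    }

    public static void main(String[] args) {
        URI fs   = URI.create("wasb://container1@account.blob.core.windows.net/");
        URI path = URI.create("wasb://container2@account.blob.core.windows.net/data");
        // Host-only validation wrongly accepts a path from a different container.
        System.out.println(sameHost(fs, path));
        // Authority validation catches the mismatch.
        System.out.println(sameAuthority(fs, path));
    }
}
```

A regression test could assert that checkPath rejects the second URI when the filesystem was created for the first.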
- And we should cut the @LimitedPrivate(HDFS, MapReduce) annotation from Path.makeQualified. If MR needs it, it should be considered open to all apps using the Hadoop APIs. Until I looked at the code I thought it was...
- relates to SPARK-22587 "Spark job fails if fs.defaultFS and application jar are different url"