FileContext keeps a pointer to the default file system (i.e. the root, or "slash", file system).
Any method that passes a URI for a different file system results in a new instance of that file system (an AbstractFileSystem) being created.
So the question is: should we add a cache? I filed this Jira to explore that question.
Option 1: Do not add a cache, but do keep a pointer to the default file system (i.e. the slash file system).
It is okay to create a new Java object for each URI file system being accessed. The RPC layer reuses connections to the same HDFS, so caching file systems is not necessary to reuse a connection. But we do need to add an exit hook that closes open leases on JVM exit. (Note that the old FileSystem has an exit hook on the cache, which indirectly flushes the open leases on exit or on close.)
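A minimal sketch of the exit hook proposed for Option 1: on JVM shutdown, close every live file system instance so open leases are released. The class name, the registry, and the register/closeAll methods here are illustrative assumptions, not existing Hadoop API; AutoCloseable stands in for AbstractFileSystem.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class FsShutdownHook {
  // Live file system instances handed out so far (AutoCloseable is a
  // stand-in for AbstractFileSystem in this sketch).
  private static final Set<AutoCloseable> OPEN_FS =
      ConcurrentHashMap.newKeySet();

  static {
    // Single JVM-wide hook, analogous to the old FileSystem cache hook.
    Runtime.getRuntime().addShutdownHook(new Thread(FsShutdownHook::closeAll));
  }

  public static void register(AutoCloseable fs) {
    OPEN_FS.add(fs);
  }

  public static void unregister(AutoCloseable fs) {
    OPEN_FS.remove(fs);
  }

  // Close everything; best effort, since we are shutting down anyway.
  public static void closeAll() {
    for (AutoCloseable fs : OPEN_FS) {
      try {
        fs.close();          // closing flushes any open leases
      } catch (Exception e) {
        // ignore: nothing useful to do during JVM exit
      }
    }
    OPEN_FS.clear();
  }
}
```

Because there is no cache in this option, the hook needs its own registry of open instances; that is the bookkeeping the old FileSystem got for free from its cache.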
Option 2: Add an AbstractFileSystem cache. This raises the following issue: HADOOP-4655 recently added FileSystem#newInstance() so that Facebook's Scribe subsystem could bypass the cache. Doing this is a little ugly in general, because the notion of the cache leaks through the interface; further, it is hard to do with FileContext/AbstractFileSystem because applications do not create instances of AbstractFileSystem directly (FileContext creates them automatically as needed).
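To make the Option 2 concern concrete, here is a hypothetical sketch of such a cache, keyed by URI scheme and authority, with a bypass method analogous to FileSystem#newInstance(). The class and method names (FsCache, get, newInstance) are illustrative assumptions, not existing Hadoop API.

```java
import java.net.URI;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class FsCache<F> {
  // One instance per scheme+authority, like the old FileSystem cache.
  private final Map<String, F> cache = new ConcurrentHashMap<>();
  private final Function<URI, F> factory;

  public FsCache(Function<URI, F> factory) {
    this.factory = factory;
  }

  private static String key(URI uri) {
    return uri.getScheme() + "://" + uri.getAuthority();
  }

  // Normal path: return the shared, cached instance for this URI.
  public F get(URI uri) {
    return cache.computeIfAbsent(key(uri), k -> factory.apply(uri));
  }

  // Bypass path (the FileSystem#newInstance() analogue): always create
  // a fresh, uncached instance. This is the method that leaks the
  // existence of the cache through the interface.
  public F newInstance(URI uri) {
    return factory.apply(uri);
  }
}
```

The awkwardness is visible in the API itself: callers must know a cache exists in order to choose between get() and newInstance(), and with FileContext there is no natural place to expose that choice, since applications never ask for an AbstractFileSystem by hand.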