Description
fs = createFileSystem(uri, conf);
synchronized (this) { // refetch the lock again
  FileSystem oldfs = map.get(key);
  if (oldfs != null) { // a file system is created while lock is releasing
    fs.close(); // close the new file system
    return oldfs;  // return the old file system
  }

  // now insert the new file system into the map
  if (map.isEmpty()
      && !ShutdownHookManager.get().isShutdownInProgress()) {
    ShutdownHookManager.get().addShutdownHook(clientFinalizer,
        SHUTDOWN_HOOK_PRIORITY);
  }
  fs.key = key;
  map.put(key, fs);
  if (conf.getBoolean(
      FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
    toAutoClose.add(key);
  }
  return fs;
}
The synchronized block now includes the ShutdownHook registration, which ends up invoking

HookEntry(Runnable hook, int priority) {
  this(hook, priority,
      getShutdownTimeout(new Configuration()),
      TIME_UNIT_DEFAULT);
}

which means a "new Configuration()" is constructed inside the locked section.
This indirectly hurts the cache-hit scenarios as well: while the lock on this is held, the other synchronized section cannot be entered either, so even threads that would find their FileSystem already cached end up blocked.
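The blocking between the two paths can be reproduced in isolation. This is a plain-Java sketch, not Hadoop code: a slow insert holding the monitor stalls an unrelated lookup for the full duration of the insert, just as the Configuration construction does in the cache.

```java
import java.util.concurrent.CountDownLatch;

// Sketch of the contention: both the cache-hit and cache-miss paths
// synchronize on the same monitor, so a slow insert (standing in for the
// "new Configuration()" call under the lock) blocks an unrelated lookup.
public class MonitorContention {
    private final Object lock = new Object();

    // Returns how long a "cache hit" had to wait while an insert held the lock.
    long blockedLookupMillis() {
        CountDownLatch lockHeld = new CountDownLatch(1);
        Thread slowInsert = new Thread(() -> {
            synchronized (lock) {
                lockHeld.countDown(); // signal that the monitor is held
                try {
                    Thread.sleep(200); // stands in for expensive work under the lock
                } catch (InterruptedException ignored) {
                }
            }
        });
        slowInsert.start();
        try {
            lockHeld.await(); // the insert thread definitely owns the monitor now
            long start = System.nanoTime();
            synchronized (lock) {
                // a cache-hit lookup would happen here
            }
            slowInsert.join();
            return (System.nanoTime() - start) / 1_000_000;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The lookup thread does no work of its own, yet it waits roughly as long as the insert holds the monitor.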
"I/O Setup 0" State: BLOCKED
CPU usage on sample: 6ms
org.apache.hadoop.fs.FileSystem$Cache.getInternal(URI, Configuration, FileSystem$Cache$Key) FileSystem.java:3345
org.apache.hadoop.fs.FileSystem$Cache.get(URI, Configuration) FileSystem.java:3320
org.apache.hadoop.fs.FileSystem.get(URI, Configuration) FileSystem.java:479
org.apache.hadoop.fs.FileSystem.getLocal(Configuration) FileSystem.java:435
slowing down RawLocalFileSystem lookups when other threads are creating HDFS FileSystem objects at the same time.
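One possible shape of a fix is to resolve anything expensive and lock-independent (such as the shutdown timeout that currently comes from "new Configuration()") once, outside the critical section, and only touch shared state under the lock. The sketch below is illustrative, not the actual patch; the names FsCache, getOrCreate, and timeoutLoader are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hedged sketch of the double-checked cache insert with the expensive
// configuration read hoisted out of the synchronized sections.
class FsCache {
    private final Map<String, Object> map = new HashMap<>();
    // Resolved once at construction time, never under the lock;
    // stands in for getShutdownTimeout(new Configuration()).
    private final long shutdownTimeoutMs;

    FsCache(Supplier<Long> timeoutLoader) {
        this.shutdownTimeoutMs = timeoutLoader.get();
    }

    Object getOrCreate(String key, Supplier<Object> factory) {
        synchronized (this) {
            Object cached = map.get(key);
            if (cached != null) {
                return cached; // fast path: nothing expensive under the lock
            }
        }
        Object created = factory.get(); // expensive creation outside the lock
        synchronized (this) {
            Object raced = map.get(key); // re-check: another thread may have won
            if (raced != null) {
                return raced; // keep the winner, discard ours
            }
            // shutdownTimeoutMs is already resolved, so registering a shutdown
            // hook here would not need to construct a Configuration.
            map.put(key, created);
            return created;
        }
    }
}
```

With this shape, a cache hit holds the monitor only for a HashMap lookup, so concurrent creators no longer stall it behind a Configuration load.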
Issue Links
- relates to HADOOP-15679: ShutdownHookManager shutdown time needs to be configurable & extended (Resolved)