Details
- Type: Improvement
- Status: Patch Available
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.2.0
- Fix Version/s: None
- Component/s: None
Description
I wanted to investigate dfs.datanode.max.transfer.threads from hdfs-site.xml. It is described as "Specifies the maximum number of threads to use for transferring data in and out of the DN." The default value is 4096. That caught my eye because 4096 threads is a lot; I'm not sure how a system with 8-16 cores would handle such a large thread count. Intuitively, the overhead of context switching alone would be immense.
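For reference, this is the property in question as it would appear in hdfs-site.xml, with the default value and description quoted above:
{code:xml}
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
  <description>Specifies the maximum number of threads to use for
    transferring data in and out of the DN.</description>
</property>
{code}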
During my investigation, I discovered the following setup in the DataXceiverServer class (a simplified sketch follows the list):
- A peer connects to a DataNode
- A new thread is spun up to service this connection
- The thread runs to completion
- The thread dies
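In effect, the accept loop behaves like the following. This is a hypothetical, simplified sketch of the thread-per-connection pattern, not the actual DataXceiverServer code; the class name, method names, and port are invented for illustration.
{code:java}
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical sketch of the thread-per-connection pattern described above;
// names do not match the real DataXceiverServer.
public class ThreadPerConnectionServer {
  public static void main(String[] args) throws Exception {
    try (ServerSocket server = new ServerSocket(9866)) { // port for illustration
      while (true) {
        Socket peer = server.accept();          // a peer connects
        new Thread(() -> handle(peer)).start(); // a new thread is spun up
        // the thread runs the transfer to completion, then dies
      }
    }
  }

  private static void handle(Socket peer) {
    try (Socket p = peer) {
      // service the data transfer, then return; the thread exits here
    } catch (Exception e) {
      // log and drop the connection
    }
  }
}
{code}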
It would perhaps be better to use a thread pool to manage the lifecycle of the service threads and to let the DataNode reuse existing threads, avoiding the cost of creating and spinning up a thread for every connection.
In this JIRA, I have added a few things:
- Added a thread pool to the DataXceiverServer class that will, on demand, create up to dfs.datanode.max.transfer.threads threads. A thread that has completed its work stays idle for up to 60 seconds (configurable); if no new work arrives in that time, it is retired (see the sketch after this list).
- Added new methods to the Peer interface to allow for better logging and less code within each thread (DataXceiver).
- Updated the thread code (DataXceiver) with regard to its interactions with the blockReceiver instance variable.
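For the pool itself, a java.util.concurrent.ThreadPoolExecutor with a zero core size, a SynchronousQueue, and a 60-second keep-alive gives exactly this on-demand-up-to-a-maximum, retire-when-idle behavior. A minimal sketch under those assumptions; the patch's actual construction in DataXceiverServer may differ in detail:
{code:java}
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the pool behavior described above, not the patch itself.
public class XceiverPoolSketch {
  public static void main(String[] args) {
    int maxThreads = 4096;   // dfs.datanode.max.transfer.threads
    long keepAliveSecs = 60; // idle time before a worker is retired (configurable)

    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        0,                         // no core threads: workers are created on demand
        maxThreads,                // never exceed the configured maximum
        keepAliveSecs, TimeUnit.SECONDS,
        new SynchronousQueue<>()); // hand each task straight to a free or new worker

    // Each accepted connection would be submitted to the pool instead of
    // spawning a raw Thread, e.g. pool.execute(new DataXceiver(peer));
    pool.shutdown();
  }
}
{code}
With this configuration a quiet DataNode carries no idle-thread overhead after the keep-alive window, while a busy one can still ramp up to the configured maximum.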
Attachments
Issue Links
- relates to
  - HDFS-12288 Fix DataNode's xceiver count calculation (Resolved)
  - HDFS-14295 Add Threadpool for DataTransfers (Resolved)