Description
When a DataNode transfers a block, it spins up a new thread for each transfer (this happens at two places in the code, described below). Instead, the transfers should be submitted to a CachedThreadPool so that when a thread completes a transfer, it can be re-used for another transfer. This should save the resources spent on creating and spinning up transfer threads.
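As a rough sketch of the idea (the field and method names here are illustrative only, not the actual DataNode code in the patch), the per-transfer new Thread(...).start() call would be replaced by submitting the existing DataTransfer Runnable to a shared cached pool:

{code:java}
// Sketch only: illustrative names, not the real DataNode members.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class DataTransferPoolSketch {
  // Unbounded pool that re-uses idle threads and drops threads
  // that have been idle for 60 seconds.
  private final ExecutorService transferPool = Executors.newCachedThreadPool();

  void transferBlock(Runnable dataTransfer) {
    // Instead of: new Thread(dataTransfer).start();
    transferPool.submit(dataTransfer);
  }
}
{code}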
One thing I'll point out that's a bit off, and which I address in this patch:
There are two places in the code where a DataTransfer thread is started. In one place, it's started in the default thread group. In the other, it's started in the dataXceiverServer thread group.
I do not think it's correct to include either of these threads in the dataXceiverServer thread group. Anything submitted to the dataXceiverServer should probably be tied to the dfs.datanode.max.transfer.threads configuration, and neither of these code paths is. Instead, both should be submitted to the same thread pool, which has its own thread group (probably the default thread group, unless someone suggests otherwise); that is what I have included in this patch.
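One way that could look (again only a sketch under the assumptions above, not the patch itself) is to give the pool its own ThreadFactory, so the pooled transfer threads get a consistent name and land in the creating thread's default group rather than the dataXceiverServer group:

{code:java}
// Sketch: a ThreadFactory so pooled transfer threads share the default thread
// group and a recognizable name, instead of the dataXceiverServer group.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

class TransferThreadFactory implements ThreadFactory {
  private final AtomicInteger count = new AtomicInteger(0);

  @Override
  public Thread newThread(Runnable r) {
    // No ThreadGroup is passed, so the thread inherits the creating
    // thread's (default) group rather than the dataXceiverServer group.
    Thread t = new Thread(r, "DataTransfer-" + count.incrementAndGet());
    t.setDaemon(true);
    return t;
  }
}

// Usage:
// ExecutorService pool = Executors.newCachedThreadPool(new TransferThreadFactory());
{code}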
Attachments
Issue Links
- is related to: HDFS-14292 Introduce Java ExecutorService to DataXceiverServer (Patch Available)