Details
- Type: Improvement
- Status: Patch Available
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.0.0-alpha4
- Fix Version/s: None
- Component/s: None
- Environment:
  cluster: 3 nodes
  os: Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, Ubuntu 4.4.0-31-generic
  hadoop version: hadoop-3.0.0-alpha4
Description
While reading the code that instantiates the FsDatasetImpl object during DataNode startup, I found that the getVolumeMap function cannot actually collect ReplicaMap info for each fsVolume, because the fsVolume's bpSlices has not been initialized yet at that point. The relevant code is as follows:
void getVolumeMap(ReplicaMap volumeMap,
    final RamDiskReplicaTracker ramDiskReplicaMap) throws IOException {
  LOG.info("Added volume - getVolumeMap bpSlices:" + bpSlices.values().size());
  for (BlockPoolSlice s : bpSlices.values()) {
    s.getVolumeMap(volumeMap, ramDiskReplicaMap);
  }
}
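The root cause is simply iteration over an empty map. A minimal, self-contained sketch (hypothetical class and field names, not the real FsVolumeImpl) reproduces the ordering problem: a getVolumeMap-style scan contributes nothing when it runs before any block pool has been added.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model (hypothetical names) of the startup ordering described above:
// the volume is constructed and scanned first; bpSlices is populated later.
public class VolumeMapOrder {

  // Stand-in for FsVolumeImpl: bpSlices maps a block-pool id to a
  // replica count instead of a real BlockPoolSlice.
  static class Volume {
    final Map<String, Integer> bpSlices = new HashMap<>();

    // Mirrors getVolumeMap(): iterating bpSlices is a no-op
    // while the map is still empty.
    void getVolumeMap(List<Integer> volumeMap) {
      for (Integer replicas : bpSlices.values()) {
        volumeMap.add(replicas);
      }
    }

    void addBlockPool(String bpid, int replicas) {
      bpSlices.put(bpid, replicas);
    }
  }

  public static void main(String[] args) {
    Volume v = new Volume();
    List<Integer> volumeMap = new ArrayList<>();

    // Called during "instantiation", before any block pool exists:
    v.getVolumeMap(volumeMap);
    System.out.println("collected at startup: " + volumeMap.size());

    // Only after addBlockPool() does the same scan collect anything.
    v.addBlockPool("BP-1", 3);
    v.getVolumeMap(volumeMap);
    System.out.println("collected after addBlockPool: " + volumeMap.size());
  }
}
```

This matches the `bpSlices:0` lines in the log below: the scan runs, finds zero slices, and returns without populating the ReplicaMap.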
I then added some info-level logging and restarted the DataNode; the log output confirms the description above (bpSlices is empty while getVolumeMap runs):
INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: DISK, getVolumeMap begin
INFO: Added volume - getVolumeMap bpSlices:0
INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: DISK, getVolumeMap end
INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: DISK
INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap begin
INFO: Added volume - getVolumeMap bpSlices:0
INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap end
INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
In conclusion, I think the getVolumeMap call for each fsVolume is unnecessary when instantiating the FsDatasetImpl object, since bpSlices is always empty at that point and the loop is a no-op.
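The suggested improvement can be sketched in the same simplified model (again with hypothetical names; this is not the actual patch): drop the scan from construction time and populate the replica map on the addBlockPool path, where the new slice is guaranteed to be present in bpSlices.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed ordering: do not scan bpSlices in the
// constructor; build replica-map entries only when a block pool is added.
public class DeferredVolumeMap {

  static class Volume {
    final Map<String, Integer> bpSlices = new HashMap<>();
    final Map<String, Integer> replicaMap = new HashMap<>();

    Volume() {
      // No getVolumeMap() call here: bpSlices is guaranteed to be
      // empty at construction time, so the scan was a no-op anyway.
    }

    void addBlockPool(String bpid, int replicas) {
      bpSlices.put(bpid, replicas);
      // Collect replica info for the block pool just added, when the
      // data is actually available.
      getVolumeMap(bpid);
    }

    void getVolumeMap(String bpid) {
      replicaMap.put(bpid, bpSlices.get(bpid));
    }
  }

  public static void main(String[] args) {
    Volume v = new Volume();
    v.addBlockPool("BP-1", 3);
    System.out.println("replicaMap entries: " + v.replicaMap.size());
  }
}
```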