Description
File: /hadoop-1.0.1/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
from line 205:
if (children == null || children.length == 0) {
  children = new FSDir[maxBlocksPerDir];
  for (int idx = 0; idx < maxBlocksPerDir; idx++) {
    children[idx] = new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX+idx));
  }
}
If the FSDir constructor fails (e.g. the disk is full, so mkdir fails), the partially-filled children array is still left in use.
Then, when a write comes in (after I ran the balancer) and an FSDir is chosen at
line 192:
File file = children[idx].addBlock(b, src, false, resetIdx);
it causes an exception like this:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:158)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.addBlock(FSDataset.java:495)
------------------------------------------------
Should it be like this?
if (children == null || children.length == 0) {
  List<FSDir> childrenList = new ArrayList<FSDir>();
  for (int idx = 0; idx < maxBlocksPerDir; idx++) {
    try {
      childrenList.add(new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX+idx)));
    } catch (Exception e) {
      // skip the subdir that could not be created
    }
  }
  // convert once, after the loop, so children never contains null slots
  children = childrenList.toArray(new FSDir[childrenList.size()]);
}
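The difference between the two patterns can be shown with a standalone sketch. `SubDir`, `ChildrenInit`, and the `failCreate` flag below are hypothetical stand-ins for `FSDir` and a mkdir failure on a full disk; they are not Hadoop code. The original pattern pre-sizes the array, so a constructor failure leaves null slots behind; the list-based pattern only keeps entries that were created successfully.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for FSDir: the constructor throws when the
// directory cannot be created (analogous to mkdir failing on a full disk).
class SubDir {
    final String name;
    SubDir(String name, boolean failCreate) throws IOException {
        if (failCreate) {
            throw new IOException("mkdir failed for " + name);
        }
        this.name = name;
    }
}

public class ChildrenInit {
    // Original pattern: pre-sized array; a constructor failure mid-loop
    // leaves a null slot that later causes NullPointerException.
    static SubDir[] initWithArray(int n, int failAt) {
        SubDir[] children = new SubDir[n];
        for (int idx = 0; idx < n; idx++) {
            try {
                children[idx] = new SubDir("subdir" + idx, idx == failAt);
            } catch (IOException e) {
                // swallowed: slot idx stays null
            }
        }
        return children;
    }

    // Proposed pattern: collect only successfully created entries, then
    // convert once after the loop, so the array never contains null.
    static SubDir[] initWithList(int n, int failAt) {
        List<SubDir> list = new ArrayList<SubDir>();
        for (int idx = 0; idx < n; idx++) {
            try {
                list.add(new SubDir("subdir" + idx, idx == failAt));
            } catch (IOException e) {
                // skip the failed subdir instead of leaving a null hole
            }
        }
        return list.toArray(new SubDir[0]);
    }

    public static void main(String[] args) {
        SubDir[] withNull = initWithArray(4, 2);
        SubDir[] withoutNull = initWithList(4, 2);
        System.out.println("array slot 2: " + withNull[2]);
        System.out.println("list-based length: " + withoutNull.length);
    }
}
```

Calling a method on `withNull[2]` (as `children[idx].addBlock(...)` does at line 192) is exactly what throws the NullPointerException above; the list-based array simply has one fewer element instead.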
----------------------------
A bad consequence: in my cluster, this datanode's block count dropped to 0.