Details

Type: Bug
Status: Closed
Priority: Minor
Resolution: Fixed
Affects Version/s: 2.0.0-alpha
Hadoop Flags: Reviewed
Description
When upgrading from 1.x to 2.0.0, the SecondaryNameNode can fail to start up:
2012-06-16 09:52:33,812 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -40 namespaceID = 64415959 cTime = 1339813974990 ; clusterId = CID-07a82b97-8d04-4fdd-b3a1-f40650163245 ; blockpoolId = BP-1792677198-172.29.121.67-1339813967723.
Expecting respectively: -19; 64415959; 0; ; .
	at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:120)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:454)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:334)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:301)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:438)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:297)
	at java.lang.Thread.run(Thread.java:662)
The error check we're hitting came from HDFS-1073, and it's intended to verify that we're connecting to the correct NN. But the check is too strict and considers "different metadata version" to be the same as "different clusterID".
I believe the check in doCheckpoint simply needs to explicitly detect and handle the upgrade case.
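For illustration only, here is a standalone sketch of what "handle the upgrade case" could look like; the class and method names are hypothetical and this is not the actual CheckpointSignature code. The idea is that the namespaceID is what actually identifies the NN across an upgrade, so a layout-version mismatch with a matching namespaceID should be treated as an upgrade (and the 2NN's storage reinitialized) rather than as a fatal inconsistency.

import java.io.IOException;

// Hypothetical, self-contained sketch; field names mirror the log above.
public class CheckpointSignatureSketch {
    final int layoutVersion;   // e.g. -19 before the upgrade, -40 after
    final int namespaceID;     // stable across an upgrade, e.g. 64415959
    final String clusterID;    // empty in pre-upgrade (1.x) storage
    final String blockpoolID;  // empty in pre-upgrade (1.x) storage

    public CheckpointSignatureSketch(int layoutVersion, int namespaceID,
                                     String clusterID, String blockpoolID) {
        this.layoutVersion = layoutVersion;
        this.namespaceID = namespaceID;
        this.clusterID = clusterID;
        this.blockpoolID = blockpoolID;
    }

    // Same cluster/namespace? Pre-upgrade storage has no clusterID/blockpoolID,
    // so only compare those fields when the local copy actually recorded them.
    boolean sameCluster(CheckpointSignatureSketch nn) {
        return namespaceID == nn.namespaceID
            && (clusterID.isEmpty() || clusterID.equals(nn.clusterID))
            && (blockpoolID.isEmpty() || blockpoolID.equals(nn.blockpoolID));
    }

    // Upgrade case: same cluster, different metadata (layout) version.
    boolean namenodeUpgraded(CheckpointSignatureSketch nn) {
        return sameCluster(nn) && layoutVersion != nn.layoutVersion;
    }

    // 'this' is the signature in the 2NN's local storage; 'nn' is what the NN reports.
    void validate(CheckpointSignatureSketch nn) throws IOException {
        if (!sameCluster(nn)) {
            throw new IOException("Inconsistent checkpoint fields: different namespace/cluster");
        }
        if (namenodeUpgraded(nn)) {
            // Instead of failing, discard the stale local storage and
            // re-download the image/edits under the new layout version.
            System.out.println("NN layout version changed " + layoutVersion
                + " -> " + nn.layoutVersion + "; reinitializing checkpoint storage");
        }
    }

    public static void main(String[] args) throws IOException {
        CheckpointSignatureSketch local =
            new CheckpointSignatureSketch(-19, 64415959, "", "");
        CheckpointSignatureSketch fromNN = new CheckpointSignatureSketch(
            -40, 64415959,
            "CID-07a82b97-8d04-4fdd-b3a1-f40650163245",
            "BP-1792677198-172.29.121.67-1339813967723");
        local.validate(fromNN);  // prints the upgrade message instead of throwing
    }
}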