Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Fix Version/s: 0.18.0
- Component/s: None
- Labels: None
- Release Note: Simplify generation stamp upgrade by making it a local upgrade on datanodes. Deleted distributed upgrade.
Description
- The generation stamp upgrade renames blocks' meta-files so that the name contains the block generation stamp, as stated in HADOOP-2656. If a data-node has blocks that do not belong to any files, and the name-node asks the data-node to remove those blocks before or while the upgrade is running, the data-node will remove the blocks but not their meta-files, because the meta-file names are still in the old format, which the new code does not recognize. We can therefore end up with a number of garbage files that are hard to identify as unused and that the system will never remove automatically. I think this should ultimately be handled by the upgrade code, but it may be right to fix HADOOP-3002 for the 0.18 release, which would avoid scheduling block removal while the name-node is in safe mode.
- I was not able to get the upgrade -force option to work. This option lets the name-node proceed with a distributed upgrade even if the data-nodes are not able to complete their local upgrades. Did we test this feature at all for the generation stamp upgrade?
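For illustration only, here is a minimal Java sketch (not the actual DataNode code; the class and method names are hypothetical) of a block-deletion path that matches both meta-file naming formats. It assumes the pre-upgrade name looks like blk_&lt;blockId&gt;.meta and the post-upgrade name, per HADOOP-2656, looks like blk_&lt;blockId&gt;_&lt;generationStamp&gt;.meta; the point is that a deletion path matching only the new pattern would leave old-format meta-files behind as garbage.

```java
import java.io.File;
import java.io.FilenameFilter;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Illustrative sketch: locate the meta-files belonging to a block,
 * accepting both the old and the new naming formats, so that deleting
 * a block also removes a meta-file still named in the old style.
 */
public class MetaFileNames {

  // New, generation-stamp-qualified name: blk_<blockId>_<genStamp>.meta
  private static final Pattern NEW_FORMAT =
      Pattern.compile("blk_(-?\\d+)_(\\d+)\\.meta");

  // Old, pre-upgrade name with no generation stamp: blk_<blockId>.meta
  private static final Pattern OLD_FORMAT =
      Pattern.compile("blk_(-?\\d+)\\.meta");

  /** Returns the meta-files for the given block id in either format. */
  static File[] findMetaFiles(File blockDir, final long blockId) {
    return blockDir.listFiles(new FilenameFilter() {
      public boolean accept(File dir, String name) {
        Matcher m = NEW_FORMAT.matcher(name);
        if (m.matches() && Long.parseLong(m.group(1)) == blockId) {
          return true;
        }
        m = OLD_FORMAT.matcher(name);
        // Without this clause an old-format meta-file is never matched,
        // so removing the block data file leaves the meta-file orphaned.
        return m.matches() && Long.parseLong(m.group(1)) == blockId;
      }
    });
  }

  public static void main(String[] args) {
    File dir = new File(args[0]);
    long blockId = Long.parseLong(args[1]);
    for (File f : findMetaFiles(dir, blockId)) {
      System.out.println("would delete: " + f.getName());
    }
  }
}
```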
Issue Links
- relates to HADOOP-3002: HDFS should not remove blocks while in safemode. (Closed)
One workaround is as follows:
1. Shut down the namenode and then restart it (with the existing release). This will cause the datanodes to send block reports and delete blocks that are not in the namespace.
2. Shut down the cluster. Install the new software on all nodes. Restart with the -upgrade option. This will not have to delete blocks because the orphaned blocks were already deleted in Step 1.
If this workaround sounds feasible, then we can remove this issue from the 0.18 Blocker list.