Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 1.1.0
- Labels: None
Description
During SCM initialization, SCM generates a random clusterID and scmUuid, which are sent to each Datanode in response to its Version request.
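For illustration, a minimal sketch of this idea (the class and method names below are hypothetical, not Ozone's actual API):
{code:java}
import java.util.UUID;

// Hypothetical sketch of SCM init: both identifiers are random per SCM
// instance and are later returned to Datanodes in the Version response.
final class ScmIdentifiers {
  final String clusterId;
  final String scmUuid;

  ScmIdentifiers(String clusterId, String scmUuid) {
    this.clusterId = clusterId;
    this.scmUuid = scmUuid;
  }

  // Called once during 'scm --init'.
  static ScmIdentifiers generate() {
    return new ScmIdentifiers(
        "CID-" + UUID.randomUUID(),     // clusterID ("CID-" prefix assumed)
        UUID.randomUUID().toString());  // scmUuid
  }
}
{code}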
On the Datanode, the disk layout per volume is as follows:
{code}
../hdds/VERSION
../hdds/<<scmUuid>>/current/<<containerDir>>/<<containerID>>/metadata
../hdds/<<scmUuid>>/current/<<containerDir>>/<<containerID>>/<<dataDir>>
{code}
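The point to note is that the scmUuid handed out by SCM is baked into the on-disk path, which is why every Datanode must see a single, stable value. A hypothetical helper (illustrative names, not Ozone's actual classes) makes this explicit:
{code:java}
import java.nio.file.Path;

// Hypothetical sketch: builds the metadata directory for a container
// under one volume, following the layout shown above.
final class VolumeLayout {
  static Path containerMetadataDir(Path volumeRoot, String scmUuid,
      String containerDir, long containerId) {
    return volumeRoot.resolve("hdds")
        .resolve(scmUuid)                      // one subtree per SCM UUID
        .resolve("current")
        .resolve(containerDir)
        .resolve(Long.toString(containerId))
        .resolve("metadata");
  }
}
{code}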
The metadata file for a typical container is as follows:
{code}
[ozoneadmin@tdw-9-179-144-104 /data6/hdds/hdds/326a5fe1-e63c-44b6-a57e-2f858fe4eaa7/current/containerDir0/262574/metadata]$ cat 262574.container
!<KeyValueContainerData>
checksum: f1473bc5a9f9fa307edf2040edf0fb5ac40912a5bd8610e2d42fa26693cc4b8f
chunksPath: /data6/hdds/hdds/326a5fe1-e63c-44b6-a57e-2f858fe4eaa7/current/containerDir0/262574/chunks
containerDBType: RocksDB
containerID: 262574
containerType: KeyValueContainer
layOutVersion: 2
maxSize: 5368709120
metadata: {}
metadataPath: /data6/hdds/hdds/326a5fe1-e63c-44b6-a57e-2f858fe4eaa7/current/containerDir0/262574/metadata
originNodeId: 26395209-1233-439a-9020-0ad8d6a8248e
originPipelineId: cf7bd510-ff60-4d35-9746-7463d30e13af
state: CLOSED
{code}
A typical SCM group consists of 3 SCMs. If each of them has its own scmUuid and clusterID, it will cause chaos on the Datanode side.
For now, we simply hard-code the clusterID and scmUuid on the HDDS-2823 branch so that we can set up an SCM Raft group. We need to figure out a proper solution before merging HDDS-2823 back to master; one possible direction is sketched below.
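One possible direction (a sketch only, reusing the hypothetical ScmIdentifiers class from above; ClusterIdClient is an assumed abstraction, not an existing Ozone interface): let a single "primordial" SCM generate the identifiers once, and have the other members of the Raft group fetch the shared clusterID during bootstrap instead of generating their own.
{code:java}
import java.util.UUID;

// Assumed abstraction for asking an existing SCM for its clusterID.
interface ClusterIdClient {
  String fetchClusterId();
}

final class ScmBootstrap {
  // primordial: this SCM creates the cluster; otherwise join an existing one.
  static ScmIdentifiers initOrBootstrap(boolean primordial,
      ClusterIdClient peer) {
    if (primordial) {
      return ScmIdentifiers.generate();          // generate IDs exactly once
    }
    // Reuse the cluster-wide ID; whether scmUuid should also be shared or
    // stay per-node is exactly what this issue needs to settle.
    return new ScmIdentifiers(peer.fetchClusterId(),
        UUID.randomUUID().toString());
  }
}
{code}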
Attachments
Issue Links
- is fixed by: HDDS-5432 Enable downgrade testing after 1.1.0 release (Resolved)
- links to