Details
Type: Sub-task
Status: Resolved
Priority: Major
Resolution: Fixed
Description
Under a DbVolume we have one subdirectory for each data volume, with the StorageID (UUID) of that data volume as the name of the subdirectory.
When extra SSDs are used:
- Should create a DbVolume to manage each SSD for bad-disk checking.
- Should create a dedicated directory structure including the clusterID, so it is isolated from other data on the disk, e.g. /ssd1/db/CID-<clusterID>.
- Should have a configuration item: "hdds.datanode.container.db.dir".
- Each HddsVolume should be mapped to a dedicated subdirectory named after its StorageID, e.g. DS-b559933f-9de3-4da4-a634-07d3a94f7438/ (see the path-resolution sketch after the tree below).
- The container metafile (e.g. 1.container under metadata) does not have to record the dbPath, since the DB is now bound to the HddsVolume and we already know which HddsVolume the container resides on.
/ssd1/
`-- db
    |-- VERSION
    `-- CID-4886ca17-9739-4bc7-8e41-d97bb1175a76
        `-- DS-b559933f-9de3-4da4-a634-07d3a94f7438
            |-- container.db        ← RocksDB instance
            `-- db.checkpoints
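A minimal Java sketch of how the per-volume DB path could be resolved under this layout; the class and method names (DbDirResolver, resolveDbDir) are illustrative assumptions, not Ozone's actual API.

import java.io.File;

// Illustrative sketch only: composes <db.dir>/CID-<clusterID>/<StorageID>
// for a given HddsVolume, matching the tree above. Names are assumptions.
public final class DbDirResolver {

  // Configuration key introduced by this design.
  public static final String DB_DIR_KEY = "hdds.datanode.container.db.dir";

  private DbDirResolver() { }

  // dbRoot:    directory configured via hdds.datanode.container.db.dir, e.g. /ssd1/db
  // clusterId: cluster UUID (the directory name gets the "CID-" prefix)
  // storageId: StorageID of the HddsVolume, e.g. DS-b559933f-9de3-4da4-a634-07d3a94f7438
  public static File resolveDbDir(File dbRoot, String clusterId, String storageId) {
    return new File(new File(dbRoot, "CID-" + clusterId), storageId);
  }

  // The RocksDB instance for this volume lives in container.db below that directory.
  public static File containerDbPath(File dbRoot, String clusterId, String storageId) {
    return new File(resolveDbDir(dbRoot, clusterId, storageId), "container.db");
  }
}

For the tree above, resolveDbDir(new File("/ssd1/db"), "4886ca17-9739-4bc7-8e41-d97bb1175a76", "DS-b559933f-9de3-4da4-a634-07d3a94f7438") yields the per-volume directory that holds container.db.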
When no SSD is available, use the same disk as the data by default:
- Create the container DB on the same disk where the block files reside.
- No DbVolume is created, i.e. hddsVolume.dbVolume = null (see the fallback sketch after the tree below).
- The configuration item is not specified.
- The DB could easily be migrated to a newly added SSD later, e.g.
mv /data1/hdds/CID-<clusterID>/DS-<StorageID> /ssd1/db/CID-<clusterID>/
/data1
`-- hdds
    |-- VERSION
    `-- CID-4886ca17-9739-4bc7-8e41-d97bb1175a76
        |-- current
        |   `-- containerDir0
        `-- DS-b559933f-9de3-4da4-a634-07d3a94f7438
            |-- container.db        ← RocksDB instance
            `-- db.checkpoints
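A hedged sketch of the fallback described above, assuming the hddsVolume.dbVolume field mentioned earlier: when no DbVolume exists (dbVolume == null), the DB parent directory is placed on the HddsVolume itself, which produces the /data1/hdds layout in the tree. All names here are illustrative, not the real implementation.

import java.io.File;

// Sketch of choosing the parent directory for a volume's container.db.
// Falls back to the data disk when no dedicated DbVolume is configured.
public final class DbParentChooser {

  private DbParentChooser() { }

  // dbVolumeDir:   root of the dedicated DbVolume (e.g. /ssd1/db),
  //                or null when no SSD / DbVolume is available
  // hddsVolumeDir: root of the data volume (e.g. /data1/hdds)
  public static File chooseDbParentDir(File dbVolumeDir, File hddsVolumeDir,
      String clusterId, String storageId) {
    // hddsVolume.dbVolume == null: fall back to the data disk itself,
    // i.e. /data1/hdds/CID-<clusterID>/DS-<StorageID>
    File base = (dbVolumeDir != null) ? dbVolumeDir : hddsVolumeDir;
    return new File(new File(base, "CID-" + clusterId), storageId);
  }
}

Because both layouts share the CID-<clusterID>/DS-<StorageID> suffix, migration to a newly added SSD reduces to the mv command above plus pointing hdds.datanode.container.db.dir at the new disk.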