Hadoop Distributed Data Store / HDDS-3869

Use different column families for datanode block and metadata



      Currently datanodes place all of their data under the default column family in RocksDB. This differs from OM and SCM, which organize their data into different column families based on its type. This feature will first move the datanode code off the database utilities in the hadoop.hdds.utils package (which have no column family support) and onto the newer utilities used by OM and SCM in the hadoop.hdds.utils.db package (which do). The datanode will then divide its data into three column families:

      1. block_data: String keys (block id with optional prefix) map to BlockData objects.
      2. metadata: String keys (name of metadata field) map to Long objects.
      3. deleted_blocks: String keys (block id with optional prefix) map to ChunkInfo lists (the lists of chunks corresponding to the deleted block).
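The three-family layout above can be sketched as follows. This is a minimal illustration using in-memory maps in place of RocksDB; the column family names match the proposal, but the class and method names are hypothetical, not the actual Ozone API.

```java
import java.util.Map;
import java.util.TreeMap;

public class DatanodeSchemaV2Sketch {
    // Column families proposed for schema version 2 (names from the proposal).
    static final String BLOCK_DATA = "block_data";         // block id -> BlockData
    static final String METADATA = "metadata";             // metadata field name -> Long
    static final String DELETED_BLOCKS = "deleted_blocks"; // block id -> ChunkInfo list

    // Stand-in for a RocksDB instance: one sorted key/value table per column family.
    private final Map<String, Map<String, Object>> tables = new TreeMap<>();

    DatanodeSchemaV2Sketch() {
        tables.put(BLOCK_DATA, new TreeMap<>());
        tables.put(METADATA, new TreeMap<>());
        tables.put(DELETED_BLOCKS, new TreeMap<>());
    }

    void put(String family, String key, Object value) {
        tables.get(family).put(key, value);
    }

    Object get(String family, String key) {
        return tables.get(family).get(key);
    }

    public static void main(String[] args) {
        DatanodeSchemaV2Sketch db = new DatanodeSchemaV2Sketch();
        // Aggregate counters live in the metadata family, keyed by field name...
        db.put(METADATA, "blockCount", 2L);
        // ...while per-block records live in block_data, keyed by block id.
        db.put(BLOCK_DATA, "1001", "BlockData{id=1001}");
        System.out.println(db.get(METADATA, "blockCount"));
        System.out.println(db.get(BLOCK_DATA, "1001"));
    }
}
```

Keeping each data type in its own family means a scan of block records never has to filter out metadata or deletion entries, which is the same separation OM and SCM already rely on.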

      A new field, 'schemaVersion', will be added to container files to indicate whether a container was created with the original schema version 1, where everything was in the default column family, or with this new schema version 2. Code should remain able to process older schema versions for backwards compatibility.
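One way to preserve backwards compatibility is to treat a missing schemaVersion as version 1, since pre-upgrade container files will not carry the field. The sketch below is illustrative only; the field lookup and helper names are assumptions, not the actual .container file parser.

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaVersionSketch {
    static final int SCHEMA_V1 = 1; // everything in the default column family
    static final int SCHEMA_V2 = 2; // block_data / metadata / deleted_blocks

    // Old container files predate the field, so absence means schema version 1.
    static int schemaVersionOf(Map<String, String> containerFields) {
        return Integer.parseInt(
            containerFields.getOrDefault("schemaVersion", String.valueOf(SCHEMA_V1)));
    }

    public static void main(String[] args) {
        Map<String, String> oldContainer = new HashMap<>();   // no schemaVersion field
        Map<String, String> newContainer = new HashMap<>();
        newContainer.put("schemaVersion", "2");
        System.out.println(schemaVersionOf(oldContainer));
        System.out.println(schemaVersionOf(newContainer));
    }
}
```

Readers can then branch on the returned version to pick either the single-family code path or the new three-family layout.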


    • Assignee: Ethan Rose (erose)