Hadoop Distributed Data Store / HDDS-2713

Decouple client write size and datanode chunk size


    Details

    • Type: Improvement
    • Status: In Progress
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Ozone Datanode, SCM Client
    • Labels: None
    • Target Version/s:

      Description

      Currently the Datanode creates chunk files exactly as the chunks are received from the HDDS client. This creates a conflict between requirements: the client would like to use less memory for buffering, while the datanode needs to avoid small files.

      The goal of this task is to decouple the client and server write sizes so that the client can send data in smaller increments without affecting how the datanode stores chunks. A minimal illustrative sketch follows.
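
      The sketch below is an illustration only, showing one way the datanode side could accumulate small client writes into full-size chunks before writing them to disk. The class name ChunkAccumulator, the chunkSizeBytes parameter, and the OutputStream-based flushing are assumptions made for this sketch; they are not the actual Ozone ChunkManager API.

      import java.io.IOException;
      import java.io.OutputStream;
      import java.nio.ByteBuffer;

      /**
       * Hypothetical datanode-side accumulator: small client writes are
       * buffered in memory and only flushed to the chunk file once a
       * configured chunk size has been reached, so the on-disk chunk size
       * no longer depends on the increment size chosen by the client.
       */
      public class ChunkAccumulator {

        private final OutputStream chunkFile; // destination chunk file stream
        private final ByteBuffer buffer;      // holds data until a full chunk is ready

        public ChunkAccumulator(OutputStream chunkFile, int chunkSizeBytes) {
          this.chunkFile = chunkFile;
          this.buffer = ByteBuffer.allocate(chunkSizeBytes);
        }

        /** Accepts a client write of any size, flushing whole chunks as they fill up. */
        public void write(byte[] data, int offset, int length) throws IOException {
          while (length > 0) {
            int toCopy = Math.min(length, buffer.remaining());
            buffer.put(data, offset, toCopy);
            offset += toCopy;
            length -= toCopy;
            if (!buffer.hasRemaining()) {
              flushChunk(); // buffer full: write one chunk-sized block
            }
          }
        }

        /** Writes any buffered remainder, e.g. when the block is closed. */
        public void close() throws IOException {
          if (buffer.position() > 0) {
            flushChunk();
          }
          chunkFile.close();
        }

        private void flushChunk() throws IOException {
          chunkFile.write(buffer.array(), 0, buffer.position());
          buffer.clear();
        }
      }

      With buffering of this kind on the datanode, the client-side stream buffer could be sized independently (and smaller), since the datanode would no longer persist each client increment as its own chunk file.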

              People

              • Assignee:
                adoroszlai Attila Doroszlai
              • Reporter:
                adoroszlai Attila Doroszlai
              • Votes:
                0
              • Watchers:
                2
