  Hadoop HDFS / HDFS-1407

Use Block in DataTransferProtocol


    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.22.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Currently DataTransferProtocol has methods such as:

          public static void opReadBlock(DataOutputStream out, long blockId,
              long blockGs, long blockOffset, long blockLen, String clientName,
              Token<BlockTokenIdentifier> blockToken) throws IOException;
      

      The client has to pass the individual elements that make up block identification, such as blockId and generation stamp. I propose methods of the following form:

          public static void opReadBlock(DataOutputStream out, Block block,
              long blockOffset, long blockLen, String clientName,
              Token<BlockTokenIdentifier> blockToken) throws IOException;
      

      With this change, the client need not understand the internals of Block: it receives a Block over RPC and passes it along in DataTransferProtocol. This helps keep Block opaque to the client.
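
      A minimal client-side sketch of how a call site changes under this proposal is shown below. The wrapper class, method, and variable names are illustrative only, the opReadBlock qualifier is written as in the signatures above, and the import paths assume the 0.21/0.22 package layout:

          import java.io.DataOutputStream;
          import java.io.IOException;

          import org.apache.hadoop.hdfs.protocol.Block;
          import org.apache.hadoop.hdfs.protocol.DataTransferProtocol;
          import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
          import org.apache.hadoop.security.token.Token;

          // Hypothetical call site; class, method, and variable names are illustrative.
          class ReadBlockCallSite {
            void requestRead(DataOutputStream out, Block block, long blockOffset,
                long blockLen, String clientName,
                Token<BlockTokenIdentifier> blockToken) throws IOException {
              // Current API: the caller unpacks the Block's internals itself.
              // DataTransferProtocol.opReadBlock(out, block.getBlockId(),
              //     block.getGenerationStamp(), blockOffset, blockLen,
              //     clientName, blockToken);

              // Proposed API: the Block received over RPC is forwarded as-is,
              // so its fields stay opaque to the client.
              DataTransferProtocol.opReadBlock(out, block, blockOffset, blockLen,
                  clientName, blockToken);
            }
          }

      The intent, per the description, is that the Block's fields are unpacked inside DataTransferProtocol rather than at every call site.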

        Attachments

        1. HDFS-1400.trunk.patch (40 kB, Suresh Srinivas)
        2. HDFS-1400.trunk.patch (40 kB, Suresh Srinivas)


              People

              • Assignee: sureshms Suresh Srinivas
              • Reporter: sureshms Suresh Srinivas
              • Votes: 0
              • Watchers: 3

                Dates

                • Created:
                • Updated:
                • Resolved: