Hadoop HDFS / HDFS-1407

Use Block in DataTransferProtocol


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.22.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Currently DataTransferProtocol has methods such as:

          public static void opReadBlock(DataOutputStream out, long blockId,
              long blockGs, long blockOffset, long blockLen, String clientName,
              Token<BlockTokenIdentifier> blockToken) throws IOException;
      

      The client has to pass the individual elements that make up the block identification, such as the blockId and the generation stamp. I propose methods of the following form:

          public static void opReadBlock(DataOutputStream out, Block block,
              long blockOffset, long blockLen, String clientName,
              Token<BlockTokenIdentifier> blockToken) throws IOException;
      

      With this, the client need not understand the internals of Block: it receives a Block over RPC and sends it on in DataTransferProtocol. This helps make Block opaque to the client.
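
      As a rough before/after sketch of a client call site, following the two opReadBlock signatures shown above (this is not the actual HDFS client code; the LocatedBlock holder and the exact import paths are assumptions for illustration):

          import java.io.DataOutputStream;
          import java.io.IOException;

          import org.apache.hadoop.hdfs.protocol.Block;
          import org.apache.hadoop.hdfs.protocol.DataTransferProtocol;
          import org.apache.hadoop.hdfs.protocol.LocatedBlock;
          import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
          import org.apache.hadoop.security.token.Token;

          // Illustrative sketch only: a caller holding a LocatedBlock (received
          // over RPC from the NameNode) issues an OP_READ_BLOCK request.
          class ReadBlockCallSite {

            // Before: the caller unpacks the block identity into individual fields.
            static void readBefore(DataOutputStream out, LocatedBlock lb, long offset,
                long length, String clientName,
                Token<BlockTokenIdentifier> token) throws IOException {
              Block b = lb.getBlock();
              DataTransferProtocol.opReadBlock(out, b.getBlockId(),
                  b.getGenerationStamp(), offset, length, clientName, token);
            }

            // After: the Block received over RPC is forwarded as an opaque object.
            static void readAfter(DataOutputStream out, LocatedBlock lb, long offset,
                long length, String clientName,
                Token<BlockTokenIdentifier> token) throws IOException {
              DataTransferProtocol.opReadBlock(out, lb.getBlock(), offset, length,
                  clientName, token);
            }
          }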

      Attachments

        1. HDFS-1400.trunk.patch
          40 kB
          Suresh Srinivas
        2. HDFS-1400.trunk.patch
          40 kB
          Suresh Srinivas

        Issue Links

        Activity


          People

            Assignee: Suresh Srinivas (sureshms)
            Reporter: Suresh Srinivas (sureshms)
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:
