Hadoop HDFS
HDFS-2320

Make merged protocol changes from 0.20-append to 0.20-security compatible with previous releases.

    Details

    • Hadoop Flags:
      Reviewed

      Description

      0.20-append changes have been merged to 0.20-security. The merge has changes to version numbers in several protocols. This jira makes the protocol changes compatible with older releases, allowing clients running an older version to talk to a server running the 205 version, and clients running the 205 version to talk to older servers running 203 or 204.
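
      For context, Hadoop RPC in the 0.20 line refuses a connection when the client and server disagree on a protocol's version constant, which is why the merged version bumps break cross-version clients. A minimal sketch of that handshake, assuming simplified names (ProtocolVersionCheck and checkVersion are illustrative, not actual Hadoop classes; only the 61-to-63 values come from this jira):

          import java.io.IOException;

          // Sketch of the version handshake performed when an RPC proxy is created:
          // the client reports the versionID it was compiled against and the server
          // compares it with its own constant.
          class ProtocolVersionCheck {

              // stands in for ClientProtocol.versionID on a 205 server after the merge
              static final long SERVER_VERSION = 63L;

              static void checkVersion(long clientVersion) throws IOException {
                  if (clientVersion != SERVER_VERSION) {
                      // in real Hadoop this surfaces as an RPC version mismatch error
                      throw new IOException("Protocol version mismatch: client="
                          + clientVersion + ", server=" + SERVER_VERSION);
                  }
              }

              public static void main(String[] args) {
                  try {
                      checkVersion(61L); // a 203/204 client (version 61) against the merged server
                  } catch (IOException e) {
                      System.out.println(e.getMessage()); // this is the failure old clients hit
                  }
              }
          }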

      Attachments

      1. HDFS-2320.patch
        8 kB
        Suresh Srinivas
      2. HDFS-2320.patch
        9 kB
        Suresh Srinivas

        Activity

        Matt Foley added a comment -

        Closed upon release of 0.20.205.0

        Suresh Srinivas added a comment -

        I committed this patch. It is only relevant to 0.20.205, due to the merge of the 0.20-append changes, and is not relevant to trunk, which has a new implementation of append.

        Jitendra Nath Pandey added a comment -

        +1. The patch looks good to me.

        Suresh Srinivas added a comment -

        Updated patch to remove an unnecessary file change.

        I also tested this by killing datanodes while running TestDFSIO.

        Suresh Srinivas added a comment -

        I did the following manual tests with TestDFSIO write and read:

        1. From a 204 client against a 205 server with my change.
        2. From a 205 client against a 204 server.

        Both tests passed. Without my patch I see a version mismatch failure.

        Suresh Srinivas added a comment -

        I plan to do that testing manually. Thanks for the validation, Todd.

        Todd Lipcon added a comment -

        Your logic seems reasonable. Have you tested this somehow? e.g. running TestDFSIO with the new client pointed at the old cluster and killing a node or two while it's going? And vice versa?

        Suresh Srinivas added a comment -

        0.20-append changes

        ClientProtocol.java - version changed from 61 to 63

        1. Added boolean ClientProtocol#recoverLease(String src, String clientName). DistributedFileSystem#recoverLease() exposes this method to applications.
        2. Added LocatedBlock addBlock(String src, String clientName, DatanodeInfo[] excludedNodes).

        Compatibility

        1. recoverLease() - Currently only used by HBase, which checks for the existence of this method before calling it. No backward compatibility issues.
        2. addBlock() - DFSClient tracks the support for this method using a flag, serverSupportsHDFS630. The flag is set to false on getting an exception from the server, as sketched below. No backward compatibility issue.
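
        The probe-and-fall-back pattern behind that flag looks roughly like the sketch below. Only the serverSupportsHDFS630 name and the two addBlock() signatures come from this jira; the surrounding field and method layout is illustrative, not the actual DFSClient code.

            import java.io.IOException;

            import org.apache.hadoop.hdfs.protocol.ClientProtocol;
            import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
            import org.apache.hadoop.hdfs.protocol.LocatedBlock;
            import org.apache.hadoop.ipc.RemoteException;

            // Sketch: try the newer addBlock() overload once; if the server rejects it,
            // remember that and use the older signature from then on.
            class AddBlockFallbackSketch {
                private final ClientProtocol namenode;
                private volatile boolean serverSupportsHDFS630 = true;

                AddBlockFallbackSketch(ClientProtocol namenode) {
                    this.namenode = namenode;
                }

                LocatedBlock addBlock(String src, String clientName,
                                      DatanodeInfo[] excludedNodes) throws IOException {
                    if (serverSupportsHDFS630 && excludedNodes != null) {
                        try {
                            // overload added by the 0.20-append merge
                            return namenode.addBlock(src, clientName, excludedNodes);
                        } catch (RemoteException re) {
                            // an older (203/204) namenode does not know this signature;
                            // real code would check for an 'unknown method' error here
                            serverSupportsHDFS630 = false;
                        }
                    }
                    // older overload understood by all servers
                    return namenode.addBlock(src, clientName);
                }
            }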

        DatanodeProtocol.java - version changed from 25 to 26

        Changed method nextGenerationStamp(Block) -> nextGenerationStamp(Block, boolean)

        Compatibility
        This method is used only by the Datanode. Since the whole cluster is upgraded together, datanodes run the newer version of the protocol. This does not affect client compatibility, as the client does not use this RPC call.

        ClientDatanodeProtocol.java - version changed from 4 to 5

        Added a new method getBlockInfo(Block block) used by the client.

        Compatibility
        When a new client talks to an old server to read a file that is being written to, this results in debug logs that print this exception.

        Required change
        Add a flag in DFSClient to detect that the server does not support this method and handle it accordingly, as sketched below. This avoids making a call all the way to the server, catching an exception, and printing it on every attempt.
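
        A minimal sketch of that guard, assuming a hypothetical flag name (serverSupportsGetBlockInfo) and helper method; only getBlockInfo(Block) on ClientDatanodeProtocol comes from this jira:

            import java.io.IOException;

            import org.apache.hadoop.hdfs.protocol.Block;
            import org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol;
            import org.apache.hadoop.ipc.RemoteException;

            // Sketch: remember that the datanode does not implement getBlockInfo()
            // instead of re-issuing the RPC and logging the same exception on every
            // read of a file that is still being written.
            class GetBlockInfoGuardSketch {
                private volatile boolean serverSupportsGetBlockInfo = true; // hypothetical flag

                Block fetchBlockInfo(ClientDatanodeProtocol datanode, Block blk) throws IOException {
                    if (serverSupportsGetBlockInfo) {
                        try {
                            return datanode.getBlockInfo(blk); // added by the 0.20-append merge
                        } catch (RemoteException re) {
                            // an older (203/204) datanode does not know this method;
                            // real code would check for an 'unknown method' error here
                            serverSupportsGetBlockInfo = false;
                        }
                    }
                    return blk; // fall back to the block information the client already has
                }
            }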

        DataTransferProtocol.java - version changed from 17 to 19

        The changes in this protocol are compatible. The version change is unnecessary.

        Given that the protocol changes do not affect the client, I propose reverting the changes to the version numbers in the protocols.
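
        Concretely, the revert would keep the pre-merge versionID constants even though the interfaces gained new methods, roughly as sketched below (illustrative, not the actual file contents; only the 61/63 values and the two added method signatures come from this jira):

            import java.io.IOException;

            import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
            import org.apache.hadoop.hdfs.protocol.LocatedBlock;
            import org.apache.hadoop.ipc.VersionedProtocol;

            // Sketch of what reverting the version bump means for ClientProtocol:
            // the merged methods stay, but the constant goes back to the 203/204 value
            // so the RPC version handshake still succeeds across releases.
            interface ClientProtocolSketch extends VersionedProtocol {
                long versionID = 61L; // reverted from 63 back to 61

                // methods added by the 0.20-append merge remain available
                boolean recoverLease(String src, String clientName) throws IOException;
                LocatedBlock addBlock(String src, String clientName,
                                      DatanodeInfo[] excludedNodes) throws IOException;

                // ... existing ClientProtocol methods unchanged ...
            }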


          People

          • Assignee:
            Suresh Srinivas
          • Reporter:
            Suresh Srinivas
          • Votes:
            0
          • Watchers:
            2
