Hadoop HDFS
  HDFS-1822

Editlog opcodes overlap between 20 security and later releases

    Details

    • Hadoop Flags:
      Reviewed

      Description

      The same opcodes are used for different operations between 0.20.security, 0.22, and 0.23. This results in failure to load edit logs on later releases, especially during upgrades.

      1. HDFS-1822.patch
        7 kB
        Suresh Srinivas
      2. HDFS-1822.trunk.patch
        7 kB
        Suresh Srinivas
      3. HDFS-1822.rel22.patch
        6 kB
        Suresh Srinivas

        Issue Links

          Activity

          Suresh Srinivas added a comment -

          20.security (LAYOUT_VERSION = -19) adds the following opcodes:
          private static final byte OP_GET_DELEGATION_TOKEN = 15; //new delegation token
          private static final byte OP_RENEW_DELEGATION_TOKEN = 16; //renew delegation token
          private static final byte OP_CANCEL_DELEGATION_TOKEN = 17; //cancel delegation token
          private static final byte OP_UPDATE_MASTER_KEY = 18; //update master key

          21 (layout version = -24) adds the following opcodes:
          private static final byte OP_RENAME = 15; // new rename
          private static final byte OP_CONCAT_DELETE = 16; // concat files.
          private static final byte OP_SYMLINK = 17; // a symbolic link
          private static final byte OP_GET_DELEGATION_TOKEN = 18; //new delegation token
          private static final byte OP_RENEW_DELEGATION_TOKEN = 19; //renew delegation token
          private static final byte OP_CANCEL_DELEGATION_TOKEN = 20; //cancel delegation token
          private static final byte OP_UPDATE_MASTER_KEY = 21; //update master key

          22 (layout version = -27) and trunk (layout version > -27) add the following opcodes:
          public static final byte OP_RENAME = 15; // new rename
          public static final byte OP_CONCAT_DELETE = 16; // concat files.
          public static final byte OP_SYMLINK = 17; // a symbolic link
          public static final byte OP_GET_DELEGATION_TOKEN = 18; //new delegation token
          public static final byte OP_RENEW_DELEGATION_TOKEN = 19; //renew delegation token
          public static final byte OP_CANCEL_DELEGATION_TOKEN = 20; //cancel delegation token
          public static final byte OP_UPDATE_MASTER_KEY = 21; //update master key

          Conflicts in the opcodes:

          1. Opcode 15 means OP_GET_DELEGATION_TOKEN on 20.s and OP_RENAME on later releases
          2. Opcode 16 means OP_RENEW_DELEGATION_TOKEN on 20.s and OP_CONCAT_DELETE on later releases
          3. Opcode 17 means OP_CANCEL_DELEGATION_TOKEN on 20.s and OP_SYMLINK on later releases
          4. Opcode 18 means OP_UPDATE_MASTER_KEY on 20.s and OP_GET_DELEGATION_TOKEN on later releases

          We need to support the following upgrades:

          1. 20.s to 22 or later releases
            • The opcode conflict here makes consuming edit logs impossible
          2. 21 to 22 or later releases

          I am proposing to handle these conflicts as follows while consuming edit logs:

          1. If the layout version is > -24, it is a 0.20 version; use the definitions from 0.20.security.
          2. If the layout version is <= -24, use the definitions from 0.21 onwards.

          This is a messy way of doing it, but I do not see any way around it.
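          The proposal above can be sketched in Java. This is a minimal illustration with hypothetical class and method names (`OpCodeDispatch`, `resolve`), not the actual patch; it only shows the layout-version check the proposal describes, using the opcode tables listed earlier in this comment.

          ```java
          // Hypothetical sketch of the proposed layout-version-aware opcode
          // dispatch; names are illustrative, not the actual HDFS patch.
          public class OpCodeDispatch {

            /**
             * Resolve a raw opcode byte to a logical operation name. Layout
             * versions newer than -24 (i.e. > -24) predate the 0.21
             * renumbering, so they use the 0.20.security table; -24 and older
             * use the 0.21-onwards table.
             */
            public static String resolve(int layoutVersion, byte opCode) {
              boolean is20Security = layoutVersion > -24;
              switch (opCode) {
                case 15: return is20Security ? "OP_GET_DELEGATION_TOKEN"    : "OP_RENAME";
                case 16: return is20Security ? "OP_RENEW_DELEGATION_TOKEN"  : "OP_CONCAT_DELETE";
                case 17: return is20Security ? "OP_CANCEL_DELEGATION_TOKEN" : "OP_SYMLINK";
                case 18: return is20Security ? "OP_UPDATE_MASTER_KEY"       : "OP_GET_DELEGATION_TOKEN";
                case 19: return is20Security ? null : "OP_RENEW_DELEGATION_TOKEN";
                case 20: return is20Security ? null : "OP_CANCEL_DELEGATION_TOKEN";
                case 21: return is20Security ? null : "OP_UPDATE_MASTER_KEY";
                default: return null; // opcodes below 15 did not change between these releases
              }
            }
          }
          ```

          In the real loader the result would drive the per-op deserialization; the sketch returns names only to keep the version split visible.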

          In future:

          • We need to make sure that any new opcode is added in trunk first, before adding it in older releases. Trunk should be the source of truth, ensuring the opcodes are chosen uniquely across different releases.
          dhruba borthakur added a comment -

          > We need to make sure, any op code added is added in the trunk first, before adding it in older releases

          Have we made any apache release off 0.20.security?

          Suresh Srinivas added a comment -

          > Have we made any apache release off 0.20.security?
          Yes, the 0.20-security.203 release is already available outside. It is deployed at Yahoo, and CDH releases also use this change.

          Suresh Srinivas added a comment -

          Changes:

          1. Implemented the changes proposed in my previous comment
          2. Migrated TestEditLog to the JUnit 4 framework, along with the addition of a test case
          Allen Wittenauer added a comment -

          > Have we made any apache release off 0.20.security?

          No, Apache has not made a release off this branch. The fact that other distributions have is irrelevant.

          Suresh Srinivas added a comment -

          > Apache has not made a release off this branch.
          Sure. Still, there are users running the 0.20 branches of Hadoop, even though they are not Apache releases! I think this is an important change for the community.

          dhruba borthakur added a comment -

          > No, Apache has not made a release off this branch. The fact that other distributions have is irrelevant.

          hi suresh, I have to agree with allen on this one. It appears unlikely that this patch can be committed to apache hadoop trunk.

          Suresh Srinivas added a comment -

          The code in 0.21 and 0.22 comes from the 0.20.security code promotion. When the code was promoted, it was not merged correctly. Additionally, 20.s and releases based on it are coming out soon.

          What is the problem with adding handling for this in trunk?

          Allen Wittenauer added a comment -

          To quote you earlier:

          Trunk should be the source of truth, ensuring the opcodes are chosen uniquely across different releases.

          Trunk is the source of truth. We have a situation where we have a bunch of forks and an unreleased branch that conflict with trunk. Apache has no authority over the forks. It does have authority over the branch. Branches are generally considered to be subservient to trunk.

          Trunk, being the source of truth, provides the definition for these opcodes. The forks and branches are incorrect, even if they were released first, are more widely used, or whatever. Those are the risks of forking, and those folks got burned. Any patches need to happen in the unreleased branches and in any forks that want to follow Apache, not on trunk.

          Now, if these opcodes had not already been part of a release (0.21), I think I'd be more sympathetic to an alleged "bad merge". But these opcodes are out in the wild in an official Apache branded release. That makes them official.

          Allen Wittenauer added a comment -

          Oh, and just so it is official: -1.

          dhruba borthakur added a comment -

          > What is the problem in adding handling of this in trunk?

          There were no Apache releases made off the Security branch. So, i would rather fix the security branch and not make this change to trunk, isn't it?

          Suresh Srinivas added a comment -

          I think this will affect users of Hadoop. Your definition of Hadoop users seems to include only the folks using Apache-released software! I was trying to make this change to ensure a smooth progression to 0.22 and 0.23 for the user community. I will close this bug, as I see a -1.

          Tsz Wo Nicholas Sze added a comment -

          > Oh, and just so it is official: -1.

          Hi Allen, could you provide an alternative proposal or a detailed explanation? It is not clear to me, although I have some clues from your previous comments.

          Below is quoted from the "Decision Making" section in http://www.apache.org/foundation/how-it-works.html#meritocracy

          -1 – a negative vote

          The rules require that a negative vote includes an alternative proposal or a detailed explanation of the reasons for the negative vote.

          Allen Wittenauer added a comment -

          The alternative, as previously mentioned, was to patch the branch of any upcoming Apache Hadoop release with the bad opcode to match trunk.

          Suresh Srinivas added a comment -

          There are releases using this in production. These releases are not off of Apache. We are making an explicit effort to make Apache the place for Hadoop releases. During this transition, unfortunately, users have deployed other distributions. The cost of fixing this is small, the patch is really not significant, and it provides an upgrade path for users to move to official Apache Hadoop releases.

          I understand that this is not a problem for LinkedIn, but it is for other folks. If you were in that boat, I am not sure how you would have voted on this issue.

          In this kind of voting, I think the community suffers. I accept the -1 and withdraw my changes.

          Suresh Srinivas added a comment -

          There are releases using this in production. These releases are not off of Apache. We are making an explicit effort to make Apache the place for Hadoop releases. During this transition, unfortunately, users have deployed other distributions. The cost of fixing this is small, the patch is really not significant, and it provides an upgrade path for users to move to official Apache Hadoop releases.

          I understand that this is not a problem for LinkedIn, but it is for other folks. If you were in that boat, I am not sure how you would have voted on this issue.

          In this kind of voting, I think the community suffers. I accept the -1 and withdraw my changes.

          Suresh Srinivas added a comment -

          Sorry for the double post. I am withdrawing it only once.

          Suresh Srinivas added a comment -

          BTW, we found this problem accidentally. It could have happened among official branches anyway.

          Tsz Wo Nicholas Sze added a comment -

          > The alternative, as previously mentioned, was to patch the branch of any upcoming Apache Hadoop release with the bad opcode to match trunk.

          Allen, thanks for providing the solution. What are the drawbacks of committing Suresh's patch? Why can't we do both, i.e., fix the bad opcode in 0.20-security and commit Suresh's patch?

          Allen Wittenauer added a comment -

          Let's say I'm using Fred's Bargain Basement Distribution of Apache Hadoop. I'm using it in production which means it is Important and enterprise-y and stuff. This version of Hadoop has underscores in all the RPC method calls. I notice that Apache doesn't have underscores, but I want to switch to this version without breaking my clients. I submit a patch that converts all the underscore calls to non-underscores. Should we as Apache committers accept this patch?

          Tsz Wo Nicholas Sze added a comment -

          Hey Allen, we are talking about Apache Hadoop trunk and Apache Hadoop 0.20-security. Fred's Bargain Basement Distribution of Apache Hadoop is out of the picture.

          For your question:

          • Suppose it is neither an incompatible change nor one that introduces other bad behaviors. Suppose also that the original patch was submitted to Apache Hadoop. We should accept it.
          • On the other hand, if a patch is fixing a private distribution, we should not accept it.
          Tsz Wo Nicholas Sze added a comment -

          > ... Trunk should be the source of truth, ensuring the opcodes are chosen uniquely across different releases.

          Hey Suresh, version defines format. It is sufficient to have opcode consistency within a version. Making opcodes universally unique is unnecessary.

          Allen Wittenauer added a comment -

          Show me where the tarball is of Apache Hadoop 0.20-security because I don't see it on the releases page ( http://hadoop.apache.org/common/releases.html ). Not there? Doesn't that make it as legit as Fred's Bargain Basement Distribution of Apache Hadoop?

          > Suppose it is neither an incompatible change nor introducing other bad behaviors. Suppose also the original patch was submitted to Apache Hadoop. We should accept it.
          >
          > On the other hand, if a patch is fixing a private distribution, we should not accept it.

          But here's the problem: this patch and my example cover both of these cases. This patch is not incompatible with any Apache release, but it does fix problems in private distributions. So now what?

          Let's assume that 0.20.203/204 makes it out the door (and, for the record, that branch has a lot more problems than just this issue...). What happens if someone upgrades from 0.20.203 to 0.21? (Trust me, people will.)

          Side question for someone more skilled than I: could someone generate a JIRA list that tracks the changes in the opcodes? I think it would be useful to look at, especially since it has been alleged that the merge was 'bad'. What made the merge go wonky?

          Tsz Wo Nicholas Sze added a comment -

          > Show me where the tarball ...

          Why does a tarball suddenly become so important? I am surprised. Do you count 0.22 or 0.23? Those tarballs are not out there yet.

          Once a release is out, we are not allowed to make incompatible changes, but changing opcodes is fine as long as the layout version is updated and the change is backward compatible. As mentioned earlier, an opcode is not necessarily unique across different layout versions.

          > But here's the problem: this patch and my example cover both of these cases. ...

          I intended the two cases to be mutually exclusive. The second case should be: "if a patch is only fixing a private distribution but not Apache Hadoop trunk/branches". Sorry that I did not make it clear.

          > ... Could someone generate a JIRA list that tracks the changes in the opcodes? ...

          It can be done by "svn log".

          Allen Wittenauer added a comment -

          > Why a tarball suddenly becomes so important? I am surprised. Do you count 0.22 or 0.23? The tarballs are not out there yet.

          Why are you surprised by this? Of course an official Apache tarball is important. How do you think users install Apache releases?

          No, 0.20.4, 0.21.1, 0.22 and 0.23 aren't real releases (yet?) either. Until a batch of code goes through the Apache release process, they aren't Apache branded distributions.

          > Once a release is out, we are not allow to make incompatible changes but changing opcodes is fine as long as the layout version is updated and it is backward compatible. As mentioned earlier, opcode is not necessarily unique across different layout versions.

          Have we ever declared that anywhere? What happens if FBBDoAH has a custom opcode that is different from Apache's? Would we accept a patch to parse it, just like this patch proposes?

          In reality, doesn't this ultimately mean that we should not be performing upgrades if an edits log exists? In other words, the previous edits log should be fully digested before the next release is put in place. (This is also a great practice anyway, so reinforcing it would be good.) This seems like a much more acceptable idea than opening the door for every random fork-compatibility patch.

          > "if a patch is only fixing a private distribution but not Apache Hadoop trunk/branches"

          But there is nothing wrong with trunk. In fact, these opcodes were released in 0.21. Shouldn't this patch be against the unreleased branch, to bring it in line with trunk? Or do we think that unreleased branches have more importance than trunk?

          Tsz Wo Nicholas Sze added a comment -

          > ... FBBDoAH has a custom opcode ...

          Since it is not in Apache, we could simply reject the patch.

          > In reality, doesn't this ultimately mean that we should not be performing upgrades if an edits log exists? ...

          No. I suspect there is some misunderstanding: by a compatible change, we mean that new code can handle old edits.

          Suresh's patch is a compatible change: it just adds the ability to read 0.20-security edits.

          Again, layout version defines format, i.e. the opcodes. Therefore, opcodes can differ across layout versions.

          > But there is nothing wrong with trunk. ...

          After the patch, there is still nothing wrong with trunk.

          Allen, you are stopping a feature, namely the ability to read 0.20-security edits; you are not stopping an incompatible change. Stopping someone's patch is easy, but it makes contributing hard.

          Konstantin Shvachko added a comment -

          The proposed change seems odd. I understand there was a mistake made while introducing these ops to 0.20s, so correcting this mistake by introducing a workaround into 0.22 and keeping it forever is bad.
          I did not exactly understand what was vetoed.

          I think a nicer way to handle this is the following.
          Let's upgrade all four branches at once.
          h-0.20.s LV -19 -> -29
          h-0.21 LV -27 -> -30
          h-0.22 LV -28 -> -31
          h-0.23 LV -27 -> -32

          h-0.20.s with LV -29 will provide conversion from the -19 ops to the right ones.
          LV -19 should be banned for h-0.21,22,23

          Will that work?

          Konstantin Shvachko added a comment -

          Sorry, I got the layout versions wrong in the previous post. Here is the correct table:

          h-0.20.s   LV -19 -> -29
          h-0.21     LV -24 -> -30
          h-0.22     LV -27 -> -31
          h-0.23     LV -28 -> -32
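
          The table above amounts to a simple guard: releases that use the new numbering refuse to load storage still at LV -19 and direct the administrator to the intermediate 0.20s upgrade first. A minimal sketch of such a check (hypothetical class and message text, not the committed patch):

          ```java
          class LayoutVersionGuard {
              // Old 0.20-security layout whose opcodes collide with 0.21+.
              static final int LV_020S_AMBIGUOUS = -19;
              // 0.20-security after the proposed in-branch opcode conversion.
              static final int LV_020S_CONVERTED = -29;

              /**
               * Would be called before loading on-disk state in h-0.21/22/23,
               * so the ambiguous -19 opcodes are never parsed there.
               */
              static void checkLoadable(int onDiskLayoutVersion) {
                  if (onDiskLayoutVersion == LV_020S_AMBIGUOUS) {
                      throw new IllegalStateException(
                          "Layout version -19 is not supported; "
                          + "upgrade to 0.20-security LV " + LV_020S_CONVERTED + " first");
                  }
              }
          }
          ```

          Any other (newer) layout version passes through unchanged, which is what keeps the conversion logic confined to the 0.20s branch.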
          
          Allen Wittenauer added a comment -

          Proposed change seems odd. I understand a mistake was made while introducing these ops to 0.20s. But correcting that mistake by introducing a workaround into 0.22 and carrying it forever is bad.

          I did not exactly understand what was vetoed.

          To clarify: Correcting this mistake by adding translation code into trunk is horrific and, as you pointed out, we'll be stuck with it forever. It also doesn't fix the problem for someone going from non-Apache 0.20 -> 0.21.

          I'm also very much opposed to a non-Apache release forcing this sort of change. Yes, I understand that people run code from branches all the time. That's a risk/reward calculation that organizations need to make. In this case, the risk came true: they aren't compatible anymore.

          In any case, I'm much more in favor of:

          a) Fixing the Apache branches to use the proper opcodes.
          b) Declaring that one must have a fully processed editslog prior to upgrade. This is a recommended practice anyway, so I don't see the harm in making it official.

          If the group really feels we need to protect Apache releases from non-Apache releases, then we have to decide what to do about 0.21. Fixing this in trunk won't protect those users.

          Tsz Wo Nicholas Sze added a comment -

          @Allen, what non-Apache are you talking about? The Fred's Bargain Basement Distribution of Apache Hadoop you mentioned previously?

          Allen Wittenauer added a comment -

          Yahoo!'s distribution and Cloudera's distribution.

          Tsz Wo Nicholas Sze added a comment -

          > ... In fact, these opcodes were released in 0.21. ...

          I forgot to say that opcodes can be changed at any time as long as the change is backward compatible, i.e. the new software can handle the old edit logs, since opcodes are private unstable APIs; please see FSEditLogOpCodes.
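
          For illustration, the layout-version-keyed dispatch described here might look like the toy sketch below, using the opcode tables from the issue description. The class and enum names are simplified stand-ins, not the real FSEditLogOpCodes:

          ```java
          class OpcodeDecoder {
              enum Op {
                  RENAME, CONCAT_DELETE, SYMLINK,
                  GET_DELEGATION_TOKEN, RENEW_DELEGATION_TOKEN,
                  CANCEL_DELEGATION_TOKEN, UPDATE_MASTER_KEY, UNKNOWN
              }

              /** Interpret an edit-log opcode byte under the layout version it was written with. */
              static Op decode(byte code, int layoutVersion) {
                  if (layoutVersion == -19) {            // 0.20-security numbering
                      switch (code) {
                          case 15: return Op.GET_DELEGATION_TOKEN;
                          case 16: return Op.RENEW_DELEGATION_TOKEN;
                          case 17: return Op.CANCEL_DELEGATION_TOKEN;
                          case 18: return Op.UPDATE_MASTER_KEY;
                          default: return Op.UNKNOWN;
                      }
                  }
                  switch (code) {                        // 0.21 and later numbering
                      case 15: return Op.RENAME;
                      case 16: return Op.CONCAT_DELETE;
                      case 17: return Op.SYMLINK;
                      case 18: return Op.GET_DELEGATION_TOKEN;
                      case 19: return Op.RENEW_DELEGATION_TOKEN;
                      case 20: return Op.CANCEL_DELEGATION_TOKEN;
                      case 21: return Op.UPDATE_MASTER_KEY;
                      default: return Op.UNKNOWN;
                  }
              }
          }
          ```

          The same byte, 15, decodes to GET_DELEGATION_TOKEN under LV -19 but RENAME under LV -24, which is exactly the conflict this issue is about.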

          Konstantin Shvachko added a comment -

          I believe my approach avoids opcode conversion in downstream releases. Anybody wants to comment on that?

          Allen Wittenauer added a comment -

          the new software can handle the old edit logs since opcodes are private unstable APIs

          You can't have it both ways. If new software understands the old opcodes that essentially declares them stable.

          I believe my approach avoids opcode conversion in downstream releases.

          Won't the editslog still need to get processed during an upgrade?

          Konstantin Shvachko added a comment -

          I propose to first upgrade h-0.20s from -19 to -29. This will convert the opcodes to the right ones for h-0.20s. Then you can upgrade to h-0.21,22,23, which will have the higher LVs -30, -31, -32 respectively (according to the table above).
          h-0.21,22,23 will also prohibit upgrading from -19, asking to upgrade to -29 first.
          But there are no confusing opcode conversions in releases h-0.21,22,23 or in the future. This will all be buried and forgotten in h-0.20s.

          Tsz Wo Nicholas Sze added a comment -

          > You can't have it both ways. If new software understands the old opcodes that essentially declares them stable.

          Allen, it seems to me that you do not understand the existing implementation although you have -1'ed on this. I am sorry to see it.

          dhruba borthakur added a comment -

          I think Konstantin's proposal is one way out of this problem; it keeps branch-specific hackery confined to the branch while keeping the trunk code clean and streamlined.

          Tsz Wo Nicholas Sze added a comment -

          I also think that Konstantin's proposal is reasonable. The only drawback is requiring an extra upgrade.

          Suresh Srinivas added a comment -

          It took me some time to recover and get back to this jira. I like Konstantin's proposal. Here are some changes and additional clarifications:

          203 and earlier releases have opcode conflicts. Along the same lines as Konstantin, I propose:

          1. Currently the LV of trunk is -30. Change layout versions as follows:
            • 203 to -31
            • 204 to -32
            • 21 will remain -24.
            • 22 to -33
            • Reserve all these layout versions in trunk FSConstants.java and bump trunk LV to -34.
            • In 204 add code to handle opcode conflicts. This code will remain in 2xx release only.
          2. Add code in 22 and trunk to throw an error if the upgrade is from 203 or an older 2xx release and the editlog is not empty, saying "must upgrade to 204 first or save namespace before upgrade."
          3. With this the upgrades will work as follows:
            • 203 and later 2xx releases -> 21 fails, since we do not support this upgrade path.
            • 203 -> 22, 23 will fail with an error indicating must upgrade to 204 first, if edit logs are not empty.
            • 204 -> 22, 23 will work.
            • 21 -> 22, 23 and 22 -> 23 will work.
          4. Disallow 21 to 203 upgrades by checking for LV -24, with the error "Upgrade not supported".
          Owen O'Malley added a comment -

          Looks good, Suresh

          Nigel Daley added a comment -

          Add code in 22 and trunk to throw an error if upgrade is from 203 or older 2xx releases and editlog is not empty,...

          Doesn't seem like this "keeps branch-specific hackery confined to the branch".

          Suresh Srinivas added a comment -

          > Doesn't seem like this "keeps branch-specific hackery confined to the branch".
          It does. We no longer need code for conflicting opcodes in later releases. The new check that is being added is for version compatibility.

          Suresh Srinivas added a comment -

          Changes:

          1. Makes proposed changes in trunk
          2. Updated list of layout version in ImageLoaderCurrent and EditsLoaderCurrent.
          Suresh Srinivas added a comment -

          Final solution (changed based on discussions in HDFS-1842):

          1. Currently the LV of trunk is -30. Change layout versions as follows:
            • 203 to -31
            • 204 to -32
            • 21 will remain -24.
            • 22 to -33
            • Reserve all these layout versions in trunk FSConstants.java and bump trunk LV to -34.
            • In 204 add code to handle opcode conflicts. This code will remain in 2xx release only.
          2. Add code in 22 and trunk to throw an error if upgrade is from 203 or older 2xx releases and editlog is not empty, to say "must restart namenode in older release to make editlog empty".
          3. With this the upgrades will work as follows:
            • 203 and later 2xx releases -> 21 fails, since we do not support this upgrade path.
            • 203 -> 22, 23 will fail with an error indicating must restart in older release to make editlog empty.
            • 204 -> 22, 23 will work, 21 -> 22, 23 and 22 -> 23 will work.
          4. Disallow 21 to 203 upgrades by checking for LV -24, with the error "Upgrade not supported".
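
          Steps 2-4 above amount to a small pre-upgrade gate. A hedged sketch of that gate under the final layout-version table (hypothetical names and simplified checks; the real logic lives in the FSImage/Storage loading path):

          ```java
          class UpgradeGate {
              static final int LV_021 = -24;   // 0.21: upgrade path from/to 20x not supported
              static final int LV_203 = -31;   // 0.20.203: ambiguous opcodes may be in editlog
              static final int LV_204 = -32;   // 0.20.204: conflicts handled in-branch

              /**
               * @param onDiskLV     layout version found in storage
               * @param editLogEmpty whether the edit log has no pending transactions
               */
              static void checkUpgrade(int onDiskLV, boolean editLogEmpty) {
                  if (onDiskLV == LV_021) {
                      throw new IllegalStateException("Upgrade not supported");
                  }
                  // 0.20.203 (the real patch also covers older 20x layouts):
                  // only an empty editlog may be upgraded, since its pending
                  // opcodes cannot be interpreted safely here.
                  if (onDiskLV == LV_203 && !editLogEmpty) {
                      throw new IllegalStateException(
                          "Must restart namenode in older release to make editlog empty");
                  }
              }
          }
          ```

          A 204 image (LV -32) passes regardless of the editlog, because 204 already reads and converts the conflicting opcodes within its own branch.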
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12477326/HDFS-1822.trunk.patch
          against trunk revision 1096010.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
          org.apache.hadoop.hdfs.TestDistributedFileSystem
          org.apache.hadoop.hdfs.TestFileConcurrentReader

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/414//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/414//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/414//console

          This message is automatically generated.

          Suresh Srinivas added a comment -

          The Hudson test report cannot be accessed. The failed tests TestNameNodeResourceChecker and TestDistributedFileSystem run successfully on my local build.

          Owen O'Malley added a comment -

          +1 on the trunk patch.

          Suresh Srinivas added a comment -

          Attaching the 0.22 version of the patch.

          Suresh Srinivas added a comment -

          I committed the trunk version of the patch.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12477430/HDFS-1822.rel22.patch
          against trunk revision 1096846.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/418//console

          This message is automatically generated.

          Owen O'Malley added a comment -

          And finally, +1 on the 22 patch. 4 branches down, 0 to go. :)

          Suresh Srinivas added a comment -

          I committed the patch to release 0.22 and trunk.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #611 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/611/)
          Fixing the invalid bug number in CHANGES.txt from HDFS-1842 to HDFS-1822.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #650 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/650/)
          Fixing the invalid bug number in CHANGES.txt from HDFS-1842 to HDFS-1822.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-22-branch #41 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-22-branch/41/)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #693 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/693/)
          HDFS-1936. Part 1 or 2 - Updating the layout version from HDFS-1822 causes upgrade problems. Committing the required image tar ball.

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1128534
          Files :

          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/hadoop-22-dfs-dir.tgz

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1128527
          Files :

          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/common/Storage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/LayoutVersion.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsLoaderCurrent.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/UpgradeUtilities.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewer.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/hadoop-22-dfs-dir.tgz
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/FSConstants.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/protocol/TestLayoutVersion.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #680 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/680/)
          HDFS-1936. Part 1 or 2 - Updating the layout version from HDFS-1822 causes upgrade problems. Committing the required image tar ball.

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1128534
          Files :

          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/hadoop-22-dfs-dir.tgz

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1128527
          Files :

          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/common/Storage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/LayoutVersion.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsLoaderCurrent.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/UpgradeUtilities.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewer.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/hadoop-22-dfs-dir.tgz
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/FSConstants.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/protocol/TestLayoutVersion.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #685 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/685/)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #704 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/704/)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-22-branch #61 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-22-branch/61/)

          Owen O'Malley added a comment -

          Closing for 0.20.203.0


            People

            • Assignee: Suresh Srinivas
            • Reporter: Suresh Srinivas
            • Votes: 0
            • Watchers: 14
