  Hadoop HDFS / HDFS-11431

hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 2.8.0, 3.0.0-alpha4
    • Fix Version/s: 2.8.0
    • Component/s: build, hdfs-client
    • Labels:
    • Target Version/s:
    • Release Note:
      The hadoop-client POM now includes a leaner hdfs-client, stripping out all the transitive dependencies on JARs needed only by the Hadoop HDFS daemon itself. The specific JARs now excluded are: leveldbjni-all, jetty-util, commons-daemon, xercesImpl, netty and servlet-api.

      This should make the dependency set of downstream projects smaller, and avoid version conflict problems with the specific JARs now excluded.

      Applications that depended on these JARs without explicitly declaring them may encounter build problems. There are two fixes for this (see the illustrative POM fragment after this list):

      * explicitly include the JARs, stating which version of them you want.
      * add a dependency on hadoop-hdfs. For Hadoop 2.8+, this will add the missing dependencies. For builds against older versions of Hadoop, this is harmless, as hadoop-hdfs and all its dependencies are already pulled in by the hadoop-client POM.
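
      For illustration, a minimal sketch of the second fix in a downstream project's pom.xml. The coordinates are real Hadoop artifacts, but the version shown is only an example and should match your build:

          <!-- Hypothetical downstream pom.xml fragment (sketch, not an official recipe). -->
          <dependencies>
            <dependency>
              <groupId>org.apache.hadoop</groupId>
              <artifactId>hadoop-client</artifactId>
              <version>2.8.0</version>
            </dependency>
            <!-- Pulls the HDFS server-side classes (and their dependencies) back in. -->
            <dependency>
              <groupId>org.apache.hadoop</groupId>
              <artifactId>hadoop-hdfs</artifactId>
              <version>2.8.0</version>
            </dependency>
          </dependencies>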

      Description

      The hadoop-hdfs-client-2.8.0.jar file does not include the ConfiguredFailoverProxyProvider class. This breaks client applications that use this class to communicate with the active NameNode in an HA deployment of HDFS.

        Issue Links

          Activity

          Steven Rand Steven Rand added a comment -

          Haohui Mai, tagging you since you've worked on the hadoop-hdfs-client JAR in https://issues.apache.org/jira/browse/HDFS-6200.

          aw Allen Wittenauer added a comment -

          Raising this to a blocker.

          Steven Rand Steven Rand added a comment -

          I tried the naive approach of simply moving ConfiguredFailoverProxyProvider into hadoop-hdfs-client, but that gets messy quickly due to the other classes in hadoop-hdfs that it imports, and the classes that they import, etc. I imagine that approach can be made to work, but not without a substantial refactor.

          Maybe the best thing to do is to make hadoop-client depend on hadoop-hdfs as suggested by Steve Loughran and others in HDFS-9301?

          andrew.wang Andrew Wang added a comment -

          Adding target versions since this looks to be possibly a real blocker.

          stevel@apache.org Steve Loughran added a comment -

          +1 for hadoop-client dependencies in the POM; I think my opinions are well known there

          stevel@apache.org Steve Loughran added a comment -

          that said: if the proxy is meant to be used in the client, then it should really make its way to the client JAR sooner or later, maybe post Hadoop 2.8.0

          Steven Rand Steven Rand added a comment -

          I've attached a patch which simply makes hadoop-client depend on hadoop-hdfs. I tested it by publishing Hadoop locally and then building Spark against the result of the local publish. The resulting Spark distribution is able to run successfully against an HA HDFS cluster with no changes to Spark, which is not the case without the patch.
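
          For illustration only, a rough sketch of what such a change to the hadoop-client POM could look like; the actual attached patch may differ, and in the real build the dependency version would normally come from the parent's dependencyManagement rather than be declared here:

              <!-- Sketch of an addition to the hadoop-client POM (not the attached patch). -->
              <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-hdfs</artifactId>
                <scope>compile</scope>
              </dependency>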

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 14m 39s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 9m 2s branch-2.8.0 passed
          +1 compile 0m 10s branch-2.8.0 passed with JDK v1.8.0_121
          +1 compile 0m 12s branch-2.8.0 passed with JDK v1.7.0_121
          +1 mvnsite 0m 17s branch-2.8.0 passed
          +1 mvneclipse 0m 15s branch-2.8.0 passed
          +1 javadoc 0m 10s branch-2.8.0 passed with JDK v1.8.0_121
          +1 javadoc 0m 11s branch-2.8.0 passed with JDK v1.7.0_121
          +1 mvninstall 0m 11s the patch passed
          +1 compile 0m 7s the patch passed with JDK v1.8.0_121
          +1 javac 0m 7s the patch passed
          +1 compile 0m 10s the patch passed with JDK v1.7.0_121
          +1 javac 0m 10s the patch passed
          +1 mvnsite 0m 13s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 javadoc 0m 7s the patch passed with JDK v1.8.0_121
          +1 javadoc 0m 8s the patch passed with JDK v1.7.0_121
          +1 unit 0m 9s hadoop-client in the patch passed with JDK v1.7.0_121.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          27m 28s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:5af2af1
          JIRA Issue HDFS-11431
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12858810/HDFS-11431-branch-2.8.0.001.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml
          uname Linux 9d2428986a0a 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2.8.0 / b457b9a
          Default Java 1.7.0_121
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_121 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121
          JDK v1.7.0_121 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/18722/testReport/
          modules C: hadoop-client U: hadoop-client
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/18722/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          One thing to consider here is that a key goal of the split of client/server was to keep all the server classpath stuff out of the client.

          Could we get away with declaring hadoop-hdfs as a dependency of hadoop-client, but excluding the transitive dependencies which aren't needed?

          If someone does want everything, they can explicitly declare a dependency on hadoop-hdfs, but if they ask for hadoop-client on its own, they don't get netty, zookeeper, curator, etc, etc.

          I think that'd be a good compromise.
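
          For illustration, a rough sketch of that compromise in the hadoop-client POM; the exclusion list below is an assumption and would need to match whatever server-only artifacts hadoop-hdfs actually pulls in:

              <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-hdfs</artifactId>
                <exclusions>
                  <!-- Illustrative server-only transitive dependencies to keep off the client classpath. -->
                  <exclusion>
                    <groupId>io.netty</groupId>
                    <artifactId>netty-all</artifactId>
                  </exclusion>
                  <exclusion>
                    <groupId>org.fusesource.leveldbjni</groupId>
                    <artifactId>leveldbjni-all</artifactId>
                  </exclusion>
                  <exclusion>
                    <groupId>commons-daemon</groupId>
                    <artifactId>commons-daemon</artifactId>
                  </exclusion>
                </exclusions>
              </dependency>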

          On that topic, Allen, could we have yetus do a before/after diff on dependencies on any patch which touches a POM? That way we can always see what the transitive effects of a dependency change are.

          This is the (bash) alias I use to generate text outputs for diffing locally

          alias mvndep="mvn -T 1C dependency:tree -Dverbose"
          
          Steven Rand Steven Rand added a comment -

          Attaching a new patch which excludes all the transitive dependencies not already in hadoop-client. The difference between mvn -T 1C dependency:tree -Dverbose pre and post patch is:

          95,97c95,110
          < [INFO] +- org.apache.hadoop:hadoop-hdfs-client:jar:2.8.0:compile
          < [INFO] |  \- com.squareup.okhttp:okhttp:jar:2.4.0:compile
          < [INFO] |     \- com.squareup.okio:okio:jar:1.4.0:compile
          ---
          > [INFO] +- org.apache.hadoop:hadoop-hdfs:jar:2.8.0:compile
          > [INFO] |  +- org.apache.hadoop:hadoop-hdfs-client:jar:2.8.0:compile
          > [INFO] |  |  \- com.squareup.okhttp:okhttp:jar:2.4.0:compile
          > [INFO] |  |     \- com.squareup.okio:okio:jar:1.4.0:compile
          > [INFO] |  +- (com.google.guava:guava:jar:11.0.2:compile - version managed from 16.0.1; omitted for duplicate)
          > [INFO] |  +- (commons-cli:commons-cli:jar:1.2:compile - omitted for duplicate)
          > [INFO] |  +- (commons-codec:commons-codec:jar:1.4:compile - version managed from 1.9; omitted for duplicate)
          > [INFO] |  +- (commons-io:commons-io:jar:2.4:compile - omitted for duplicate)
          > [INFO] |  +- (commons-lang:commons-lang:jar:2.6:compile - version managed from 2.4; omitted for duplicate)
          > [INFO] |  +- (commons-logging:commons-logging:jar:1.1.3:compile - version managed from 1.1.1; omitted for duplicate)
          > [INFO] |  +- (log4j:log4j:jar:1.2.17:compile - version managed from 1.2.16; omitted for duplicate)
          > [INFO] |  +- (com.google.protobuf:protobuf-java:jar:2.5.0:compile - omitted for duplicate)
          > [INFO] |  +- (org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile - version managed from 1.8.8; omitted for duplicate)
          > [INFO] |  +- (org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile - version managed from 1.8.8; omitted for duplicate)
          > [INFO] |  +- (xmlenc:xmlenc:jar:0.52:compile - omitted for duplicate)
          > [INFO] |  \- (org.apache.htrace:htrace-core4:jar:4.0.1-incubating:compile - omitted for duplicate)
          

          I think that the dependencies marked "omitted for duplicate" have no effect and don't need to be excluded, but please correct me if I'm misunderstanding, or if it's still good to exclude them for other reasons.

          Re: ZooKeeper and Curator, it seems that hadoop-client already depends on both of those things via hadoop-common?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 14m 45s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 8m 50s branch-2.8.0 passed
          +1 compile 0m 9s branch-2.8.0 passed with JDK v1.8.0_121
          +1 compile 0m 12s branch-2.8.0 passed with JDK v1.7.0_121
          +1 mvnsite 0m 17s branch-2.8.0 passed
          +1 mvneclipse 0m 15s branch-2.8.0 passed
          +1 javadoc 0m 10s branch-2.8.0 passed with JDK v1.8.0_121
          +1 javadoc 0m 11s branch-2.8.0 passed with JDK v1.7.0_121
          +1 mvninstall 0m 11s the patch passed
          +1 compile 0m 7s the patch passed with JDK v1.8.0_121
          +1 javac 0m 7s the patch passed
          +1 compile 0m 10s the patch passed with JDK v1.7.0_121
          +1 javac 0m 10s the patch passed
          +1 mvnsite 0m 13s the patch passed
          +1 mvneclipse 0m 10s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 javadoc 0m 7s the patch passed with JDK v1.8.0_121
          +1 javadoc 0m 8s the patch passed with JDK v1.7.0_121
          +1 unit 0m 9s hadoop-client in the patch passed with JDK v1.7.0_121.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          27m 21s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:5af2af1
          JIRA Issue HDFS-11431
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12858940/HDFS-11431-branch-2.8.0.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml
          uname Linux 01dc932fb036 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2.8.0 / b457b9a
          Default Java 1.7.0_121
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_121 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121
          JDK v1.7.0_121 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/18732/testReport/
          modules C: hadoop-client U: hadoop-client
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/18732/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          aw Allen Wittenauer added a comment -

          Could we get away with declaring hadoop-hdfs as a dependency of hadoop-client, but excluding the transitive dependencies which aren't needed?

          IMO, I think that sort of defeats the purpose.

          On that topic, Allen, could we have yetus do a before/after diff on dependencies on any patch which touches a POM? That we can always see what the transitive effects of a dependency change are.

          Patches accepted.

          stevel@apache.org Steve Loughran added a comment -

          Allen Wittenauer

          I think that sort of defeats the purpose.

          no, because it means the people downstream don't have to deal with all the server-side JARs.

          Patches accepted

          how did I guess that was going to be the response. Oh, wait, it's what I usually use too...

          stevel@apache.org Steve Loughran added a comment -

          This is the diff between 2.8.0 with and without the patch

          Essentially: it does pull in stuff, but that's all duplicate. Curious about why I'm seeing version updates of jackson 1.9 and log4j, but that's unrelated to this patch as it's happening in hadoop-hdfs.

          < [INFO] +- org.apache.hadoop:hadoop-hdfs:jar:2.8.0:compile
          < [INFO] |  +- org.apache.hadoop:hadoop-hdfs-client:jar:2.8.0:compile
          < [INFO] |  |  \- com.squareup.okhttp:okhttp:jar:2.4.0:compile
          < [INFO] |  |     \- com.squareup.okio:okio:jar:1.4.0:compile
          < [INFO] |  +- (com.google.guava:guava:jar:11.0.2:compile - version managed from 16.0.1; omitted for duplicate)
          < [INFO] |  +- (commons-cli:commons-cli:jar:1.2:compile - omitted for duplicate)
          < [INFO] |  +- (commons-codec:commons-codec:jar:1.4:compile - version managed from 1.9; omitted for duplicate)
          < [INFO] |  +- (commons-io:commons-io:jar:2.4:compile - omitted for duplicate)
          < [INFO] |  +- (commons-lang:commons-lang:jar:2.6:compile - version managed from 2.4; omitted for duplicate)
          < [INFO] |  +- (commons-logging:commons-logging:jar:1.1.3:compile - version managed from 1.1.1; omitted for duplicate)
          < [INFO] |  +- (log4j:log4j:jar:1.2.17:compile - version managed from 1.2.16; omitted for duplicate)
          < [INFO] |  +- (com.google.protobuf:protobuf-java:jar:2.5.0:compile - omitted for duplicate)
          < [INFO] |  +- (org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile - version managed from 1.8.8; omitted for duplicate)
          < [INFO] |  +- (org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile - version managed from 1.8.8; omitted for duplicate)
          < [INFO] |  +- (xmlenc:xmlenc:jar:0.52:compile - omitted for duplicate)
          < [INFO] |  \- (org.apache.htrace:htrace-core4:jar:4.0.1-incubating:compile - omitted for duplicate)
          ---
          > [INFO] +- org.apache.hadoop:hadoop-hdfs-client:jar:2.8.0:compile
          > [INFO] |  \- com.squareup.okhttp:okhttp:jar:2.4.0:compile
          > [INFO] |     \- com.squareup.okio:okio:jar:1.4.0:compile
          
          stevel@apache.org Steve Loughran added a comment -

          sorry, missed that Steven Rand had already done this check. never mind.

          +1 for this. Nothing new is being added except the hadoop-hdfs JAR

          I will add something to the release notes here "maven users will find that the hadoop-client has stripped out some of the transitive dependencies needed only for the hdfs server, ..."

          hudson Hudson added a comment -

          FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11413 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11413/)
          HDFS-11431. hadoop-hdfs-client JAR does not include (stevel: rev cd976b263be39bd4f75b1c94c09f82c862e04b30)

          • (edit) hadoop-client-modules/hadoop-client/pom.xml
          stevel@apache.org Steve Loughran added a comment -

          OK, this is in. time to spin the 2.8 RC again

          aw Allen Wittenauer added a comment - - edited

          leveldbjni-all

          Wait, what? Why does this require leveldbjni? (Never mind all the problems that jar causes.)

          EDIT: NM, I misread that.

          kshukla Kuhu Shukla added a comment - - edited

          mvn install is breaking for me with the error that duplicate classes were found while installing "Apache Hadoop Client Packaging Invariants for Test" after this check-in. Let me know if I am missing something here. Thanks a lot!

          [INFO] Compiling 1 source file to /home/jenkins/jenkins-slave/workspace/Hadoop-trunk-Commit/source/hadoop-client-modules/hadoop-client-integration-tests/target/test-classes
          [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message:
          Duplicate classes found:
          
          

          https://builds.apache.org/job/Hadoop-trunk-Commit/11414/console
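
          For context, the failing rule, BanDuplicateClasses, is provided by the extra-enforcer-rules add-on to the maven-enforcer-plugin. Below is a minimal sketch of how such a check is typically configured; the version, execution id, and settings are illustrative assumptions, not Hadoop's actual invariants configuration:

              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-enforcer-plugin</artifactId>
                <dependencies>
                  <dependency>
                    <!-- Provides the BanDuplicateClasses rule; version is illustrative. -->
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>extra-enforcer-rules</artifactId>
                    <version>1.0-beta-6</version>
                  </dependency>
                </dependencies>
                <executions>
                  <execution>
                    <id>check-duplicate-classes</id>
                    <goals>
                      <goal>enforce</goal>
                    </goals>
                    <configuration>
                      <rules>
                        <banDuplicateClasses>
                          <!-- Report every duplicate class found on the classpath. -->
                          <findAllDuplicates>true</findAllDuplicates>
                        </banDuplicateClasses>
                      </rules>
                    </configuration>
                  </execution>
                </executions>
              </plugin>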

          andrew.wang Andrew Wang added a comment -

          Did this run against trunk precommit? Sounds like this broke the shaded client.

          anu Anu Engineer added a comment -

          Kuhu Shukla I ran into this problem while doing a merge with Ozone branch. Sean Busbey was kind enough to explain the issue to me. I still haven't fixed it though.
          Here is the JIRA tracking that issue :
          https://issues.apache.org/jira/browse/HDFS-11496

          djp Junping Du added a comment -

          Did this run against trunk precommit?

          I don't think so. The patch should only run against branch-2.8, given the branch name in the patch file.

          I verified that branch-2 and branch-2.8 are running well. Maybe we should revert the patch from trunk and file a separate JIRA to track the trunk effort, given that the fixes for trunk and branch-2 should be significantly different.

          stevel@apache.org Steve Loughran added a comment -

          uh, only did on branch-2.x and I run trunk with -DskipShading as I value my time. How about I revert from trunk for now.

          I am not seeing problems with maven builds on 2.8

          stevel@apache.org Steve Loughran added a comment -

          rolled back from trunk, re-opened.

          andrew.wang Andrew Wang added a comment -

          Let's close this as fixed only in branch-2.8.0 / branch-2.8. I also reverted this from branch-2.

          Filed HDFS-11538 to do the real fix for 2.9 and 3.0.

          djp Junping Du added a comment -

          Branch-2 works well. HDFS-11538 should only be for 3.0.

          djp Junping Du added a comment -

          Hi Andrew Wang, do you have any more concerns about the patch here landing on branch-2? If not, I will revert the previous revert on branch-2.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11416 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11416/)
          Revert "HDFS-11431. hadoop-hdfs-client JAR does not include (stevel: rev 79ede403eed49f77e3f0e4b103fc8619cac67168)

          • (edit) hadoop-client-modules/hadoop-client/pom.xml
          djp Junping Du added a comment -

          Reverted.

          andrew.wang Andrew Wang added a comment -

          Why not try and fix it properly for 2.9? It's marked as a blocker for 3.0.0-alpha3, which is likely coming out before 2.9.0.

          I think it also makes the tracking easier, since otherwise the fix versions don't reflect where the code is.

          andrew.wang Andrew Wang added a comment -

          The other note is that adding a dependency on hadoop-hdfs, even with deps excluded, means that we fail to achieve the very purpose of the hadoop-hdfs-client refactor. The current fix thus falls in the "hack" category, and I'd rather we not default to carrying it forward to future branch-2 releases.

          djp Junping Du added a comment -

          Why not try and fix it properly for 2.9?

          +1 on fixing it more properly for 2.9. However, we shouldn't take the risk that the missing class isn't moved by then, or that other classes turn out to be missing. I checked the code with some HDFS folks, and it seems ConfiguredFailoverProxyProvider is not very clean to move, as it includes some server-side logic. So I think keeping this patch on branch-2 is benign, which is a different case from trunk, where the build is broken by the patch.

          I think it also makes the tracking easier, since otherwise the fix versions don't reflect where the code is.

          I don't understand your point. In our current practice, all 2.8.x patches should be in branch-2 first. I think that's easier to track.

          The current fix thus falls in the "hack" category, and I'd rather we not default to carrying it forward to future bramch-2 releases.

          If we have an elegant fix, I am OK with getting it in. Otherwise, HDFS-6200 doesn't achieve its goal. However, I would rather exclude one feature which could cause a regression than stop the whole branch-2 release train. In this sense, the patch here is still benign for branch-2.

          andrew.wang Andrew Wang added a comment -

          I don't understand your point. In our current practice, all 2.8.x patches should be in branch-2 first. I think that's easier for track.

          Sorry, I meant "target version" rather than "fix version" here. I want to target HDFS-11538 at 2.9.0 and 3.0.0-alpha3, but if HDFS-11431 stays in branch-2, then committing HDFS-11538 to branch-2 also requires reverting HDFS-11431, and it wouldn't for trunk. It makes tracking what's where more complicated.

          Our current practice tries to make "newer" branches supersets of each other, which also includes trunk. That's not possible here since HDFS-11431 doesn't work for trunk. Which is why I suggested the above course of action.

          Like I said before too, since 2.9.0 isn't imminently being released, I'd prefer the default action be "fix HDFS-13715" than "maintain the hack of HDFS-11431". It's also easy to revisit this when 2.9.0 is closer to an RC.

          djp Junping Du added a comment -

          I want to target HDFS-11538 at 2.9.0 and 3.0.0-alpha3

          Sure. Add back 2.9.0 to HDFS-11538.

          but if HDFS-11431 stays in branch-2, then committing HDFS-11538 to branch-2 also requires reverting HDFS-11431, and it wouldn't for trunk. It makes tracking what's where more complicated.

          We want to revert HDFS-11431 from trunk because it causes a build failure. We don't want to revert HDFS-11431 from branch-2 because it works (even if in a hacky way, as you said). I would like branch-2 to stay in a safe place, even if that adds a bit more effort to tracking differences between branch-2 and trunk.

          That's not possible here since HDFS-11431 doesn't work for trunk. Which is why I suggested the above course of action.

          Agree. That's why we should have one patch for branch-2/branch-2.8 and have a different patch for trunk later.

          Like I said before too, since 2.9.0 isn't imminently being released, I'd prefer the default action be "fix HDFS-13715" than "maintain the hack of HDFS-11431". It's also easy to revisit this when 2.9.0 is closer to an RC.

          I don't see HDFS-13715 getting fixed in the short term either - it doesn't even have an assignee yet.

          My key points here:
          1. HDFS-13715 is something TBD; for branch-2, it's better to have the HDFS-11431 patch than nothing.
          2. HDFS-13715 is not a blocker but something nice to have for 2.9. As I mentioned earlier, the whole effort to make the hdfs-client JAR thinner is not a must, given that many features for 2.9 are also in the pipeline.
          3. If you really think tracking the revert of this patch (when we have HDFS-13715) is a big problem, then we could file a separate JIRA and mark it as a blocker for 2.9, to revisit reverting the patch here when we are at the RC stage.

          Make sense?

          stevel@apache.org Steve Loughran added a comment -

          OK. Leave as is for the release and we'll talk about 3.0 separately

          drankye Kai Zheng added a comment -

          Hi,

          Could anybody help clarify a little bit for me to understand the issue here and the new issue HDFS-11538 to solve? Thanks!

          The issue title said:

          hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider

          And the description said:

          The hadoop-hdfs-client-2.8.0.jar file does include the ConfiguredFailoverProxyProvider class. This breaks client applications that use this class to communicate with the active NameNode in an HA deployment of HDFS.

          Since for 2.8.0 the jar does include the class, there is no problem for that version, right? But the target and fix versions were marked as 2.8.0, which is why I'm confused.

          Anyway, for trunk, this is left to be fixed in HDFS-11538, where we would have an elegant fix that only moves the needed class into the hadoop-hdfs-client jar without introducing the NN server-side classes, so we have to refactor the code in ConfiguredFailoverProxyProvider. Right?

          andrew.wang Andrew Wang added a comment -

          Hi Kai,

          This JIRA (HDFS-11431) adds hadoop-hdfs as a dependency of hadoop-hdfs-client (and thus also hadoop-client). This pulls all the server JARs back in. It was committed as a quick fix to get 2.8.0 released.

          As you noted, HDFS-11538 is intended as a better long term solution that moves CFPP to hadoop-hdfs-client, so it no longer needs to pull in the full hadoop-hdfs dependency (the server-side jar).

          This also highlights a significant lack of testing of the hdfs client artifact. The hdfs client split didn't also split the tests, so we have essentially no test coverage for the client module by itself. It'd be great for HDFS-11538 to include hdfs client unit tests to help address this.

          drankye Kai Zheng added a comment -

          Thanks Andrew for the help! Let me copy your further thoughts to the new issue since it'll help.


            People

            • Assignee: Steven Rand
            • Reporter: Steven Rand
            • Votes: 0
            • Watchers: 14
