Whirr / WHIRR-168

Extend client side configurations and support core-site.xml, mapred-site.xml and hdfs-site.xml instead of hadoop-site.xml

    Details

    • Type: New Feature
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: 0.9.0
    • Component/s: core, service/hadoop
    • Labels: None
    • Environment: ec2

      Description

      We have a generated .whirr/<hadoop-cluster-name>/hadoop-proxy.sh which contains a hard-coded port value, 6666.

      In order to be able to start multiple clusters from the same console, I needed a simple mechanism to parametrize this port number.
      Therefore the client-side configuration needs to be extended, in the same way as WHIRR-55, so that 'whirr.hadoop-client.hadoop.socks.server' is configurable to something like
      whirr.hadoop-client.hadoop.socks.server=localhost:6667
      The default port will of course remain 6666.
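      For illustration (the file names here are hypothetical; the property line is the one proposed in this issue), two cluster definitions could then run their SOCKS proxies on different local ports:

      # cluster-a.properties (hypothetical) - relies on the default port 6666,
      # so no override is needed.
      whirr.cluster-name=hadoop-a

      # cluster-b.properties (hypothetical) - overrides the client-side SOCKS
      # address so both proxies can run from the same console.
      whirr.cluster-name=hadoop-b
      whirr.hadoop-client.hadoop.socks.server=localhost:6667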

      1. whirr-168-3.patch
        19 kB
        Tibor Kiss
      2. whirr-168-2.patch
        19 kB
        Tibor Kiss
      3. whirr-168-1.patch
        17 kB
        Tibor Kiss
      4. mapred-site.xml
        1 kB
        Tibor Kiss
      5. local-socks-proxy-address.patch
        6 kB
        Tibor Kiss
      6. integration-server-logs.tar.gz
        26 kB
        Tibor Kiss
      7. hdfs-site.xml
        0.5 kB
        Tibor Kiss
      8. core-site.xml
        1 kB
        Tibor Kiss

        Issue Links

          Activity

          David Rosenstrauch added a comment -

          It looks like this enhancement hasn't been released yet. Is there any way to make whirr set properties in the generated client-side hadoop-site.xml file?

          Andrei Savu made changes -
          Fix Version/s 0.9.0 [ 12319840 ]
          Fix Version/s 0.8.0 [ 12318880 ]
          Andrei Savu made changes -
          Fix Version/s 0.8.0 [ 12318880 ]
          Fix Version/s 0.7.0 [ 12317571 ]
          Andrei Savu added a comment -

          Moving to 0.8.0 - not a critical issue.

          Andrei Savu made changes -
          Link This issue relates to WHIRR-224 [ WHIRR-224 ]
          Tom White made changes -
          Fix Version/s 0.7.0 [ 12317571 ]
          Andrei Savu made changes -
          Fix Version/s 0.6.0 [ 12316468 ]
          Andrei Savu added a comment -

          Let's make this a priority for 0.7.0.

          Andrei Savu made changes -
          Link This issue blocks WHIRR-301 [ WHIRR-301 ]
          Andrei Savu made changes -
          Link This issue is related to WHIRR-294 [ WHIRR-294 ]
          Andrei Savu made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Fix Version/s 0.6.0 [ 12316468 ]
          Andrei Savu added a comment -

          I believe it would be useful to update & fix this patch for 0.6.0. I would use it for parallel test execution by selecting a random unused port for the SOCKS proxy.
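           A minimal standalone sketch (not Whirr code) of picking a random unused port as described above; the usual trick is to bind a ServerSocket to port 0 and read back the port the OS allocated:

           import java.io.IOException;
           import java.net.ServerSocket;

           public class FreePortFinder {
             // Bind to port 0 to let the OS pick any free ephemeral port.
             // Note: the port could in principle be taken again between
             // closing the socket and starting the SOCKS proxy.
             public static int findFreePort() throws IOException {
               try (ServerSocket socket = new ServerSocket(0)) {
                 return socket.getLocalPort();
               }
             }

             public static void main(String[] args) throws IOException {
               // e.g. whirr.hadoop-client.hadoop.socks.server=localhost:<port>
               System.out.println("localhost:" + findFreePort());
             }
           }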

          Tibor Kiss made changes -
          Attachment integration-server-logs.tar.gz [ 12471319 ]
          Tibor Kiss added a comment -

           This is the listing from HDFS:

          drwxrwxrwx   - root   supergroup          0 2011-02-17 19:44 /user/root
          -rw-r--r--   3 root supergroup          4 2011-02-17 19:44 /user/root/input
          drwxrwxrwx   - root supergroup          0 2011-02-17 19:44 /user/root/output
          drwxrwxrwx   - root supergroup          0 2011-02-17 19:44 /user/root/output/_logs
          drwxrwxrwx   - root supergroup          0 2011-02-17 19:44 /user/root/output/_logs/history
          -rw-r--r--   3 root supergroup      17117 2011-02-17 19:44 /user/root/output/_logs/history/ec2-75-101-227-207.compute-1.amazonaws.com_1297971833111_job_201102171943_0001_conf.xml
          -rw-r--r--   3 root supergroup          0 2011-02-17 19:44 /user/root/output/_logs/history/ec2-75-101-227-207.compute-1.amazonaws.com_1297971833111_job_201102171943_0001_root_NA
          drwxr-xr-x   - root supergroup          0 2011-02-17 19:44 /user/root/output/_temporary
          drwxr-xr-x   - root supergroup          0 2011-02-17 19:44 /user/root/output/_temporary/_attempt_201102171943_0001_r_000002_0
          -rw-r--r--   3 root supergroup          0 2011-02-17 19:44 /user/root/output/_temporary/_attempt_201102171943_0001_r_000002_0/part-00002
          -rw-r--r--   3 root supergroup          0 2011-02-17 19:44 /user/root/output/part-00000
          -rw-r--r--   3 root supergroup          0 2011-02-17 19:44 /user/root/output/part-00001
          

           On the web console, the job has 10 tasks, all of which terminated successfully.

           I also attached integration-server-logs.tar.gz.
           Perhaps somebody can help by looking into these logs, because I didn't observe the problem.

          Tibor Kiss added a comment -

           I ran an integration test just for Apache Hadoop, but it still fails here.

          -------------------------------------------------------------------------------
          Test set: org.apache.whirr.service.hadoop.integration.HadoopServiceTest
          -------------------------------------------------------------------------------
          Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 611.551 sec <<< FAILURE!
          test(org.apache.whirr.service.hadoop.integration.HadoopServiceTest)  Time elapsed: 112.188 sec  <<< FAILURE!
          junit.framework.ComparisonFailure: null expected:<a     1> but was:<null>
                  at junit.framework.Assert.assertEquals(Assert.java:81)
                  at junit.framework.Assert.assertEquals(Assert.java:87)
                  at org.apache.whirr.service.hadoop.integration.HadoopServiceTest.test(HadoopServiceTest.java:87)
          
          

           The nodes started nicely; I logged in and checked the namenode and tasktracker logs.
           A job ran apparently without problems. Unfortunately this early shutdown prevented a more detailed investigation.
           Temporarily I need to get rid of this early shutdown.

          Tom White added a comment -

          That looks like the problem - do you want to test it works?

           Thinking about this more, should we support configuration by users in standard Hadoop XML config files? Doing so would avoid problems with quoting that users might not be aware of. This could be in addition to the way we currently do it (so I'm not proposing we don't commit this patch). Thoughts?

          Tibor Kiss added a comment -

           The fix is straightforward.
           In whirr-hadoop-default.properties we have to escape the list separator character, like the following:

          hadoop-client-common.hadoop.job.ugi=root\,root
          
          Tibor Kiss added a comment -

           Maybe this is the problem: if I load that hadoop.job.ugi, the Configuration returns an ArrayList.

           For example:

           // subset() strips the prefix; getProperty returns a List here because
           // ',' is the default list delimiter in Commons Configuration.
           Configuration conf = new PropertiesConfiguration(WHIRR_HADOOP_DEFAULT_PROPERTIES);
           Configuration clientConf = conf.subset("hadoop-client-common");
           List<String> jobUgi = (List<String>) clientConf.getProperty("hadoop.job.ugi");
           System.out.println(">" + jobUgi + "<");
          

           This will print:

          >[root, root]<
          

           This is the problem!
           Is there an escape character for ',' so that we don't get an ArrayList?
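           For reference, a minimal standalone sketch (with a hypothetical test.properties file) of the Commons Configuration behaviour in question: ',' is the default list delimiter, and escaping it keeps the value a single String.

           import org.apache.commons.configuration.PropertiesConfiguration;

           public class ListDelimiterDemo {
             public static void main(String[] args) throws Exception {
               // test.properties (hypothetical) contains:
               //   unescaped.ugi=root,root
               //   escaped.ugi=root\,root
               PropertiesConfiguration conf = new PropertiesConfiguration("test.properties");

               // Unescaped comma: the value is split on the default list
               // delimiter, so getProperty returns a List.
               System.out.println(conf.getProperty("unescaped.ugi")); // [root, root]

               // Escaped comma: the value stays a single String.
               System.out.println(conf.getProperty("escaped.ugi"));   // root,root
             }
           }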

          Tibor Kiss made changes -
          Attachment core-site.xml [ 12471305 ]
          Attachment hdfs-site.xml [ 12471306 ]
          Attachment mapred-site.xml [ 12471307 ]
          Tibor Kiss added a comment -

           This was the old client-side properties setup:

          -  private Properties createClientSideProperties(ClusterSpec clusterSpec,
          -      InetAddress namenode, InetAddress jobtracker) throws IOException {
          -    Properties config = new Properties();
          -    config.setProperty("hadoop.job.ugi", "root,root");
          -    config.setProperty("fs.default.name", String.format("hdfs://%s:8020/", DnsUtil.resolveAddress(namenode.getHostAddress())));
          -    config.setProperty("mapred.job.tracker", String.format("%s:8021", DnsUtil.resolveAddress(jobtracker.getHostAddress())));
          -    config.setProperty("hadoop.socks.server", "localhost:6666");
          -    config.setProperty("hadoop.rpc.socket.factory.class.default", "org.apache.hadoop.net.SocksSocketFactory");
          -    if (clusterSpec.getProvider().endsWith("ec2")) {
          -      config.setProperty("fs.s3.awsAccessKeyId", clusterSpec.getIdentity());
          -      config.setProperty("fs.s3.awsSecretAccessKey", clusterSpec.getCredential());
          -      config.setProperty("fs.s3n.awsAccessKeyId", clusterSpec.getIdentity());
          -      config.setProperty("fs.s3n.awsSecretAccessKey", clusterSpec.getCredential());
          -    }
          

           and now the 3 static properties are set by:

          +# Client Common
          +hadoop-client-common.hadoop.job.ugi=root,root
          +hadoop-client-common.hadoop.rpc.socket.factory.class.default=org.apache.hadoop.net.SocksSocketFactory
          +hadoop-client-common.hadoop.socks.server=localhost:6666
          

           This generated the following in the core-site.xml file:

            <property>
              <name>hadoop.job.ugi</name>
              <value>[root, root]</value>
            </property>
          

           I'm not sure about the "[" "]" symbols. What do you think?

           I attached the 3 generated *-site.xml files.

          Andrei Savu added a comment -

           I'm seeing the same failure. I suspect this is related to the following line in services/hadoop/.../whirr-hadoop-default.properties:

          hadoop-client-common.hadoop.job.ugi=root,root
          

          Hadoop integration tests are still failing but with a different error:

          -------------------------------------------------------------------------------
          Test set: org.apache.whirr.service.hadoop.integration.HadoopServiceTest
          -------------------------------------------------------------------------------
          Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 553.398 sec <<< FAILURE!
          test(org.apache.whirr.service.hadoop.integration.HadoopServiceTest)  Time elapsed: 113.124 sec  <<< FAILURE!
          junit.framework.ComparisonFailure: null expected:<a 1> but was:<null>
              at junit.framework.Assert.assertEquals(Assert.java:81)
              at junit.framework.Assert.assertEquals(Assert.java:87)
              at org.apache.whirr.service.hadoop.integration.HadoopServiceTest.test(HadoopServiceTest.java:87)
          
          Tom White added a comment -

          The code looks good, but when I tried running the Hadoop integration test I got:

          java.io.IOException: Failed to get the current user's information.
                  at org.apache.hadoop.mapred.JobClient.getUGI(JobClient.java:681)
                  at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:429)
                  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:423)
                  at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:410)
                  at org.apache.whirr.service.hadoop.integration.HadoopServiceController.startup(HadoopServiceController.java:87)
                  at org.apache.whirr.service.hadoop.integration.HadoopServiceController.ensureClusterRunning(HadoopServiceController.java:66)
                  at org.apache.whirr.service.hadoop.integration.HadoopServiceTest.setUp(HadoopServiceTest.java:56)
                  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                  at java.lang.reflect.Method.invoke(Method.java:597)
                  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
                  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
                  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
                  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
                  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
                  at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
                  at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
                  at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:115)
                  at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:102)
                  at org.apache.maven.surefire.Surefire.run(Surefire.java:180)
                  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                  at java.lang.reflect.Method.invoke(Method.java:597)
                  at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:350)
                  at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1021)
          Caused by: javax.security.auth.login.LoginException: Login failed: Parameter does contain at least one user name and one group name
                  at org.apache.hadoop.security.UnixUserGroupInformation.readFromConf(UnixUserGroupInformation.java:219)
                  at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:298)
                  at org.apache.hadoop.mapred.JobClient.getUGI(JobClient.java:679)
          

          Any ideas what this could be?

          Andrei Savu added a comment -

          +1

          Tibor Kiss made changes -
          Attachment whirr-168-3.patch [ 12471164 ]
          Tibor Kiss added a comment -

           The two issues mentioned by Tom have been fixed.
          The new patch is the whirr-168-3.patch file.

          Tom White added a comment -

          This looks good, +1

           • In HadoopConfigurationConverter#asProperties we shouldn't use Properties#put since it allows you to put an arbitrary object into a Properties instance, which is unpredictable (see the javadoc for Properties, and the short illustration after this list).
          • Why is HadoopNameNodeClusterActionHandler#getConfigDir changed to return a String rather than a File object?
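           For context on the first point, a minimal standalone sketch (not from the patch) of why Properties#put is risky:

           import java.util.Arrays;
           import java.util.Properties;

           public class PropertiesPutPitfall {
             public static void main(String[] args) {
               Properties props = new Properties();

               // put() is inherited from Hashtable and accepts any Object value.
               props.put("hadoop.job.ugi", Arrays.asList("root", "root"));

               // getProperty() only returns String values, so the non-String
               // value silently "disappears": this prints null.
               System.out.println(props.getProperty("hadoop.job.ugi"));

               // setProperty() restricts values to Strings at compile time.
               props.setProperty("hadoop.job.ugi", "root,root");
               System.out.println(props.getProperty("hadoop.job.ugi")); // root,root
             }
           }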
          Tibor Kiss made changes -
          Attachment whirr-168-2.patch [ 12471022 ]
           Tibor Kiss added a comment (edited) -

           whirr-168-2.patch allows adding the hadoop-client-common, hadoop-client-hdfs and hadoop-client-mapreduce groups of properties only to the client {core|hdfs|mapred}-site.xml files.

           I didn't run the integration test for this patch; I am willing to run it once the patch has been reviewed and we see that no further modifications are necessary.

          Tibor Kiss added a comment -

           In whirr-hadoop-default.properties, instead of the hadoop-client prefix we may have the following more specific prefixes:

           hadoop-client-common.
           hadoop-client-hdfs.
           hadoop-client-mapreduce.

           In this way we can add extra params to each client {core|hdfs|mapred}-site.xml.

          Tibor Kiss made changes -
          Attachment whirr-168-1.patch [ 12470951 ]
           Tibor Kiss added a comment (edited) -

           I extended HadoopConfigurationBuilder to support the 'hadoop-client' prefix too.
           In HadoopNameNodeClusterActionHandler#createClientSideProperties I then made the changes to support core-site.xml, mapred-site.xml and hdfs-site.xml instead of the deprecated hadoop-site.xml.

           In HadoopProxy#getProxyCommand I am using cluster.getConfiguration().getProperty("hadoop.socks.server") instead of the hardcoded value.

          Tom, could you review the patch? Especially the configuration compositions in HadoopNameNodeClusterActionHandler#createClientSideProperties.

          Tibor Kiss made changes -
           Summary: "Add a new optional c parameter for being able to configure the port of socks connection." → "Extend client side configurations and support core-site.xml, mapred-site.xml and hdfs-site.xml instead of hadoop-site.xml"
           Description (old value): We have a generated .whirr/<hadoop-cluster-name>/hadoop-proxy.sh which contains a hard coded port value, the 6666.

          In order to be able to start multiple clusters from the same console I needed a simple mechanism to be able to parametrize this port number.
          Therefore I made a patch which adds the possibility to set this 'whirr.local-socks-proxy-address' to something like
          whirr.local-socks-proxy-address=localhost:6666
          Instead of configuring the port, we are able to configure the address which contains the port.
          (also for the sourcecode, it looks much better to not have such a hardcoded value.)

          In order to run multiple clusters you only need to override this paramter knowing that the default value is localhost:6666
           (New value:) We have a generated .whirr/<hadoop-cluster-name>/hadoop-proxy.sh which contains a hard coded port value, the 6666.

          In order to be able to start multiple clusters from the same console I needed a simple mechanism to be able to parametrize this port number.
          Therefore is required to extend client side configurations, in the same way as WHIRR-55, to be configurable a 'whirr.hadoop-client.hadoop.socks.server' to something like
          whirr.hadoop-client.hadoop.socks.server=localhost:6667
          The default port will remain of course the 6666.
          Tibor Kiss added a comment -

          > We should pull up getConfiguration() to ClusterActionHandlerSupport - with an extra default service Configuration that is passed by the service. However, I don't think that's needed in this patch, so we can do it elsewhere.

           HBaseClusterActionHandler#getConfiguration() takes a different approach than WHIRR-55, because in HBase there is only one configuration group, one file. In WHIRR-55 we are able to define defaults for many files. Eventually HadoopConfigurationBuilder and HadoopConfigurationConverter have to be refactored into a Hadoop-specific part, with the common part moved into the hadoop-core module (at least I am trying that out locally).
           (I expect that I will soon rename the subject of this JIRA issue.)

          Tibor Kiss added a comment -

           In the case of Hadoop, instead of having hadoop-site.xml we need core-site.xml, mapred-site.xml and hdfs-site.xml files. As I mentioned in my last comment, the hadoop command-line console complains about the deprecation of hadoop-site.xml.

           That is why we should reuse the current HadoopConfigurationBuilder, just making some small changes so that it can also return a Configuration:

          --- a/services/hadoop/src/main/java/org/apache/whirr/service/hadoop/HadoopConfigurationBuilder.java
          +++ b/services/hadoop/src/main/java/org/apache/whirr/service/hadoop/HadoopConfigurationBuilder.java
          @@ -51,23 +51,44 @@ public class HadoopConfigurationBuilder {
           
             public static Statement buildCommon(String path, ClusterSpec clusterSpec,
                 Cluster cluster) throws ConfigurationException, IOException {
          -    Configuration config = buildCommonConfiguration(clusterSpec, cluster,
          -        new PropertiesConfiguration(WHIRR_HADOOP_DEFAULT_PROPERTIES));
          -    return HadoopConfigurationConverter.asCreateFileStatement(path, config);
          +    return HadoopConfigurationConverter.asCreateFileStatement(path, 
          +        buildCommonConfiguration(clusterSpec, cluster));
             }
             
             public static Statement buildHdfs(String path, ClusterSpec clusterSpec,
                 Cluster cluster) throws ConfigurationException, IOException {
          -    Configuration config = buildHdfsConfiguration(clusterSpec, cluster,
          -        new PropertiesConfiguration(WHIRR_HADOOP_DEFAULT_PROPERTIES));
          -    return HadoopConfigurationConverter.asCreateFileStatement(path, config);
          +    return HadoopConfigurationConverter.asCreateFileStatement(path, 
          +        buildHdfsConfiguration(clusterSpec, cluster));
             }
             
             public static Statement buildMapReduce(String path, ClusterSpec clusterSpec,
                 Cluster cluster) throws ConfigurationException, IOException {
          -    Configuration config = buildMapReduceConfiguration(clusterSpec, cluster,
          +    return HadoopConfigurationConverter.asCreateFileStatement(path, 
          +        buildMapReduceConfiguration(clusterSpec, cluster));
          +  }
          +  
          +  public static Configuration buildCommonConfiguration(ClusterSpec clusterSpec,
          +      Cluster cluster) throws ConfigurationException, IOException {
          +    return buildCommonConfiguration(clusterSpec, cluster,
          +        new PropertiesConfiguration(WHIRR_HADOOP_DEFAULT_PROPERTIES));
          +  }
          +
          +  public static Configuration buildHdfsConfiguration(ClusterSpec clusterSpec,
          +      Cluster cluster) throws ConfigurationException, IOException {
          +    return buildHdfsConfiguration(clusterSpec, cluster,
          +        new PropertiesConfiguration(WHIRR_HADOOP_DEFAULT_PROPERTIES));
          +  }
          +  
          +  public static Configuration buildMapReduceConfiguration(ClusterSpec clusterSpec,
          +      Cluster cluster) throws ConfigurationException, IOException {
          +    return buildMapReduceConfiguration(clusterSpec, cluster,
          +        new PropertiesConfiguration(WHIRR_HADOOP_DEFAULT_PROPERTIES));
          +  }
          +  
          +  public static Configuration buildClientConfiguration(ClusterSpec clusterSpec,
          +      Cluster cluster) throws ConfigurationException, IOException {
          +    return buildClientConfiguration(clusterSpec, cluster,
                   new PropertiesConfiguration(WHIRR_HADOOP_DEFAULT_PROPERTIES));
          -    return HadoopConfigurationConverter.asCreateFileStatement(path, config);
             }
             
             @VisibleForTesting
          @@ -102,4 +123,9 @@ public class HadoopConfigurationBuilder {
               return config;
             }
           
          +  @VisibleForTesting
          +  static Configuration buildClientConfiguration(ClusterSpec clusterSpec,
          +      Cluster cluster, Configuration defaults) throws ConfigurationException {
          +    return build(clusterSpec, cluster, defaults, "hadoop-client");
          +  }
           }
          

           Then in HadoopNameNodeClusterActionHandler#afterConfigure we can access them:

          Configuration coreSiteConf = buildCommonConfiguration(clusterSpec, cluster);
          Configuration hdfsSiteConf = buildHdfsConfiguration(clusterSpec, cluster);
          Configuration mapredSiteConf = buildMapReduceConfiguration(clusterSpec, cluster);
          Configuration clientSiteConf = buildClientConfiguration(clusterSpec, cluster);
          

           Then clientSiteConf has to be combined with coreSiteConf into a composite configuration (a sketch of such a composition follows at the end of this comment).
           clientSiteConf is similar to the HBaseClusterActionHandler#getConfiguration() composition, but in our case clientSiteConf has to be composed with coreSiteConf too, and we end up with a core-site.xml which contains everything we have on the cluster instances plus, in addition, what we only need on the client side.

           The question is that hdfs-site.xml and mapred-site.xml will be the same as on the cluster instances; with this change the client side will probably have every property the cluster instances have, plus some more. Is this a good approach?
           Currently on the client side we have only a few values; with this approach we increase the number of properties a little. Is this a problem?
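           For what it's worth, a minimal sketch (not part of any patch here) of such a composition with Commons Configuration; in a CompositeConfiguration the first configuration added wins on key conflicts, so the client-side overrides go in first:

           import org.apache.commons.configuration.CompositeConfiguration;
           import org.apache.commons.configuration.Configuration;

           // Hypothetical helper: combine client-side overrides with the
           // cluster-wide common configuration to produce the content of the
           // client's core-site.xml.
           public class ClientConfigComposer {
             public static Configuration compose(Configuration clientSiteConf,
                                                 Configuration coreSiteConf) {
               CompositeConfiguration composite = new CompositeConfiguration();
               // First added wins, so client-only properties override the
               // cluster-wide ones.
               composite.addConfiguration(clientSiteConf);
               composite.addConfiguration(coreSiteConf);
               return composite;
             }
           }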

          Tom White added a comment -

          > I think so, because on the server side we don't want to add a hadoop.socks.server property.

          Yes, I think that's right (I had mistakenly thought that it was also needed server side when I made my comment above). For client side properties, we have a convention of whirr.<service-name> (see HBaseConstants for some examples), so you should follow that for the socks address.

          The NPE is caused because whirr-hadoop-default.properties is not being read in the handler. HBase has some code in HBaseClusterActionHandler#getConfiguration() to combine the cluster spec with the service defaults. I think something similar would work here. In fact, this is exactly what is needed for WHIRR-222, and probably other services. We should pull up getConfiguration() to ClusterActionHandlerSupport - with an extra default service Configuration that is passed by the service. However, I don't think that's needed in this patch, so we can do it elsewhere.
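           To spell out the mechanics, a minimal standalone sketch (plain JDK, no Whirr code) of why that lookup produces the NPE: Configuration#getString returns null for a missing key, and Properties#setProperty rejects null values.

           import java.util.Properties;

           public class NullPropertyNpe {
             public static void main(String[] args) {
               Properties config = new Properties();

               // Simulates clusterSpec.getConfiguration().getString("hadoop.socks.server")
               // when whirr-hadoop-default.properties was never loaded: the key
               // is absent, so the lookup yields null.
               String socksServer = null;

               // Properties is backed by Hashtable, which rejects null values,
               // so this throws java.lang.NullPointerException at Hashtable.put.
               config.setProperty("hadoop.socks.server", socksServer);
             }
           }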

          Tibor Kiss added a comment -

           Looking into the details of HadoopConfigurationBuilder, it would be nice to refactor the client-side configuration generation in HadoopNameNodeClusterActionHandler in a way that gets rid of hadoop-site.xml, because the hadoop client also complains:

           WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
          
           Tibor Kiss added a comment (edited) -

           Hi Tom.
           With the current trunk, which includes WHIRR-55, I tried adding the property hadoop-common.hadoop.socks.server to whirr-hadoop-default.properties. Then I changed the line
           at org.apache.whirr.service.hadoop.HadoopNameNodeClusterActionHandler.createClientSideProperties(HadoopNameNodeClusterActionHandler.java:153)
           to

          config.setProperty("hadoop.socks.server", clusterSpec.getConfiguration().getString("hadoop.socks.server"));
          

           I see on the starting nodes that the new default value is added to hadoop-site.xml; unfortunately, when createClientSideProperties() is about to write the hadoop-proxy.sh file, the integration test fails with:

          -------------------------------------------------------------------------------
          Test set: org.apache.whirr.service.hadoop.integration.HadoopServiceTest
          -------------------------------------------------------------------------------
          Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 549.834 sec <<< FAILURE!
          org.apache.whirr.service.hadoop.integration.HadoopServiceTest  Time elapsed: 0 sec  <<< ERROR!
          java.lang.NullPointerException
          	at java.util.Hashtable.put(Hashtable.java:394)
          	at java.util.Properties.setProperty(Properties.java:143)
          	at org.apache.whirr.service.hadoop.HadoopNameNodeClusterActionHandler.createClientSideProperties(HadoopNameNodeClusterActionHandler.java:153)
          	at org.apache.whirr.service.hadoop.HadoopNameNodeClusterActionHandler.afterConfigure(HadoopNameNodeClusterActionHandler.java:141)
          	at org.apache.whirr.service.ClusterActionHandlerSupport.afterAction(ClusterActionHandlerSupport.java:48)
          	at org.apache.whirr.cluster.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:87)
          	at org.apache.whirr.service.Service.launchCluster(Service.java:79)
          	at org.apache.whirr.service.hadoop.integration.HadoopServiceController.startup(HadoopServiceController.java:81)
          	at org.apache.whirr.service.hadoop.integration.HadoopServiceController.ensureClusterRunning(HadoopServiceController.java:66)
          	at org.apache.whirr.service.hadoop.integration.HadoopServiceTest.setUp(HadoopServiceTest.java:56)
          

           After digging into how HadoopConfigurationBuilder works, I found that currently it is called only from:

          org.apache.whirr.service.hadoop.HadoopDataNodeClusterActionHandler.beforeConfigure(ClusterActionEvent)
          

           and that a buildClient(String path, ClusterSpec clusterSpec, Cluster cluster) method is completely missing from HadoopConfigurationBuilder; we only have buildCommon, buildHdfs and buildMapReduce.

           Similar to HadoopConfigurationBuilderTest, we would have to implement that functionality for the client. In other words, should the current "hadoop-common" prefix from whirr-hadoop-default.properties be separated into a "hadoop-client" prefix? I think so, because on the server side we don't want to add a hadoop.socks.server property.
           Am I right?

          Tom White added a comment -

          Sorry, I meant WHIRR-55 in the previous comment.

          Tom White added a comment -

          It would definitely be useful to configure the port number.

          Thinking ahead to HADOOP-55 (which I'm still testing), if we call this property hadoop-common.hadoop.socks.server then it will work with the way things are done there. In particular, you don't need to add the localSocksProxyAddress property to ClusterSpec, since you can just get it from ClusterSpec.getConfiguration(). Does this make sense?

          Andrei Savu added a comment -

          +1 I have only reviewed the patch and run the unit tests.

          Tibor Kiss made changes -
          Assignee Tibor Kiss [ tibor.kiss ]
          Tibor Kiss made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Tibor Kiss added a comment -

           I attached a patch which adds this feature. It was tested in the cloud (I am using it myself to control 2 clusters) and I also ran the integration tests.

          Tibor Kiss made changes -
          Attachment local-socks-proxy-address.patch [ 12466642 ]
          Tibor Kiss made changes -
          Field Original Value New Value
          Priority Major [ 3 ] Minor [ 4 ]
          Tibor Kiss created issue -

             People

             • Assignee: Tibor Kiss
             • Reporter: Tibor Kiss
             • Votes: 0
             • Watchers: 2