WHIRR-612: CDH4 can be installed on Ubuntu now as well as CentOS

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.8.0
    • Component/s: service/cdh
    • Labels:
      None

      Description

      CDH4 beta 1 was only available on CentOS, but from beta 2 onward, CDH4 has been available on Ubuntu et al. So we should remove the CentOS hardcoding in tests and recipes.

      1. cdh-yarn-cloudservers-us.txt
        77 kB
        Adrian Cole
      2. cdh-yarn-rackspace-cloudservers-us.txt
        95 kB
        Adrian Cole
      3. WHIRR-612.patch
        7 kB
        Andrew Bayer
      4. WHIRR-612.patch
        7 kB
        Andrew Bayer
      5. WHIRR-612.patch
        11 kB
        Andrew Bayer
      6. WHIRR-612.patch
        6 kB
        Andrew Bayer
      7. WHIRR-612.patch
        4 kB
        Andrew Bayer

          Activity

          Andrew Bayer created issue -
          Andrew Bayer added a comment -

          Patch with tests and recipe changed.

          Andrew Bayer made changes -
          Field Original Value New Value
          Attachment WHIRR-612.patch [ 12539114 ]
          Andrew Bayer made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Adrian Cole added a comment -

          Looks like trouble in the JDK install script, at least.

          Adrian Cole made changes -
          Attachment cdh-yarn-cloudservers-us.txt [ 12539208 ]
          Attachment cdh-yarn-rackspace-cloudservers-us.txt [ 12539209 ]
          Adrian Cole made changes -
          Link This issue is blocked by WHIRR-610 [ WHIRR-610 ]
          Adrian Cole added a comment -

          I suspect this is related to the failure

          Adrian Cole made changes -
          Link This issue is blocked by WHIRR-580 [ WHIRR-580 ]
          Adrian Cole added a comment -

          retry_yum is also a failure blocking this:

          + retry_yum install -y hadoop-yarn-nodemanager
          /tmp/configure-hadoop-datanode_yarn-nodemanager/configure-hadoop-datanode_yarn-nodemanager.sh: line 222: retry_yum: command not found
          + service hadoop-yarn-nodemanager start
          hadoop-yarn-nodemanager: unrecognized service

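          For context, retry_yum is a small helper (presumably the one being ported in WHIRR-528) that wraps yum in a retry loop so that transient repository or network errors on freshly provisioned instances don't abort the install. A minimal sketch of such a helper, for illustration only; the actual function in Whirr may differ:

          function retry_yum() {
            # Retry yum up to three times before giving up; transient mirror
            # errors are common on freshly provisioned cloud instances.
            local -r max_attempts=3
            local attempt=1
            until yum "$@"; do
              if [ "$attempt" -ge "$max_attempts" ]; then
                echo "yum $* failed after $max_attempts attempts" >&2
                return 1
              fi
              attempt=$((attempt + 1))
              sleep 10
            done
          }

          Until a function like this is defined in the generated configure script, the "command not found" error above is exactly what the script produces, and the package (and hence the service) never gets installed.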
          Adrian Cole made changes -
          Link This issue is blocked by WHIRR-580 [ WHIRR-580 ]
          Adrian Cole made changes -
          Link This issue is blocked by WHIRR-528 [ WHIRR-528 ]
          Adrian Cole added a comment -

          Looks like YARN is using retry_yum, which hasn't yet been ported to trunk!

          Andrew Bayer added a comment -

          So are we good now that WHIRR-528 is in?

          Adrian Cole added a comment -

          So, one last glitch: while I can start a cluster with CdhYarnServiceTest, it looks like our test code is using the wrong client version:

          org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate with client version 4
          at org.apache.hadoop.ipc.Client.call(Client.java:1066)
          at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
          at $Proxy90.getProtocolVersion(Unknown Source)
          at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
          at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
          at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
          at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
          at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
          at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
          at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
          at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
          at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
          at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
          at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:122)
          at org.apache.whirr.service.cdh.integration.CdhHadoopServiceTest.checkHadoop(CdhHadoopServiceTest.java:127)
          at org.apache.whirr.service.cdh.integration.CdhHadoopServiceTest.testJobExecution(CdhHadoopServiceTest.java:123)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
          at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
          at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
          at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
          at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)

          Andrew Bayer added a comment -

          Tom?

          Adrian Cole added a comment -

          I also get this from CdhHadoopServiceTest (running from cloudservers-uk, but that shouldn't matter)

          Tom White added a comment -

          I tried to reproduce this but I'm not able to provision instances on the connection I'm on for some reason.

          The dependencies look right on the client side from doing a 'mvn dependency:tree | grep hadoop' - or maybe we need to upgrade to 2.0.0-cdh4.0.1.

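          To spell out the check Tom mentions (run from the Whirr source tree):

          # Inspect which Hadoop client artifacts end up on the test classpath.
          # After bumping the CDH dependency, this should list 2.0.0-cdh4.0.1
          # artifacts rather than the older client that produced the
          # "IPC version 7 cannot communicate with client version 4" error above.
          mvn dependency:tree | grep hadoop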
          Andrew Bayer added a comment -

          Yup, that was needed. Fixed patch up.

          Andrew Bayer made changes -
          Attachment WHIRR-612.patch [ 12540954 ]
          Andrew Bayer added a comment -

          Found another thing I'd like tweaked - we don't have 32-bit CDH4 packages anywhere but CentOS 6, so if we're not explicitly supplying an image ID, we should have the auto-logic choose a 64-bit image. So I added an option in ClusterSpec to force 64-bit, with a corresponding change in TemplateBuilderStrategy, and the related test properties files have the field set now. I also fixed a boo-boo in HBase package installation.

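          In recipe terms the intent is roughly the following; the flag name below is a placeholder, since the real key is whatever the attached patch adds to ClusterSpec (the updated test properties files show the actual name):

          # whirr.image-id is left unset, so Whirr's TemplateBuilderStrategy
          # chooses the image automatically; the new option restricts that
          # choice to 64-bit images, because 32-bit CDH4 packages only exist
          # for CentOS 6.
          # Placeholder key; see the attached patch for the real property name:
          whirr.prefer-64bit-image=true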
          Andrew Bayer made changes -
          Attachment WHIRR-612.patch [ 12540961 ]
          Tom White added a comment -

          When running CdhHadoopServiceTest you need to pass in -Dmr1=true since its test classpath is different. E.g.

          mvn verify -Pintegration -DargLine="-Dwhirr.test.provider=aws-ec2 \
          -Dwhirr.test.identity=$AWS_ACCESS_KEY_ID \
          -Dwhirr.test.credential=$AWS_SECRET_ACCESS_KEY \
          -Dconfig=.whirr-test.properties" -Dit.test=CdhHadoopServiceTest \
          -Dmr1=true
          

          Andrew has had CdhYarnServiceTest passing on AWS but there was a SOCKS error on Rackspace.

          Tom White added a comment -

          Not sure if it's the problem with the YARN test yet, but whirr.env.mapreduce_version should be replaced with whirr.env.MAPREDUCE_VERSION.

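          In a CDH recipe the rename looks like this (value illustrative); presumably it matters because the whirr.env.* properties are exported to the install scripts as environment variables, and those scripts read $MAPREDUCE_VERSION, so a lowercase key is never seen:

          # Before (the scripts never pick this up):
          # whirr.env.mapreduce_version=2
          # After:
          whirr.env.MAPREDUCE_VERSION=2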
          Adrian Cole added a comment -

          agreed

          Tom White added a comment -

          The YARN test is failing on Rackspace with

          2012-08-15 12:25:24,529 INFO  ipc.Client (Client.java:handleConnectionFailure(683)) - Retrying connect to server: 67-207-153-65.static.cloud-ips.com/67.207.153.65:8040. Already tried 9 time(s).
          java.lang.reflect.UndeclaredThrowableException
          	at org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:135)
          	at org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getNewApplication(ClientRMProtocolPBClientImpl.java:134)
          	at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:181)
          	at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:214)
          	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:338)
          	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1226)
          	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1223)
          	at java.security.AccessController.doPrivileged(Native Method)
          	at javax.security.auth.Subject.doAs(Subject.java:396)
          	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
          	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1223)
          	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1244)
          	at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          	at java.lang.reflect.Method.invoke(Method.java:597)
          	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
          	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
          	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          	at java.lang.reflect.Method.invoke(Method.java:597)
          	at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
          Caused by: com.google.protobuf.ServiceException: java.io.IOException: Failed on local exception: java.net.SocketException: Malformed reply from SOCKS server; Host Details : local host is: "Clouderas-MacBook-Pro-3.local/192.168.0.188"; destination host is: "67-207-153-65.static.cloud-ips.com":8040; 
          	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:187)
          	at $Proxy10.getNewApplication(Unknown Source)
          	at org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getNewApplication(ClientRMProtocolPBClientImpl.java:132)
          	... 23 more
          Caused by: java.io.IOException: Failed on local exception: java.net.SocketException: Malformed reply from SOCKS server; Host Details : local host is: "Clouderas-MacBook-Pro-3.local/192.168.0.188"; destination host is: "67-207-153-65.static.cloud-ips.com":8040; 
          	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
          	at org.apache.hadoop.ipc.Client.call(Client.java:1165)
          	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:184)
          	... 25 more
          Caused by: java.net.SocketException: Malformed reply from SOCKS server
          	at java.net.SocksSocketImpl.readSocksReply(SocksSocketImpl.java:147)
          	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:538)
          	at java.net.Socket.connect(Socket.java:529)
          	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:522)
          	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
          	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:472)
          	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:566)
          	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:215)
          	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1271)
          	at org.apache.hadoop.ipc.Client.call(Client.java:1141)
          	... 26 more
          

          Port 8040 is the Resource Manager address, which the client talks to. The problem is that the client tries to talk to the public hostname (e.g. 67-207-153-65.static.cloud-ips.com), which is resolved on the head node (over the SSH SOCKS tunnel) to the public IP address 67.207.153.65. However, the localizer is only listening on the private address, so we get a connection refused.

          In early versions of YARN the RM would listen on all interfaces, however MAPREDUCE-4163 changed the behaviour to listen on a single interface.

          I think this works on AWS because the public hostname is resolved to the private IP, since the resolution happens on the cluster. Rackspace doesn't have this behaviour, so it fails.

          Since this is a limitation in YARN (and it cannot be overridden, as far as I can see) I think we should ship 0.8.0 with this known issue, while we work out how to get YARN to work on Rackspace for a later release. (This will probably require fixes in YARN and Whirr.) We should commit this patch along with the whirr.env.MAPREDUCE_VERSION thing. Does that sound reasonable?

          Tom White added a comment -

          Hmm, I may be wrong about not being able to override the bind address. Looking into it more.

          Andrew Bayer added a comment -

          Final patch.

          Andrew Bayer made changes -
          Attachment WHIRR-612.patch [ 12541098 ]
          Andrew Bayer added a comment -

          Remember how I said "final patch"? I lied. This one has s/mapreduce_version/MAPREDUCE_VERSION/ too.

          Andrew Bayer made changes -
          Attachment WHIRR-612.patch [ 12541105 ]
          Tom White added a comment -

          +1. I'll create another JIRA for the YARN fixes.

          Andrew Bayer added a comment -

          Committed.

          Andrew Bayer made changes -
          Status Patch Available [ 10002 ] Resolved [ 5 ]
          Resolution Fixed [ 1 ]
          Tom White added a comment -

          I opened WHIRR-629 for the YARN on Rackspace bug.


            People

            • Assignee: Andrew Bayer
            • Reporter: Andrew Bayer
            • Votes: 0
            • Watchers: 3
