Bigtop / BIGTOP-1178

Clusterize the puppetized vagrant deployer.

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.7.0
    • Fix Version/s: 0.8.0
    • Component/s: Deployment
    • Labels:
      None

      Description

      Now that BIGTOP-1171 is nearing completion (puppetization of vagrant), we can move towards setting up a real hadoop "cluster".

      The goal here will be to allow us to do more with Vagrant than just spinning up a single VM. To scale the Vagrantfile, we can do something like:

       if ARGV[1] == 'cluster'
           cluster = true
       else
           cluster = false
       end
      

      I think the Vagrantfile can look something like this:

      Vagrant.configure(VAGRANT_API_VERSION) do |config|

          # Head node
          config.vm.define :bigtop1 do |bigtop1|
          end

          # If "cluster" then add a bunch of slaves also
          if cluster == true
              # Slave nodes ...
              config.vm.define :bigtop2 do |bigtop2|
              end
              config.vm.define :bigtop3 do |bigtop3|
              end
              config.vm.define :bigtop4 do |bigtop4|
              end
          end
      end
      
      
      1. BIGTOP-1178.1.patch
        8 kB
        Evans Ye
      2. BIGTOP-1178.2.patch
        13 kB
        Evans Ye
      3. BIGTOP-1178.3.patch
        13 kB
        Evans Ye

        Activity

        jay vyas created issue -
        jay vyas made changes -
        Description edited
        jay vyas made changes -
        Description edited
        Evans Ye added a comment -

        Hello jay, me again.
        Are you currently working on this?
        I've been trying to develop one that meets the criteria you listed.
        Either way, I can still run the tests I mentioned before.

        Evans Ye added a comment -

        Good news!
        I've worked out the first version of the patch.
        In this patch I use Vagrant Host Manager to manage the VMs' /etc/hosts, ensuring that every node in the cluster can resolve the others.
        A simple shell wrapper, startup.sh, spins up the VM(s) and selects the deployment mode, either standalone or cluster.
        I've also written the usage steps in a README.md shipped with the patch.
        Please share any suggestions or feedback that come to mind, thanks.
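
        For readers unfamiliar with the plugin, below is a minimal sketch of how vagrant-hostmanager is typically wired into a Vagrantfile - an illustration, not the exact code in the patch. The hostnames and private IPs mirror the ones visible in the test log later in this issue:

        Vagrant.configure("2") do |config|
          # vagrant-hostmanager keeps /etc/hosts in sync on every guest,
          # so bigtop1..bigtop3 can resolve each other by hostname.
          config.hostmanager.enabled = true
          config.hostmanager.manage_host = false   # leave the host machine's /etc/hosts alone
          config.hostmanager.ignore_private_ip = false

          (1..3).each do |i|
            config.vm.define "bigtop#{i}" do |node|
              node.vm.hostname = "bigtop#{i}.vagrant"
              node.vm.network :private_network, ip: "10.10.10.#{11 + i}"
            end
          end
        end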

        Evans Ye made changes -
        Attachment BIGTOP-1178.1.patch [ 12630983 ]
        Evans Ye made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        jay vyas added a comment -

        Hi Evans, thanks a lot! I just saw your comment above. No, I haven't started looking at this, but I will gladly test your patch for you. It will be exciting to run a distributed cluster.

        Bruno Mahé added a comment -

        Looks neat!
        I will try to test that in the coming days, but I can't make any promises.

        Some notes:

        • Files are missing the Apache License header
        • s/hadoop/Apache Hadoop/g s/hbase/Apache HBase/g s/bigtop/Apache Bigtop/g ...
        Evans Ye made changes -
        Attachment BIGTOP-1178.2.patch [ 12633094 ]
        Evans Ye added a comment -

        Thank you Bruno Mahé.
        It's great to have your feedback.
        I've uploaded a new patch based on your notes.
        Please give me any comments if something can still be improved.
        Thanks.

        jay vyas added a comment - - edited

        Hey Evans! FYI, I'm FINALLY testing this out. Will let you know how it goes! Sorry it took so long; I had to close some other open tickets because I keep VMs open for my JIRAs.

        Specifically, I think we can use it to try to reproduce BIGTOP-1237 (the HA FUSE mount bug).

        jay vyas added a comment -

        Okay, it works! +1

        I tested this by doing:

        0) installing the plugin as per the README
        1) vagrant up
        2) "vagrant ssh" into bigtop1
        3) "vagrant ssh" into bigtop2
        4) creating an hbase table "t2" on bigtop2
        5) listing the contents of the hbase table FROM bigtop1

        So it's pretty clear that, at least, hbase is working on both servers.

        So, +1 from me, but I would like feedback from others - are any other tests required?

        I think after this, let's automate running smoke tests on the VMs so that we don't have to test the vagrant patches manually anymore.

        Two minor comments:

        • It looks like the default mode is clusterized. I'd suggest making cluster an option - or do you think it should be the default? Starting a 4-node cluster takes a lot longer than a single node. Can you implement some logic that turns clustering on/off, defaulting to "off"? (A sketch of one possible toggle follows the log below.)
        • There is a whitespace warning when applying the patch.
        [vagrant@bigtop1 ~]$ hbase shell -d <<EOF
        > scan 't2'
        > EOF
        Setting DEBUG log level...
        14/03/17 21:53:31 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
        HBase Shell; enter 'help<RETURN>' for list of supported commands.
        Type "exit<RETURN>" to leave the HBase Shell
        Version 0.94.12, rUnknown, Thu Oct 31 04:44:54 EDT 2013
        
        scan 't2'
        ROW                                        COLUMN+CELL                                                                                                              
        14/03/17 21:53:34 DEBUG zookeeper.ZKUtil: hconnection opening connection to ZooKeeper with ensemble (bigtop1.vagrant:2181)
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5--1, built on 10/31/2013 08:19 GMT
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:host.name=bigtop1.vagrant
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_51
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64/jre
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/etc/hadoop/conf:/etc/hbase/conf:... [full classpath truncated]
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.el6.x86_64
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:user.name=vagrant
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/vagrant
        14/03/17 21:53:34 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/vagrant
        14/03/17 21:53:35 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=bigtop1.vagrant:2181 sessionTimeout=180000 watcher=hconnection
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: zookeeper.disableAutoWatchReset is false
        14/03/17 21:53:35 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 9877@bigtop1.vagrant
        14/03/17 21:53:35 INFO zookeeper.ClientCnxn: Opening socket connection to server bigtop1.vagrant/10.10.10.12:2181. Will not attempt to authenticate using SASL (unknown error)
        14/03/17 21:53:35 INFO zookeeper.ClientCnxn: Socket connection established to bigtop1.vagrant/10.10.10.12:2181, initiating session
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: Session establishment request sent on bigtop1.vagrant/10.10.10.12:2181
        14/03/17 21:53:35 INFO zookeeper.ClientCnxn: Session establishment complete on server bigtop1.vagrant/10.10.10.12:2181, sessionid = 0x144d1c2d6dc0010, negotiated timeout = 40000
        14/03/17 21:53:35 DEBUG zookeeper.ZooKeeperWatcher: hconnection Received ZooKeeper Event, type=None, state=SyncConnected, path=null
        14/03/17 21:53:35 DEBUG zookeeper.ZooKeeperWatcher: hconnection-0x144d1c2d6dc0010 connected
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x144d1c2d6dc0010, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,87,0  request:: '/hbase/hbaseid,F  response:: s{13,13,1395088901425,1395088901425,0,0,0,0,62,0,13} 
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x144d1c2d6dc0010, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,87,0  request:: '/hbase/hbaseid,F  response:: #ffffffff00015313231333940626967746f70312e76616772616e7462373539303362332d363166322d343432322d393935382d343162653135643061643465,s{13,13,1395088901425,1395088901425,0,0,0,0,62,0,13} 
        14/03/17 21:53:35 DEBUG zookeeper.ZKUtil: hconnection-0x144d1c2d6dc0010 Retrieved 36 byte(s) of data from znode /hbase/hbaseid; data=b75903b3-61f2-4422-9958-41be1...
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x144d1c2d6dc0010, packet:: clientPath:null serverPath:null finished:false header:: 3,3  replyHeader:: 3,87,0  request:: '/hbase/master,T  response:: s{11,11,1395088898894,1395088898894,0,0,0,91428527212855297,63,0,11} 
        14/03/17 21:53:35 DEBUG zookeeper.ZKUtil: hconnection-0x144d1c2d6dc0010 Set watcher on existing znode /hbase/master
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x144d1c2d6dc0010, packet:: clientPath:null serverPath:null finished:false header:: 4,4  replyHeader:: 4,87,0  request:: '/hbase/master,T  response:: #ffffffff00015313231333940626967746f70312e76616772616e7400626967746f70312e76616772616e742c36303030302c31333935303838383936373336,s{11,11,1395088898894,1395088898894,0,0,0,91428527212855297,63,0,11} 
        14/03/17 21:53:35 DEBUG zookeeper.ZKUtil: hconnection-0x144d1c2d6dc0010 Retrieved 37 byte(s) of data from znode /hbase/master and set watcher; \x00\x00bigtop1.vagrant,60000...
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x144d1c2d6dc0010, packet:: clientPath:null serverPath:null finished:false header:: 5,3  replyHeader:: 5,87,0  request:: '/hbase/root-region-server,T  response:: s{31,31,1395088910567,1395088910567,0,0,0,0,61,0,31} 
        14/03/17 21:53:35 DEBUG zookeeper.ZKUtil: hconnection-0x144d1c2d6dc0010 Set watcher on existing znode /hbase/root-region-server
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x144d1c2d6dc0010, packet:: clientPath:null serverPath:null finished:false header:: 6,4  replyHeader:: 6,87,0  request:: '/hbase/root-region-server,T  response:: #ffffffff00015313137323040626967746f70312e76616772616e74626967746f70312e76616772616e742c36303032302c31333935303838383839383936,s{31,31,1395088910567,1395088910567,0,0,0,0,61,0,31} 
        14/03/17 21:53:35 DEBUG zookeeper.ZKUtil: hconnection-0x144d1c2d6dc0010 Retrieved 35 byte(s) of data from znode /hbase/root-region-server and set watcher; bigtop1.vagrant,60020,1395088...
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x144d1c2d6dc0010, packet:: clientPath:null serverPath:null finished:false header:: 7,3  replyHeader:: 7,87,0  request:: '/hbase,F  response:: s{3,3,1395088897993,1395088897993,0,12,0,0,0,12,31} 
        14/03/17 21:53:35 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x144d1c2d6dc0010, packet:: clientPath:null serverPath:null finished:false header:: 8,4  replyHeader:: 8,87,0  request:: '/hbase/root-region-server,T  response:: #ffffffff00015313137323040626967746f70312e76616772616e74626967746f70312e76616772616e742c36303032302c31333935303838383839383936,s{31,31,1395088910567,1395088910567,0,0,0,0,61,0,31} 
        14/03/17 21:53:35 DEBUG zookeeper.ZKUtil: hconnection-0x144d1c2d6dc0010 Retrieved 35 byte(s) of data from znode /hbase/root-region-server and set watcher; bigtop1.vagrant,60020,1395088...
        14/03/17 21:53:35 DEBUG client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1e04d31e; serverName=bigtop1.vagrant,60020,1395088889896
        14/03/17 21:53:35 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is bigtop1.vagrant:60020
        14/03/17 21:53:35 DEBUG client.MetaScanner: Scanning .META. starting at row=t2,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1e04d31e
        14/03/17 21:53:35 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for t2,,1395092997797.5fa8b01d30eb2bc84fa91a624a2606eb. is 10.10.10.13:60020
        14/03/17 21:53:35 DEBUG client.ClientScanner: Creating scanner over t2 starting at key ''
        14/03/17 21:53:35 DEBUG client.ClientScanner: Advancing internal scanner to startKey at ''
         row2                                      column=f2:a, timestamp=1395092999169, value=val2                                                                         
        14/03/17 21:53:35 DEBUG client.ClientScanner: Finished with scanning at {NAME => 't2,,1395092997797.5fa8b01d30eb2bc84fa91a624a2606eb.', STARTKEY => '', ENDKEY => '', ENCODED => 5fa8b01d30eb2bc84fa91a624a2606eb,}
        1 row(s) in 1.7930 seconds
        
        [vagrant@bigtop1 ~]$ 
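
        A sketch of the on/off toggle suggested above, assuming a hypothetical BIGTOP_CLUSTER environment variable (an illustration only, not code from the patch):

        # Hypothetical switch: clustering defaults to "off" unless
        # BIGTOP_CLUSTER=true is set in the environment.
        cluster = ENV['BIGTOP_CLUSTER'] == 'true'
        num_nodes = cluster ? 3 : 1

        Vagrant.configure("2") do |config|
          (1..num_nodes).each do |i|
            config.vm.define "bigtop#{i}" do |node|
              node.vm.hostname = "bigtop#{i}.vagrant"
            end
          end
        end

        With this, a plain "vagrant up" stays single-node, while "BIGTOP_CLUSTER=true vagrant up" brings up all three nodes.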
        
        jay vyas added a comment -

        One other comment: what about bigtop3/4? Are they supposed to be created? I'm still okay with this patch (because it accomplishes the goal of a distributed cluster), but I think some clarity is needed on why there are 4 VMs defined instead of 2 - because I see there are some defined VMs that aren't created:

        [bigtop3] VM not created. Moving on...
        [bigtop2] Forcing shutdown of VM...
        ....
        
        Konstantin Boudnik made changes -
        Assignee jay vyas [ jayunit100 ]
        Konstantin Boudnik made changes -
        Affects Version/s 0.7.0 [ 12324362 ]
        Konstantin Boudnik made changes -
        Fix Version/s 0.8.0 [ 12324841 ]
        jay vyas added a comment -

        [~bruno mahe] FYI, I've tested this and IMO it's (functionally) ready for commit. Maybe you can give a final review of the cleanup required and then we can push it through?

        Evans Ye added a comment -

        jay vyas, thanks for responding so proactively.

        To reply to your comments:

        First, about single-node deployment: the design is to use a wrapper shell script, startup.sh, to provision either a single-node cluster or a 3-node cluster depending on which argument you choose.
        This way, I think, we give users a clear understanding of the features we provide.

        So a test run for a single-node deployment should look like this:

        0) install the plugin as per the README
        1) ./startup.sh -s
        2) ./hbase-test.sh
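
        For illustration, a rough sketch of what such a wrapper can look like (hypothetical; the actual startup.sh shipped with the patch may differ):

        #!/usr/bin/env bash
        # -s provisions a standalone (single-node) cluster; -c a 3-node cluster.
        case "$1" in
          -s) vagrant up bigtop1 ;;
          -c) vagrant up ;;
          *)  echo "Usage: $0 [-s|-c]" >&2; exit 1 ;;
        esac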
        

        Second, about the whitespace: thanks for pointing it out. I'll fix it and upload a new patch.

        Last one, about bigtop3/4.
        For a 3-node cluster deployment there should be exactly 3 nodes created, and as far as I know there's no bigtop4 configured in the Vagrantfile.
        If you only got 2 VMs created, it must be because something went wrong during provisioning and stopped the progress.
        One possible way to break provisioning that I know of is a timeout while installing packages via yum.

        In conclusion, would you mind trying it out again and posting any error log you get?
        I'm glad to fix it, and thank you for giving it a try, jay vyas.

        Konstantin Boudnik made changes -
        Description edited
        Konstantin Boudnik added a comment -

        I think I should defer to someone whose Vagrant expertise is more extensive than mine - likely Bruno Mahé. While I can read this code and it looks OK to me, I won't be able to spot issues just by reading it. Hence, I can't really give an official review.

        Evans Ye made changes -
        Attachment BIGTOP-1178.3.patch [ 12635827 ]
        Evans Ye added a comment -

        Attached patch 3 to fix the whitespace warning.

        jay vyas added a comment - - edited

        Hi cos, one thought on this:

        You probably know more about Vagrant than you think. It is simply a tool that provisions a running hadoop system for you.

        If you want, someone can briefly update BIGTOP-1240, which aims to make reviews easier by making all standards explicit, with a section for "VM and cloud provisioners". I will transfer all those comments into a wiki page soon enough.

        Then Evans and I can get started evaluating the vagrant-specific aspects of the code in the context of your generic "provisioning" requirements.

        Konstantin Boudnik added a comment -

        As I said, I have some idea about Vagrant, but I am not fluent. So, reading it is laborious.

        I like the idea of the wiki / BIGTOP-1240 update - the more relevant information one has at the fingertips, the better!

        Konstantin Boudnik added a comment -

        I think it makes sense. +1
        I will commit it in a minute.

        Konstantin Boudnik made changes -
        Assignee jay vyas [ jayunit100 ] Evans Ye [ evans_ye ]
        Konstantin Boudnik added a comment -

        Committed to master. Thanks Evans!

        jay vyas added a comment -

        Thanks cos and Evans.

        We can now use this to test BIGTOP-1235.

        Evans Ye made changes -
        Status Patch Available [ 10002 ] Resolved [ 5 ]
        Resolution Fixed [ 1 ]

  People

  • Assignee: Evans Ye
  • Reporter: jay vyas
  • Votes: 0
  • Watchers: 4