Bigtop / BIGTOP-1417

Dockerize the puppetized vagrant deployer

    Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.7.0
    • Fix Version/s: 1.0.0
    • Component/s: deployment
    • Labels:
      None

      Description

      This is one of Bigtop's dockerization tasks. It mainly focuses on deploying a Bigtop Hadoop cluster with bigtop-puppet on top of Docker containers.
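
      For orientation, the intended workflow can be sketched as a few shell commands. This is a hedged sketch: the image tag bigtop/ssh:centos-6.4, the use of sudo, and the vagrant command sequence are taken from the logs in this ticket, and the authoritative steps live in the patch's README.md.

```shell
# Sketch of the dockerized deploy flow (illustrative; see the
# patch's README.md for the authoritative steps).

# 1. Build the Docker image each "node" container is based on
#    (image tag taken from the logs in this ticket).
sudo docker build -t bigtop/ssh:centos-6.4 .

# 2. Start the containers first, then run the puppet provisioning
#    pass across all of them.
sudo vagrant up --no-provision && sudo vagrant provision

# 3. SSH into one of the provisioned nodes.
sudo vagrant ssh bigtop1
```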

      1. BIGTOP-1417.1.patch
        10 kB
        Evans Ye
      2. BIGTOP-1417.2.patch
        10 kB
        Evans Ye
      3. BIGTOP-1417.3.patch
        11 kB
        Evans Ye
      4. BIGTOP-1417.4.patch
        11 kB
        Evans Ye
      5. BIGTOP-1417.5.patch
        14 kB
        Evans Ye
      6. BIGTOP-1417.6.patch
        14 kB
        Evans Ye
      7. BIGTOP-1417.pdf
        147 kB
        Evans Ye

        Issue Links

          Activity

          evans_ye Evans Ye added a comment -

           I've already developed a draft used inside my company; I'm going to carve it into Bigtop's version.

          jayunit100 jay vyas added a comment -

           Cool! Thanks Evans, I'll test it when you are ready.

          rvs Roman Shaposhnik added a comment -

          Indeed! That would be hugely appreciated!

          evans_ye Evans Ye added a comment - - edited

          Hi folks!
           Finally, the dockerize patch is ready.
           I've placed a README.md within the patch, so if you're willing to try it, take a look at that first.

           Here are some notes worth mentioning:

           • For OS X, as I understand it, a shortcut is to install boot2docker and get the docker client ready in an OS X terminal; in that case you can skip step 1 and do the rest directly on your host (hopefully that works, since I don't have such an environment to test with).
           • If you already have Vagrant 1.6+ installed and are trying to install plugins, make sure you have upgraded Vagrant to 1.6.4; otherwise you'll hit this issue just like I did.

           Please try the patch out and give me feedback on any improvements, thanks.

          evans_ye Evans Ye added a comment -

           Added a license section to the Dockerfile in 1417.2.

          jayunit100 jay vyas added a comment -

           Hey Evans, awesome! I'll review it shortly.

          jayunit100 jay vyas added a comment - - edited

           Hi Evans,

           • I see you have a Vagrantfile and are using docker as the provider; if that's the case, why not have vagrant do the docker build as well? Just curious, no big deal.
           • I had to run everything as sudo. That could be a docker mistake I'm making. Is that supposed to be a requirement? Otherwise it can't see /var/docker/run.
           • It seemed like it was blocked, so I went ahead and added a nohup to the RUN ssh -d in the Dockerfile and added another command after.
           • Also upgraded my vagrant to 1.6.4.

           Right now it seems to hang. Rebooted, and now I'm getting some kind of port conflict?

           Will hack on it more today to see what's up.

          evans_ye Evans Ye added a comment -

          Hi jay vyas

           Thanks for trying it; it looks like the patch isn't general enough.
           Maybe we can refine it by resolving the issues you encountered.

           I see you have a Vagrantfile and are using docker as the provider; if that's the case, why not have vagrant do the docker build as well? Just curious, no big deal.

           You're right! But I got the following error message by doing so:

          Cachier plugin only supported with docker provider when image is used
          

           And since building the images is a one-time effort, I separated it out. Once the cachier plugin is supported for this case, maybe we can make it easier as you suggested.
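
           As a sketch of that separation (hedged; the bigtop/seed:centos-6.4 tag comes from the transcripts later in this thread), building the image once up front keeps vagrant-cachier usable, since the plugin only supports the docker provider when a prebuilt image is referenced:

```shell
# One-time, manual image build instead of letting Vagrant drive
# "docker build" (vagrant-cachier only supports the docker provider
# when a prebuilt image is used).
sudo docker build -t bigtop/seed:centos-6.4 .

# The Vagrantfile then references the image by name; verify it exists:
sudo docker images bigtop/seed
```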

           I had to run everything as sudo. That could be a docker mistake I'm making. Is that supposed to be a requirement? Otherwise it can't see /var/docker/run.

           OK, would you mind describing your environment and which commands required you to run as sudo?
           I tested it on a CentOS 6.5 VM running everything as root, so this could be a missing piece I didn't take care of.

           It seemed like it was blocked, so I went ahead and added a nohup to the RUN ssh -d in the Dockerfile and added another command after.

           Sounds like the docker build was blocked? I'd like to figure out what happened. Could you describe the scenario for me, e.g. where you got blocked?

           Right now it seems to hang. Rebooted, and now I'm getting some kind of port conflict?

           I'm guessing that might be because the vagrant provisioning didn't go well, so broken containers are still running there. If provisioning failed, sometimes you need to manually destroy the containers using the docker command.
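
           That manual cleanup can be sketched like this (destructive: it removes every container on the host, so treat it as a last resort after a failed run):

```shell
# Show all containers, including stopped ones left over from a
# failed provisioning run.
sudo docker ps -a

# Force-remove them all so the next "vagrant up" starts clean.
# WARNING: removes EVERY container on this host.
sudo docker rm -f $(sudo docker ps -aq)
```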

           Anyhow, thank you so much for the feedback. I'll test it in a non-root scenario first and see what's going on.

          jayunit100 jay vyas added a comment -

           Okay, great. I'm using Fedora 21. Maybe it could be my docker setup... I'll look more into it; it could even be the docker version. But this is a great start.
           Will let you know what else I find.

          gkesavan Giridharan Kesavan added a comment -

          Evans Ye
           I tried the steps on my Mac: it looks like there is some issue with the port forwarding, or the port is already in use.
           I didn't spend time figuring out how to change the port; do you have pointers on what I should be looking at?

          Here is the output

          [xxxxx@xx11151 ~/work/bigtop/bigtop-deploy/vm/docker-puppet (master)]$ vagrant up --no-provision && vagrant provision
          Bringing machine 'bigtop1' up with 'docker' provider...
          ==> bigtop1: Docker host is required. One will be created if necessary...
              bigtop1: Vagrant will now create or start a local VM to act as the Docker
              bigtop1: host. You'll see the output of the `vagrant up` for this VM below.
              bigtop1:
              bigtop1: Checking if box 'mitchellh/boot2docker' is up to date...
          Vagrant cannot forward the specified ports on this VM, since they
          would collide with some other application that is already listening
          on these ports. The forwarded port to 2375 is already in use
          on the host machine.
          
          To fix this, modify your current projects Vagrantfile to use another
          port. Example, where '1234' would be replaced by a unique host port:
          
            config.vm.network :forwarded_port, guest: 2375, host: 1234
          
          Sometimes, Vagrant will attempt to auto-correct this for you. In this
          case, Vagrant was unable to. This is usually because the guest machine
          is in a state which doesn't allow modifying port forwarding.
          
          
          evans_ye Evans Ye added a comment -

          Hi Giridharan Kesavan
           It looks like port 2375 on the host isn't available; maybe you can try this to override the default ssh port-forwarding config:

           config.vm.network :forwarded_port, guest: 22, host: 2376, id: "ssh"
          

           Though I don't have any idea what's going wrong, I'll take a look at using boot2docker.

          evans_ye Evans Ye added a comment - - edited

           Hi jay vyas,
           I've tested this with docker version 1.1.2 on puppetlabs/centos-6.5-64-nocm and chef/fedora-20 boxes downloaded from Vagrant Cloud, and found that, without root, I need to run almost everything with sudo.
           So you're right:
           this is actually a requirement for using docker; you can either run everything as root or grant users the rights to use docker.
           The docker installation guide gives the details, but I still need to update the README as you suggested.
           In my tests both the centos-6.5 and fedora-20 boxes work well for provisioning a hadoop cluster, so some logs that better describe your situation would be appreciated.

           And to Giridharan Kesavan,
           I've spent time working with boot2docker, trying to spin up containers directly on my host machine (Windows), and found some critical issues:

           • Plugin compatibility: vagrant-cachier and vagrant-hostmanager are currently not supported for this particular use case.
           • There's a hanging issue when using vagrant + docker on an OS X or Windows host.

           I think those issues are more related to vagrant and its plugins, which we can't do much about,
           so I suggest limiting this ticket's scope to the core function: provisioning a hadoop cluster via puppet on top of docker containers.
           In that case, the patch is limited to running on a Linux VM or physical machine.
           We can add OS X and Windows support in separate tickets once the core function is ready.
           Sorry to bring you to this dead end, and thanks for giving it a try.

          jayunit100 jay vyas added a comment -
           • I have run docker recipes without root. I think that's what the privileged parameter is for, right?
           • I will do another round of testing on this shortly, now that I understand how to use it better.

           Thanks again! Looking forward to getting the first iteration in.

           I'll update shortly on how the testing goes.

          evans_ye Evans Ye added a comment -

           Hi jay vyas,
           I think that's not related to the privileged parameter but rather to the docker daemon. I can't create a container without root:

          [vagrant@localhost docker-puppet]$ docker run -ti bigtop/seed:centos-6.4 /bin/bash
          2014/09/09 15:30:15 Post http:///var/run/docker.sock/v1.13/containers/create: dial unix /var/run/docker.sock: permission denied
          

           Unless I add the user to the docker group:

          [vagrant@localhost ~]$ groups
          vagrant wheel docker
          [vagrant@localhost ~]$ docker run -ti --rm bigtop/seed:centos-6.4 /bin/bash
          bash-4.1#
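
           For reference, granting that access can be sketched as follows (assuming the docker package has already created the docker group, as on the 1.x installs above; vagrant is just the example user from the transcript):

```shell
# Add the user to the docker group so the docker client can reach
# /var/run/docker.sock without sudo.
sudo usermod -aG docker vagrant

# The change applies to new login sessions; re-login and verify:
groups       # should now list "docker"
docker ps    # should work without sudo
```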
          

           The next iteration of docker itself is expected to be improved to allow running containers without root privileges.

           As for the patch, I've uploaded version 3, which fixes a bug where you could not ssh into containers other than bigtop1; for example, you couldn't do vagrant ssh bigtop2.
           I've also updated the README to add sudo and to limit usage to Linux hosts in this current version.

          rvs Roman Shaposhnik added a comment -

          Evans Ye this looks really useful to me.

           Question: can any Bigtop committer with strong Vagrant foo (I'm looking in the general direction of jay vyas) review and commit?

          jayunit100 jay vyas added a comment -

           Roman Shaposhnik thanks for the reminder; this is on my plate. I will finish reviewing it today.

          jayunit100 jay vyas added a comment -

           Finally, I am now reviewing it again. It looks okay to me, and I think it's ready to commit. I'm running this on a fresh machine right now and am currently pulling the docker images. Going to wait until the morning, when my brain is fresh, to commit it, but it looks good to me for this first iteration.

          jayunit100 jay vyas added a comment - - edited

           This still seems to fail for me? It looked like it was working okay; after killing iptables and killing all docker containers it seemed to get further, but at the end I cannot ssh into the machine using vagrant ssh after this command sequence.

           Any thoughts, Evans Ye?

          [root@rhbd docker-puppet]# vagrant up --no-provision && vagrant provision
          /opt/vagrant/bin/../embedded/gems/gems/vagrant-1.6.5/lib/vagrant/pre-rubygems.rb:31: warning: Insecure world writable dir /opt in PATH, mode 040777
          /opt/vagrant/embedded/gems/gems/bundler-1.6.6/lib/bundler/runtime.rb:222: warning: Insecure world writable dir /opt in PATH, mode 040777
          Bringing machine 'bigtop1' up with 'docker' provider...
          Bringing machine 'bigtop2' up with 'docker' provider...
          Bringing machine 'bigtop3' up with 'docker' provider...
          ==> bigtop3: Fixed port collision for 22 => 2222. Now on port 2200.
          jay debug:[]==> bigtop3: Creating the container...
              bigtop3:   Name: bigtop3
              bigtop3:  Image: bigtop/ssh:centos-6.4
              bigtop3: Volume: /home/jay/Development/bigtop/bigtop-deploy/puppet:/bigtop-puppet
              bigtop3: Volume: /home/jay/Development/bigtop/bigtop-deploy/vm/docker-puppet:/vagrant
              bigtop3: Volume: /home/jay/Development/bigtop/bigtop-deploy/puppet/manifests:/tmp/vagrant-puppet-3/manifests
              bigtop3: Volume: /home/jay/Development/bigtop/bigtop-deploy/puppet/modules:/tmp/vagrant-puppet-3/modules-0
          ==> bigtop1: Fixed port collision for 22 => 2222. Now on port 2201.
          jay debug:[]jay debug:[]jay debug:[]jay debug:[]==> bigtop1: Creating the container...
              bigtop1:   Name: bigtop1
              bigtop1:  Image: bigtop/ssh:centos-6.4
          ==> bigtop2: Fixed port collision for 22 => 2222. Now on port 2202.
              bigtop1: Volume: /home/jay/Development/bigtop/bigtop-deploy/puppet:/bigtop-puppet
              bigtop1: Volume: /home/jay/Development/bigtop/bigtop-deploy/vm/docker-puppet:/vagrantjay debug:
              bigtop1: Volume: /home/jay/Development/bigtop/bigtop-deploy/puppet/manifests:/tmp/vagrant-puppet-1/manifests[]
          ==> bigtop2: Creating the container...
              bigtop1: Volume: /home/jay/Development/bigtop/bigtop-deploy/puppet/modules:/tmp/vagrant-puppet-1/modules-0
              bigtop2:   Name: bigtop2
              bigtop2:  Image: bigtop/ssh:centos-6.4
              bigtop2: Volume: /home/jay/Development/bigtop/bigtop-deploy/puppet:/bigtop-puppet
              bigtop2: Volume: /home/jay/Development/bigtop/bigtop-deploy/vm/docker-puppet:/vagrant
              bigtop2: Volume: /home/jay/Development/bigtop/bigtop-deploy/puppet/manifests:/tmp/vagrant-puppet-2/manifests
              bigtop2: Volume: /home/jay/Development/bigtop/bigtop-deploy/puppet/modules:/tmp/vagrant-puppet-2/modules-0
              bigtop3:  
              bigtop3: Container created: c520fd6449d36c59
          ==> bigtop3: Starting container...
              bigtop2:  
              bigtop2: Container created: 439333a403f3f114
              bigtop1:  
              bigtop1: Container created: 97efbd2ce4b77c64
          ==> bigtop2: Starting container...
          ==> bigtop3: Waiting for machine to boot. This may take a few minutes...
          ==> bigtop1: Starting container...
          ==> bigtop2: Waiting for machine to boot. This may take a few minutes...
          ==> bigtop1: Waiting for machine to boot. This may take a few minutes...
              bigtop3: SSH address: 172.17.0.58:22
              bigtop3: SSH username: root
              bigtop3: SSH auth method: private key
              bigtop1: SSH address: 172.17.0.59:22
              bigtop1: SSH username: root
              bigtop1: SSH auth method: private key
              bigtop2: SSH address: 172.17.0.60:22
              bigtop2: SSH username: root
              bigtop2: SSH auth method: private key
          ==> bigtop2: Machine booted and ready!
          ==> bigtop2: Machine not provisioning because `--no-provision` is specified.
          ==> bigtop1: Machine booted and ready!
          ==> bigtop1: Machine not provisioning because `--no-provision` is specified.
          ==> bigtop3: Machine booted and ready!
          ==> bigtop3: Machine not provisioning because `--no-provision` is specified.
          /opt/vagrant/bin/../embedded/gems/gems/vagrant-1.6.5/lib/vagrant/pre-rubygems.rb:31: warning: Insecure world writable dir /opt in PATH, mode 040777
          /opt/vagrant/embedded/gems/gems/bundler-1.6.6/lib/bundler/runtime.rb:222: warning: Insecure world writable dir /opt in PATH, mode 040777
          ==> bigtop1: Running provisioner: shell...
          /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/buffered_io.rb:65:in `recv': Connection reset by peer - recvfrom(2) (Errno::ECONNRESET)
          	from /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/buffered_io.rb:65:in `fill'
          	from /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/connection/session.rb:236:in `block in postprocess'
          	from /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/connection/session.rb:232:in `each'
          	from /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/connection/session.rb:232:in `postprocess'
          	from /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/connection/session.rb:211:in `process'
          	from /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/connection/session.rb:169:in `block in loop'
          	from /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/connection/session.rb:169:in `loop'
          	from /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/connection/session.rb:169:in `loop'
          	from /opt/vagrant/embedded/gems/gems/net-ssh-2.9.1/lib/net/ssh/connection/channel.rb:269:in `wait'
          	from /opt/vagrant/embedded/gems/gems/net-scp-1.1.2/lib/net/scp.rb:279:in `upload!'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/communicators/ssh/communicator.rb:244:in `block in upload'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/communicators/ssh/communicator.rb:588:in `block in scp_connect'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/communicators/ssh/communicator.rb:407:in `connect'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/communicators/ssh/communicator.rb:586:in `scp_connect'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/communicators/ssh/communicator.rb:238:in `upload'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/provisioners/shell/provisioner.rb:57:in `block (2 levels) in provision_ssh'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/provisioners/shell/provisioner.rb:51:in `tap'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/provisioners/shell/provisioner.rb:51:in `block in provision_ssh'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/provisioners/shell/provisioner.rb:178:in `with_script_file'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/provisioners/shell/provisioner.rb:49:in `provision_ssh'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/provisioners/shell/provisioner.rb:21:in `provision'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builtin/provision.rb:127:in `run_provisioner'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:95:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builder.rb:116:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `block in run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/util/busy.rb:19:in `busy'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/environment.rb:386:in `hook'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builtin/provision.rb:115:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builtin/provision.rb:115:in `block in call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builtin/provision.rb:103:in `each'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builtin/provision.rb:103:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builder.rb:116:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `block in run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/util/busy.rb:19:in `busy'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builtin/call.rb:53:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builder.rb:116:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `block in run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/util/busy.rb:19:in `busy'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builtin/call.rb:53:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builder.rb:116:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `block in run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/util/busy.rb:19:in `busy'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builtin/call.rb:53:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/warden.rb:34:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/builder.rb:116:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `block in run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/util/busy.rb:19:in `busy'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/action/runner.rb:66:in `run'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/machine.rb:196:in `action_raw'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/machine.rb:173:in `block in action'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/environment.rb:474:in `lock'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/machine.rb:161:in `call'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/machine.rb:161:in `action'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/commands/provision/command.rb:30:in `block in execute'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/plugin/v2/command.rb:226:in `block in with_target_vms'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/plugin/v2/command.rb:220:in `each'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/plugin/v2/command.rb:220:in `with_target_vms'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/plugins/commands/provision/command.rb:29:in `execute'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/cli.rb:42:in `execute'
          	from /opt/vagrant/embedded/gems/gems/vagrant-1.6.5/lib/vagrant/environment.rb:292:in `cli'
          	from /opt/vagrant/bin/../embedded/gems/gems/vagrant-1.6.5/bin/vagrant:174:in `<main>'
          [root@rhbd docker-puppet]# vagrant ssh
          /opt/vagrant/bin/../embedded/gems/gems/vagrant-1.6.5/lib/vagrant/pre-rubygems.rb:31: warning: Insecure world writable dir /opt in PATH, mode 040777
          /opt/vagrant/embedded/gems/gems/bundler-1.6.6/lib/bundler/runtime.rb:222: warning: Insecure world writable dir /opt in PATH, mode 040777
          This command requires a specific VM name to target in a multi-VM environment.
          [root@rhbd docker-puppet]# vagrant ssh bigtop1
          /opt/vagrant/bin/../embedded/gems/gems/vagrant-1.6.5/lib/vagrant/pre-rubygems.rb:31: warning: Insecure world writable dir /opt in PATH, mode 040777
          /opt/vagrant/embedded/gems/gems/bundler-1.6.6/lib/bundler/runtime.rb:222: warning: Insecure world writable dir /opt in PATH, mode 040777
          
          ls
          
          Write failed: Broken pipe
          [root@rhbd docker-puppet]# 
          [root@rhbd docker-puppet]# ls
          Dockerfile  provision.sh  README.md  vagrant_1.6.5_x86_64.rpm  Vagrantfile
          [root@rhbd docker-puppet]# 
          [root@rhbd docker-puppet]# vagrant ssh bigtop1
          /opt/vagrant/bin/../embedded/gems/gems/vagrant-1.6.5/lib/vagrant/pre-rubygems.rb:31: warning: Insecure world writable dir /opt in PATH, mode 040777
          /opt/vagrant/embedded/gems/gems/bundler-1.6.6/lib/bundler/runtime.rb:222: warning: Insecure world writable dir /opt in PATH, mode 040777
          
          Write failed: Broken pipe
          
          Hide
          evans_ye Evans Ye added a comment -

          Hi jay vyas, thanks for helping with the test.
          I haven't seen this kind of error before, but I'll dig into it right away; I hope I can figure it out soon.

          after killing iptables and killing all docker containers, it seemed to get further

          About docker containers: if a previous vagrant operation did not succeed, containers might still have been created and need to be cleaned up manually with the docker command (docker rm).
          But for iptables I don't have a clue yet, sorry.

          Thanks for the feedback. I'll first check out the new version of vagrant and will get back to you soon.
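          For anyone hitting the same leftover-container state, the cleanup can be scripted. The helper below is purely illustrative (the patch does not ship it): it turns a list of container IDs, such as the output of `docker ps -aq`, into `docker rm -f` commands, printed as a dry run so nothing is deleted by accident.

```shell
# Illustrative cleanup helper: read container IDs (one per line, e.g. from
# `docker ps -aq`) and print the removal command for each. Pipe the output
# to `sh` to actually remove the containers.
list_rm_commands() {
  while read -r id; do
    [ -n "$id" ] && echo "docker rm -f $id"
  done
}

# Dry run against sample IDs instead of a live Docker daemon:
printf 'c520fd6449d3\n439333a403f3\n' | list_rm_commands
# prints:
# docker rm -f c520fd6449d3
# docker rm -f 439333a403f3
```

          On a real host, the cleanup after a failed run would then be `docker ps -aq | list_rm_commands | sh`.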

          Hide
          evans_ye Evans Ye added a comment - - edited

          Hi jay vyas, sorry, I can't reproduce the problem.
          Let me share my steps so that you can point out what I should fix in the patch or where I need to add more documentation.

          Here's what I've done to prepare a Linux environment on my Windows machine:

          vagrant box add box-cutter/fedora20
          vagrant init box-cutter/fedora20
          vagrant up
          vagrant ssh
          

          Inside the VM, I do:

          sudo su - root
          yum -y install docker-io git
          systemctl start docker
          yum install https://dl.bintray.com/mitchellh/vagrant/vagrant_1.6.5_x86_64.rpm
          vagrant plugin install vagrant-hostmanager
          vagrant plugin install vagrant-cachier
          git clone https://github.com/apache/bigtop.git
          cd bigtop
          wget https://issues.apache.org/jira/secure/attachment/12667627/BIGTOP-1417.3.patch
          git am --signoff <BIGTOP-1417.3.patch
          cd bigtop-deploy/vm/docker-puppet
          

          To show the following process more clearly, I've recorded an asciinema video.
          Please take a look at the first 5 minutes of the recording.

          And this is just my guess:

          It seems like it was blocked. so i went ahead and added a nohup to the RUN ssh -d in the Dockerfile and added another command after

          If you modified this, the built image might not support ssh in subsequent vagrant usage.
          So if this is the case, the blocking issue should be taken care of first.

          Finally, if the solution is not intuitive enough, I'm OK with dropping it and taking advice to deliver a new one.
          Big thanks for your time, jay vyas.

          Hide
          jayunit100 jay vyas added a comment -

          Awesome! It looks like it works from your asciinema video. I'll take another whack at it today on another system, maybe in a VM.

          Hide
          jayunit100 jay vyas added a comment -

          Hi evans, just want to update that it seems like, one time, one of the nodes did work for me - but other times it fails. Your asciinema video is a great demo; I'm still trying to figure out how to troubleshoot my system better. I will keep working to see if I can resolve this; meanwhile, if anyone else wants to try, feel free! I think it's REALLY close - I can see it doing all the right things... it's just getting tripped up on a minor technicality.

          Will look again in the morning.

          Hide
          jayunit100 jay vyas added a comment -

          Actually, trying with 1 node also seems to fail for me; it hangs on vagrant ssh. Out of ideas for now, but I'll keep working on this. Just let me know if there are any updates on your end.

          Hide
          jayunit100 jay vyas added a comment - - edited

          Idea, Evans Ye: since you are testing this in a VM, why not provide that VM as an option? We can do this using the d.vagrant_vagrantfile = "../path/to/Vagrantfile" option, and we can replicate your setup exactly that way (i.e. the one in your asciinema video, where you started docker in a Fedora VM). Just brainstorming here - this may or may not be the best way forward.

          Hide
          evans_ye Evans Ye added a comment -

          Hi jay vyas,
          Thanks for all the help on this task.
          I'm very much willing to take any suggestion and apply it to the patch.
          If I understand your suggestion, the idea is to spin up a VM as the docker platform and put the docker containers on top of it by running vagrant up on an OS X or Windows host.
          It would look like this:

          |------------------------------------|
          |    bigtop1 | bigtop2 | bigtop3     |
          |------------------------------------|
          |        Fedora virtualbox VM        |  
          |------------------------------------|
          |          OS X / windows            |  <--- doing vagrant up here
          |------------------------------------|
          

          The d.vagrant_vagrantfile configuration will help spin up a Fedora VM if the host machine does not support docker.
          This is a very good idea, but based on my tests, if we do so there are several issues that need to be solved:

            vagrant known issue: 'Waiting for machine to boot' hangs
            vagrant-cachier not supported
            vagrant-hostmanager not supported

          Without cachier the feature still works, just more slowly, so that would be OK.
          But without hostmanager, /etc/hosts in the containers cannot be updated, and hence the hadoop nodes cannot communicate with each other correctly.
          As a result, I have limited this feature to Linux hosts only for now; the architecture looks like below:

          |------------------------------------|
          |    bigtop1 | bigtop2 | bigtop3     |
          |------------------------------------|
          |        Fedora virtualbox VM        |  <--- doing vagrant up here  
          |------------------------------------|
          |          OS X / windows            |
          |------------------------------------|
          

          We can still try this, as it is what we will eventually want to reach in the future. But I currently just don't have a good idea for solving the /etc/hosts problem without the hostmanager plugin.
          I'll keep working on the investigation.

          Hide
          jayunit100 jay vyas added a comment - - edited

          Evans Ye: for /etc/hosts, we can use dynamic IPs and just a provisioner to parse the IPs out at runtime, like this:

          config.vm.provision "shell", inline: "ip addr | grep 172 | cut -d' ' -f6 | cut -d'/' -f1 >> /vagrant/LOCAL_IP "
          config.vm.provision "shell", path: "../common.sh", args: args
          config.vm.provision "shell", inline: "cat nohup.out"
          

          That way, each node can read the master IP from /vagrant/LOCAL_IP. I found this works automatically on CentOS 7.

          Do you think that will easily work as a replacement for hostmanager in the bigtop use case? I think so - iirc we only really need the master IP in the puppet setup scripts.
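          To sketch how that idea could replace hostmanager end to end (names and paths here are assumptions, not part of the patch): each container writes its IP into a per-hostname file in a shared folder, and every node then merges all entries into /etc/hosts. The snippet below demonstrates the merge step against temporary files so it can run anywhere; on the cluster, the shared folder would live under /vagrant and the target file would be /etc/hosts.

```shell
# Simulate two containers publishing their Docker bridge IPs into a shared
# folder (on the cluster these would come from an `ip addr | grep 172` line
# like the one in the provisioner snippet above).
SHARED=$(mktemp -d)
HOSTS=$(mktemp)
echo 172.17.0.2 > "$SHARED/bigtop1"
echo 172.17.0.3 > "$SHARED/bigtop2"

# Merge step each node would run: append "<ip> <hostname>" for every peer,
# skipping entries that are already present.
for f in "$SHARED"/*; do
  entry="$(cat "$f") $(basename "$f")"
  grep -qxF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
cat "$HOSTS"
# prints:
# 172.17.0.2 bigtop1
# 172.17.0.3 bigtop2
```

          Because the merge is idempotent (duplicates are skipped), it can safely be re-run on every provision.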

          Hide
          evans_ye Evans Ye added a comment -

          Very clever, jay vyas! That is a valuable idea.
          I'll apply this to the patch soon, as it definitely resolves the problem and removes the dependency on the hostmanager plugin.
          But there's still one problem left: vagrant's known issue, 'Waiting for machine to boot' hangs.
          With that unsolved, it may take 10 minutes to bring up just 2 containers.
          Sometimes it may even hit vagrant's timeout limit and result in a provisioning failure.
          So my idea is to focus on supporting Linux hosts only at the beginning.
          In that case we won't need to depend on vagrant's fix schedule, nor apply workarounds in order to support OS X or Windows hosts.
          However, this is just my thought; it would be great to hear yours.

          jayunit100 jay vyas added a comment -

          thanks evans.. so ... how does "linux only" solve the problem? not clear to me.

          evans_ye Evans Ye added a comment - - edited

          I'm happy to explain it clearly.
          By "linux only", I mean the feature can only run on a Linux host.
          For a Linux server, it looks like this:

          OK
          |------------------------------------|
          |    bigtop1 | bigtop2 | bigtop3     |
          |------------------------------------|
          |             Linux OS               |  <---  install docker, vagrant, plugins and doing vagrant up here.
          |------------------------------------|
          

          If you're using OS X or Windows, you need to prepare a Linux VM and run this feature inside that Linux VM.
          That would look like this:

          OK
          |------------------------------------|
          |    bigtop1 | bigtop2 | bigtop3     |
          |------------------------------------|
          |              Linux VM              |  <---  install docker, vagrant, plugins and doing vagrant up here.
          |------------------------------------|
          |          OS X / windows            |
          |------------------------------------|
          

          If you are doing things like this:

          NOT OK
          |------------------------------------|
          |    bigtop1 | bigtop2 | bigtop3     |
          |------------------------------------|
          |      Linux VM or boot2docker       |  
          |------------------------------------|
          |          OS X / windows            | <---  install docker, vagrant, plugins and doing vagrant up here.
          |------------------------------------|
          

          The following issues occur:

          • vagrant known issue: 'Waiting for machine to boot' hangs
          • vagrant cachier not supported
          • vagrant hostmanager not supported (can be resolved by jay vyas's idea)

          how does "linux only" solve the problem?

          According to my test, running directly on a Linux host avoids those problems.
          That is to say, I can spin up containers and provision hadoop successfully just by following the steps on a Linux host or VM.

          jayunit100 jay vyas added a comment - - edited

          Evans Ye looks like it's more than an OS X issue ... I'm seeing this deadly hang on fedora 21. I wasn't using a VM.

          In any case, I'm going to try some workarounds in the github issue thread you mentioned. I'll let you know if anything works. Maybe changing Communicator to return true in the ./gems/vagrant-1.6.5/plugins/providers/docker/communicator.rb class, as mentioned in the comments.

          Will keep updates in this thread. We will get this sorted soon, I think we are pretty close!

          evans_ye Evans Ye added a comment -

          Okay, jay vyas, I got your point now. It won't work on fedora 21.
          I was assuming that every Linux distro would just work, but it turns out that's not exactly the case.
          For your reference, I've already tested on the following environments:

          • CentOS 6.5 physical machine
          • CentOS 6.5 VM (puppetlabs/centos-6.5-64-nocm box)
          • Fedora 20 VM (chef/fedora-20 and box-cutter/fedora20 boxes)

          Now I just want to make sure where we're heading.
          Do you think we should get OS X/Windows supported in this patch?
          If yes, I'll approach it that way as well.

          jayunit100 jay vyas added a comment -

          I'm okay with anything as a first iteration that simply leverages docker to spin up a multinode hadoop instance. In fact, we don't even have to use vagrant if it's getting in the way!

          rvs Roman Shaposhnik added a comment -

          I was going to suggest the same. Even if the current implementation is not 100% perfect, I'd rather get it in so we can
          all experiment with it, etc. After all, this is a developer productivity feature and it's not likely to affect any of the core functionality.

          jay vyas I'll leave it up to you to make the final call, tho.

          jayunit100 jay vyas added a comment -

          One other note: I've been using docker rm `docker ps --no-trunc -aq` to remove all docker containers. If we don't use vagrant, a script that wraps that command could replace vagrant destroy.
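
          Such a wrapper could be as small as the sketch below (not part of the patch; the function name is made up for illustration, and it is destructive since it does not filter for bigtop containers):

          ```shell
          #!/bin/sh
          # Sketch of a "vagrant destroy" replacement using plain Docker
          # commands: force-remove every container on the host.
          destroy_all_containers() {
            ids="$(docker ps --no-trunc -aq)"
            # Nothing to remove if no containers exist
            [ -z "$ids" ] && return 0
            docker rm -f $ids
          }
          ```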

          evans_ye Evans Ye added a comment - - edited

          Hi jay vyas
          IMHO, without Vagrant, the hadoop provisioning mechanism in docker would be a different story compared with our virtualbox-based provisioning (vagrant-puppet).
          I'm taking a similar approach to the virtualbox provisioning in this patch because I hope eventually we can merge these two things into one Vagrantfile and just choose the provider via vagrant up --provider docker|virtualbox.

          To quickly summarize the functionality of this patch:

          • works on some Linux distros, but not all of them.
          • does not work on Windows/OS X atop boot2docker due to vagrant's issue

          It seems the first item is on the critical path, so I might take a look at fedora 21 and see how things go.
          And to update my current status: I've taken your suggestion and removed the hostmanager dependency in my dev environment; the updated patch can be delivered soon.

          evans_ye Evans Ye added a comment - - edited

          #4 patch uploaded.
          Some notes:

          • Removed the vagrant hostmanager dependency. A sync folder is now used to share /etc/hosts
          • Hooked the vagrant destroy command to clean up the generated hosts file

          Please advise, thanks.
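
          The shared-hosts idea can be sketched roughly like this (hypothetical helper and paths; the actual patch wires the equivalent logic through the Vagrantfile's synced folder):

          ```shell
          #!/bin/sh
          # Sketch: each container appends its own "ip hostname" line to a
          # file in the synced folder; the merged file is then appended to
          # /etc/hosts on every node. Paths are illustrative.
          append_host_entry() {
            # $1 = ip address, $2 = hostname, $3 = shared hosts file
            echo "$1 $2" >> "$3"
          }

          # On each container, something like:
          #   append_host_entry "$(hostname -I | awk '{print $1}')" "$(hostname)" /vagrant/hosts
          #   cat /vagrant/hosts >> /etc/hosts
          ```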

          jayunit100 jay vyas added a comment -

          I'll review it tonite!!!! Thanks for your perseverance Evans Ye

          jayunit100 jay vyas added a comment -

          Hi evans!
          It timed out the first time. I assume it's the same problem.

          • I just adjusted config.vm.boot_timeout to 20 minutes; will test it while I go get some food.
          • In the meantime, FYI, I'm actually on fedora 20. Is that possibly causing a problem?
          jayunit100 jay vyas added a comment - - edited

          (deleting the previous comment to avoid confusion... the patches seemed the same at first glance)

          jayunit100 jay vyas added a comment - - edited

          Evans Ye hi. okay.

          • Well, interesting but irrelevant: I found a bug in apache infra. It downloads an old patch if you type an incorrect link when you download a patch, rather than showing an error message.

          Now... looking at your new patch - it is distinct, but it still seems to fail on my fedora 20 box.

          • it still hangs in the exact same place it used to.

          Next steps: we should go with one of the following options:

          • force_host_vm, to make a consistent docker environment, or
          • just drop vagrant for the time being, and write a custom python or shell script that spins up a docker bigtop cluster for us.

          Clearly, the docker-on-vagrant hanging issue is unresolved on some systems (fedora 20 for me)... Sounds good to you? I know you have worked hard on this patch, and I'd really love to help you resolve it within the next few days!

          evans_ye Evans Ye added a comment -

          Hi jay vyas,
          Both sound great.
          I'm really OK to drop vagrant or add the force_host_vm limitation at this moment and deliver the new patch.
          I will start to draft the patch along those lines.
          But I'd be grateful if you could provide something like an asciinema recording or a vagrant box that reproduces the hanging issue.
          You know we engineers are always curious about why something goes wrong.
          Hopefully this won't take up too much of your time.
          I appreciate you consistently working with me to bring this in, big thanks!

          jayunit100 jay vyas added a comment -

          Of course - let's just do it in a google hangout. Maybe this weekend we can catch up - my google is jayunit100.apache ... Just message me and we will do a screen share some time.

          Look forward to the docker impl!

          evans_ye Evans Ye added a comment - - edited

          Hi jay vyas
          Good news!
          I finally have the new patch available.
          But I need to make it clear first that the patch is not written in pure shell script + Docker commands.
          Instead, I still use Vagrant to manage containers and add a wrapper script on top of it.
          There are some reasons for me to keep Vagrant in the patch:

          • Vagrant supports syncing a folder from host to container even when using boot2docker on a non-Linux host, while Docker can only sync a volume between the container and the boot2docker VM, which requires you to clone bigtop inside boot2docker. (see BIGTOP-1417.pdf)
          • container life-cycle management is way easier (create, destroy, gathering host information...)
          • Without Vagrant provisioners (which cause the issue), we can now support any host, including OS X and Windows
          • Without Vagrant provisioners (which cause the issue), we need to do the provisioning by executing a set of ssh commands. This is the same whether we use Docker commands or Vagrant, so we still have the flexibility to replace Vagrant with Docker commands.

          Now, with the wrapper script, creating a Dockerized Bigtop cluster is simpler.
          You can just run ./docker-hadoop.sh --build-image --create 3 to create a cluster from scratch.

          I've tested it on CentOS 6.5 bare metal and on Windows.
          Please give me any kind of feedback so that we can revise it to be better. Thanks!

          jayunit100 jay vyas added a comment -

          It didn't work off the bat, but it seems to get further... So I hacked two things, just to distill the error:

          • removed the port forwarding stuff
          • chmod'ed /var/run/docker.sock to 777 just to confirm that that's not the issue,

          but it still fails...
          https://gist.github.com/jayunit100/a9cbb28a5131acfc94c2
          It seems that the root cause is "Write failed: Broken pipe".

          Any thoughts?

          evans_ye Evans Ye added a comment -

          jay vyas and I just identified that the issue is vagrant being unable to ssh into the container.
          But according to my test, I can successfully run the patch on fedora 20 as the normal user vagrant.
          (just need to add the user to the docker group: usermod -aG docker vagrant)
          I've also recorded my screen output for your reference: https://asciinema.org/a/13093

          jay vyas, can you try running the following commands and let's see what happens:

          docker rmi bigtop/ssh:centos-6.4
          vagrant up image --provider docker --debug
          

          Thanks!

          jayunit100 jay vyas added a comment -

          sure ill test this tonite. thanks for sticking with me on this Evans Ye!

          evans_ye Evans Ye added a comment -

          Just uploaded a new patch which adopts jay vyas's idea of checking the ssh function at build time, before provisioning. This allows us to identify issues as early as possible.

          jayunit100 jay vyas added a comment - - edited
            !! ** SUCCESS ** !!

          Evans Ye Roman Shaposhnik finally, some progress on this front: we are on the verge of a reproducible docker container for bigtop!

          I found that --selinux-enabled in my docker options was breaking provisioning... you can see it in:

          [root@localhost docker-puppet]# service docker status
          Redirecting to /bin/systemctl status  docker.service
          docker.service - Docker Application Container Engine
             Loaded: loaded (/usr/lib/systemd/system/docker.service; static)
             Active: active (running) since Mon 2014-10-20 21:48:29 EDT; 1min 25s ago
               Docs: http://docs.docker.com
           Main PID: 4396 (docker)
             CGroup: /system.slice/docker.service
                     └─4396 /usr/bin/docker -d -H fd:// --selinux-enabled
          

          Evans Ye - I think that is what was causing the problem.

          Now the puppet provisioner runs just fine after I removed that.

          *LESSON LEARNED: if you want to ssh into your docker containers, check that --selinux-enabled is off... (I think)!*

          +1 to the patch. Shall I commit?
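
          A quick way to check for that flag before provisioning could look like this sketch (the helper name is made up, and the commented `ps` invocation assumes the daemon process is named `docker` as in the systemctl output above):

          ```shell
          #!/bin/sh
          # Sketch: detect whether a docker daemon command line carries
          # --selinux-enabled, the flag that broke vagrant's ssh here.
          has_selinux_flag() {
            case "$1" in
              *--selinux-enabled*) return 0 ;;
              *) return 1 ;;
            esac
          }

          # On a live host you might feed it the running daemon's args:
          #   if has_selinux_flag "$(ps -o args= -C docker)"; then
          #     echo "WARNING: --selinux-enabled is set; vagrant ssh may hang"
          #   fi
          ```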

          jayunit100 jay vyas added a comment -

          Committed, thanks Evans Ye! I'll update the README about the --selinux-enabled stuff shortly.

          evans_ye Evans Ye added a comment -

          OK! Confirmed that's the root cause.
          I enabled selinux on my fedora 20 and got exactly the same error.
          I didn't realize it because the box always has selinux disabled by default...
          So that's a lesson learned for me too.
          Thanks for identifying the problem, you're awesome, jay vyas!


            People

            • Assignee: Unassigned
            • Reporter: Evans Ye (evans_ye)
            • Votes: 0
            • Watchers: 4
