Updated after trying out @konstantin-suvorov's solution. Now it doesn't do anything.
I have created 5 Vagrant VMs, all from bento/xenial64, and then used Ansible to deploy Docker onto all of the machines.
When I then attempt to use Ansible to deploy a container onto a remote VM, it reports success, but the container is actually running on the local machine.
My 5 machines are:
control
cluster01
cluster02
cluster03
cluster04
Docker is up and running on all 5.
From VM control, I run
ansible-playbook -i hosts/local jenkins.yml
My inventory file is
[control]
10.100.100.100
[cluster]
10.100.100.101
10.100.100.102
10.100.100.103
10.100.100.104
[master]
10.100.100.101
This is my Jenkins playbook:
---
- hosts: master
  remote_user: ubuntu
  serial: 1
  roles:
    - jenkins
and this is my jenkins role (note that docker_container expects ports as a list):
---
- name: Container is running
  docker_container:
    name: jenkins
    image: "jenkins:{{ jenkins_version }}"
    ports:
      - "8080:8080"
    volumes:
      - "{{ jenkins_home_dir }}:/var/jenkins_home"
After running ansible-playbook with the very verbose option (-vvvv) and adding the inventory for the Vagrant machines:
vagrant@control:/vagrant$ ansible-playbook -i hosts/local jenkins.yml -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -vvvv
Using /vagrant/ansible.cfg as config file
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: jenkins.yml **********************************************************
1 plays in jenkins.yml
PLAY RECAP *********************************************************************
What am I doing wrong?
Remove ansible_connection=local from the remote nodes.
If ansible_connection is local, Ansible runs all tasks on the local (control) host instead of connecting out over SSH.
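For example, if your inventory (or a group_vars file) marks those hosts with a local connection, a hypothetical before/after would look like this:
[master]
# before: every task runs on the control node
# 10.100.100.101 ansible_connection=local
# after: Ansible connects to the VM over SSH
10.100.100.101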
Related
I would like to run an Ansible playbook on my local machine, using Ansible from a Docker container.
Here is what my Ansible Dockerfile looks like:
FROM alpine:3.6
WORKDIR /ansible
RUN apk update \
&& apk add ansible
ENTRYPOINT ["ansible-playbook"]
playbook.yml:
---
- hosts: localhost
  roles:
    - osx
roles/osx/tasks/main.yml:
---
- name: Welcome
  shell: echo "Hello"
  when: ansible_distribution == 'MacOSX'
Then I run it with:
docker build -t ansible_image:latest .
docker run --rm --network host \
-v $(pwd):/ansible \
ansible_image:latest ansible/playbook.yml
My host operating system is OS X. I expect the osx role to execute; however, the playbook seems to run against the Alpine container instead.
How do I tell Ansible running inside Docker to deploy to my local machine?
Your playbook is targeting localhost:
---
- hosts: localhost
  roles:
    - osx
This means that Ansible is going to target the local machine (which is
to say, your Ansible container) when running the playbook. Ansible is
designed to apply playbooks to remote machines as well, typically by
connecting to them using ssh. Assuming that it's possible to
connect from your Ansible container to your host using ssh, you
could just create an appropriate inventory file and then target your
playbook appropriately:
---
- hosts: my_osx_host
  roles:
    - osx
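A minimal sketch of such an inventory (the group name matches the hosts: line above; the address and user are assumptions, not from the question — recent Docker for Mac versions resolve host.docker.internal to the host machine, and Remote Login must be enabled on the Mac for SSH to work):
[my_osx_host]
host.docker.internal ansible_user=your_mac_user
You would then pass it to the container, e.g. docker run ... ansible_image:latest -i inventory playbook.yml.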
If you're just starting out with Ansible, you might want to start with
the Getting Started document and work your way from there. You'll find documentation on that site that should walk you through the process of creating an inventory file.
I am working on an Ansible script to start the Docker daemon and a Docker container, and then use docker exec. After the container starts, I need to start some services inside it.
I have installed Docker Engine, and it is configured and working with some containers on remote machines. I start the Docker daemon with a specific path, because I need to store my volumes and containers within that path:
$ docker daemon -g /test/docker
My issue is that when Ansible starts the Docker daemon, the daemon starts but keeps running in the foreground, so the play never moves on to the next task.
---
- hosts: webservers
  remote_user: root
  # Apache Subversion dnf -y install python-pip
  tasks:
    - name: Start Docker Daemon
      shell: docker -d -g /test/docker
      become: yes
      become_user: root

    - name: Start testing docker machine
      command: docker start testing
      async: True
      poll: 0
I used async to start the process, but it is not working for me.
How do I run the next task after starting the Docker daemon?
In order to start the Docker daemon you should use the Ansible service module:
- name: Ensure docker daemon is running
  service:
    name: docker
    state: started
  become: true
Any Docker daemon customisation should be placed in /etc/docker/daemon.json, as described in the official documentation. In your case the file would look like:
{
  "graph": "/test/docker"
}
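If you manage that file with Ansible as well, a minimal sketch using the standard copy and service modules could look like the following (in a real playbook you would restart via a handler only when the file actually changes):
- name: Configure Docker data directory
  copy:
    dest: /etc/docker/daemon.json
    content: '{ "graph": "/test/docker" }'
  become: true

- name: Restart docker to pick up the new config
  service:
    name: docker
    state: restarted
  become: true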
In order to interact with containers, use the Ansible docker_container module:
- name: Ensure my docker container is running
  docker_container:
    name: testing
    image: busybox
    state: started
  become: true
Try to avoid doing anything in Ansible with the shell module, since it can cause headaches down the line.
You can also start Docker and other services automatically when booting the machine. For that you can use the systemd module in Ansible like this:
- name: Enable docker.service
  systemd:
    name: docker.service
    daemon_reload: true
    enabled: true

- name: Enable containerd.service
  systemd:
    name: containerd.service
    daemon_reload: true
    enabled: true
I provision a Vagrant box with Ansible, and my ansible/site.yml contains the following hosts entry:
---
- hosts: all
I decided to set up a CI to test the Ansible code under ansible/. But with Docker, Ansible complains:
PLAY [all] ***************************************************************
skipping: no hosts matched
Then I changed the hosts entry to localhost, and now it works in Docker! But now it refuses to run under Vagrant!
PLAY [localhost] ***************************************************************
skipping: no hosts matched
I am not using Vagrant and Docker together! Vagrant is used on my machine and Docker in the CI, but both run the same Ansible playbook!
TL;DR: Vagrant only works with hosts: all, and Docker only works with hosts: localhost.
It seems you start ansible-playbook with an empty inventory in your CI environment.
Add the -i 'local,' -c local parameters to define an inventory with the single host local and to set the connection mode to local (the trailing comma makes Ansible treat the argument as an inline list of host names rather than a path to an inventory file).
Your command line should look like:
ansible-playbook -i 'local,' -c local playbook.yml
In this case hosts: all will work fine.
The issue I am experiencing with Wercker is that the specific linked services in my wercker.yml are not being linked to my main docker container.
I noticed this issue when my node app was not running on port 3001 after an apparently successful Wercker deploy.
Therefore I SSH'd into my server and into my docker container that was running after the Wercker deploy using:
docker exec -i -t <my-container-name> /bin/bash
and found the following MongoDB error in my PM2 logs:
[MongoError: connect EHOSTUNREACH 172.17.0.7:27017]
The strange thing is that both of the environment variables that I need from the respective services have been set.
Does anyone know why the service containers cannot be accessed from my main container even though their environment variables have been set?
The following is the wercker.yml file that I am using.
box: node
services:
  - id: mongo
  - id: redis
build:
  steps:
    - npm-install
deploy:
  steps:
    - npm-install
    - script:
        name: install pm2
        code: npm install pm2 -g
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: /
        ports: "3001"
        cmd: /bin/bash -c "cd /pipeline/source && pm2 start processes_prod.json --no-daemon"
        env: "MONGO_PORT_27017_TCP_ADDR"=$MONGO_PORT_27017_TCP_ADDR,"REDIS_PORT_6379_TCP_ADDR"=$REDIS_PORT_6379_TCP_ADDR
    - add-ssh-key:
        keyname: DIGITAL_OCEAN_KEY
    - add-to-known_hosts:
        hostname:
    - script:
        name: pull latest image
        code: ssh root@ docker pull /:latest
    - script:
        name: stop running container
        code: ssh root@ docker stop || echo 'failed to stop running container'
    - script:
        name: remove stopped container
        code: ssh root@ docker rm || echo 'failed to remove stopped container'
    - script:
        name: remove image behind stopped container
        code: ssh root@ docker rmi /:current || echo 'failed to remove image behind stopped container'
    - script:
        name: tag newly pulled image
        code: ssh root@ docker tag /:latest /:current
    - script:
        name: run new container
        code: ssh root@ docker run -d -p 8080:3001 --name /:current
    - script:
        name: env
        code: env
AFAIK the Wercker services are available only in the build process, not in the deploy one. Mongo and Redis are persistent data stores, meaning they are not supposed to be reinstalled every time you deploy.
So make sure you manually set up Redis and Mongo in your deploy environment.
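For example, you could start them once on the deploy host as long-lived containers (the names, volume path, and published ports below are illustrative, not taken from your setup) and point the app at the host's address instead of the Wercker-linked environment variables:
docker run -d --name mongo -v /data/mongo:/data/db -p 27017:27017 mongo
docker run -d --name redis -p 6379:6379 redis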
I'm using weave to launch some containers which form a database cluster. I have gotten this working manually on two hosts in EC2 by doing the following:
$HOST1> weave launch
$HOST2> weave launch $HOST1
$HOST1> eval $(weave env)
$HOST2> eval $(weave env)
$HOST1> docker run --name neo-1 -d -P ... my/neo4j-cluster
$HOST2> docker run --name neo-2 -d -P ... my/neo4j-cluster
$HOST3> docker run --name neo-1 -d -P -e ARBITER=true ... my/neo4j-cluster
I can check the logs and everything starts up OK.
When using Ansible I can get the above to work using the command module and an environment variable:
- name: Start Neo Arbiter
  command: 'docker run --name neo-2 -d -P ... my/neo4j-cluster'
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
As that's basically all eval $(weave env) does.
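In other words, the task above assumes that eval $(weave env) boils down to roughly this (the exact output may vary between Weave versions):
export DOCKER_HOST=unix:///var/run/weave/weave.sock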
But when I use the docker module for Ansible, even with the docker_url parameter set to the same thing you see above with DOCKER_HOST, DNS does not resolve between hosts. Here's what that looks like:
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
OR
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
Neither of those works. DNS does not resolve, so the servers never start. I do have other server options set (like SERVER_ID for neo4j), just not shown here for simplicity.
Has anyone run into this? I know the docker module for Ansible uses docker-py. I wonder if there's some kind of incompatibility with Weave?
EDIT
I should mention that when the containers launch they actually show up in WeaveDNS and appear to have been added to the system. I can ping the local hostname of each container as long as it's on the same host. When I go to the other host, though, it cannot ping the containers on the first host. This is despite them registering in WeaveDNS (weave status dns) and weave status showing the correct number of peers and established connections.
This could be caused by the client sending a HostConfig struct in the Docker start request, which is not really how you're supposed to do it but is supported by Docker "for backwards compatibility".
Weave has been fixed to cope, but the fix is not in a released version yet. You could try the latest snapshot version if you're brave.
You can probably kludge it by explicitly setting the DNS resolver to the docker bridge IP in your containers' config - weave has an undocumented helper weave docker-bridge-ip to find this address, and it generally won't change.
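A sketch of that kludge with plain docker run (the --dns flag is the standard Docker option; the image name is the one from the question):
docker run --dns $(weave docker-bridge-ip) --name neo-3 -d -P my/neo4j-cluster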