Behaviour of Ansible hosts: setting on Vagrant or Docker

I provision a Vagrant box with Ansible, and my ansible/site.yml contains the following hosts entry:
---
- hosts: all
I decided to set up CI to test the Ansible code under ansible/. But with Docker, Ansible complains:
PLAY [all] ***************************************************************
skipping: no hosts matched
Then I changed the hosts entry to localhost, and now it works in Docker! But now it refuses to run under Vagrant!
PLAY [localhost] ***************************************************************
skipping: no hosts matched
I am not using Vagrant and Docker together! Vagrant is used on my machine and Docker in the CI, but both run the same Ansible playbook!
TL;DR: Vagrant only works with hosts: all, and Docker only works with hosts: localhost.

It seems you are starting ansible-playbook with an empty inventory in your CI environment.
Add the -i 'local,' -c local parameters to define an inventory with a single host named local (the trailing comma makes Ansible treat the string as a literal host list rather than a file path) and to set the connection mode to local.
Your command line should look like:
ansible-playbook -i 'local,' -c local playbook.yml
In this case hosts: all will work fine.
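For background: hosts: all matches under Vagrant because the Ansible provisioner auto-generates an inventory for the box, with an entry roughly like the illustrative line below (the host name, port, and key path vary per machine). Vagrant also typically passes --limit with the box name, which is why a hosts: localhost play gets skipped there.
default ansible_host=127.0.0.1 ansible_port=2222 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key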

Related

Run ansible playbook from docker container and deploy on host machine

I would like to run ansible playbook on my local machine using ansible from a docker container.
Here is what my Ansible Dockerfile looks like:
FROM alpine:3.6
WORKDIR /ansible
RUN apk update \
    && apk add ansible
ENTRYPOINT ["ansible-playbook"]
playbook.yml:
---
- hosts: localhost
  roles:
    - osx
roles/osx/tasks/main.yml
---
- name: Welcome
  shell: echo "Hello"
  when: ansible_distribution == 'MacOSX'
Then I run it with:
docker build -t ansible_image:latest .
docker run --rm --network host \
    -v $(pwd):/ansible \
    ansible_image:latest ansible/playbook.yml
My host operating system is OS X, so I expected the osx role to execute. However, it seems the playbook runs against the Alpine container instead. How can I tell Ansible running in Docker to deploy to my local machine?
Your playbook is targeting localhost:
---
- hosts: localhost
  roles:
    - osx
This means that Ansible is going to target the local machine (which is
to say, your Ansible container) when running the playbook. Ansible is
designed to apply playbooks to remote machines as well, typically by
connecting to them using ssh. Assuming that it's possible to
connect from your Ansible container to your host using ssh, you
could just create an appropriate inventory file and then target your
playbook appropriately:
---
- hosts: my_osx_host
  roles:
    - osx
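If SSH from the container to your Mac is enabled (Remote Login in System Preferences), a minimal inventory sketch could look like the line below; host.docker.internal is Docker for Mac's name for the host (available in Docker 18.03+), and the user name is an assumption:
my_osx_host ansible_host=host.docker.internal ansible_user=youruser
Arguments after the image name go to the ansible-playbook entrypoint, so you could then run something like docker run --rm -v $(pwd):/ansible ansible_image:latest -i inventory playbook.yml.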
If you're just starting out with Ansible, you might want to start with
the Getting Started document and work your way from there. You'll find documentation on that site that should walk you through the process of creating an inventory file.

Calling docker stack deploy on a docker host from within a Jenkins container

On my OS X host, I'm using Docker CE (18.06.1-ce-mac73 (26764)) with Kubernetes enabled and using Kubernetes orchestration. From this host, I can run a stack deploy to deploy a container to Kubernetes using this simple docker-compose file (kube-compose.yml):
version: '3.3'
services:
  web:
    image: dockerdemos/lab-web
    volumes:
      - "./web/static:/static"
    ports:
      - "9999:80"
and this command line, run from the directory containing the compose file:
docker stack deploy --compose-file ./kube-compose.yml simple_test
However, when I attempt to run the same command from my Jenkins container, Jenkins returns:
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
I do not want the docker client in the Jenkins container to be initialized for a swarm since I'm not using Docker swarm on the host.
The Jenkins container is defined in a docker-compose to include a volume mount to the docker host socket endpoint:
version: '3.3'
services:
  jenkins:
    # contains embedded docker client & blueocean plugin
    image: jenkinsci/blueocean:latest
    user: root
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - ./jenkins_home:/var/jenkins_home
      # run Docker from the host system when the container calls it.
      - /var/run/docker.sock:/var/run/docker.sock
      # root of simple project
      - .:/home/project
    container_name: jenkins
I have also followed this guide to proxy requests to the docker host with socat (https://github.com/docker/for-mac/issues/770), as well as the approach described in "Docker-compose: deploying service in multiple hosts".
Finally, I'm using the following Jenkins definition (Jenkinsfile) to call stack to deploy on my host. Jenkins has the Jenkins docker plug-in installed:
node {
    checkout scm
    stage ('Deploy To Kube') {
        docker.withServer('tcp://docker.for.mac.localhost:1234') {
            sh 'docker stack deploy app --compose-file /home/project/kube-compose.yml'
        }
    }
}
I've also tried changing the withServer signature to:
docker.withServer('unix:///var/run/docker.sock')
and I get the same error response. I am, however, able to telnet to the docker host from the Jenkins container, so I know it's reachable. Also, as I mentioned earlier, I know the message says to run swarm init, but I am not deploying to a swarm.
I checked the version of the docker client in the Jenkins container and it is the same version (Linux variant, however) as I'm using on my host:
Docker version 18.06.1-ce, build d72f525745
Here's the code I've described: https://github.com/ewilansky/localstackdeploy.git
Please let me know if it's possible to do what I'm hoping to do from the Jenkins container. The purpose for all of this is to provide a simple, portable demonstration of a pipeline and deploying to Kubernetes is the last step. I understand that this is not the approach that would be taken anywhere outside of a local development environment.
Here is an approach that's working well for me until the Jenkins Docker plug-in or the Kubernetes Docker Stack Deploy command can support the remote deployment scenario I described.
I'm now using the Kubernetes client kubectl from the Jenkins container. To minimize the size increase of the Jenkins container, I added just the Kubernetes client to the jenkinsci/blueocean image that was built on Alpine Linux. This Dockerfile shows the addition:
FROM jenkinsci/blueocean
USER root
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
RUN mkdir /root/.kube
COPY kube-config /root/.kube/config
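To rebuild the Jenkins image with kubectl baked in (the tag name is illustrative):
docker build -t blueocean-kubectl .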
I took this approach, which added ~100 MB to the image size, rather than installing the Alpine Linux Kubernetes package, which almost doubled the size of the image in my testing. Granted, the Kubernetes package has all the Kubernetes components, but all I needed was the Kubernetes client. This is similar to the requirement that the docker client be present in the Jenkins container in order to run Docker commands on the host.
Notice in the Dockerfile that there is a reference to the Kubernetes config file:
kube-config /root/.kube/config
I started with the Kubernetes configuration file on my host machine (the computer running Docker for Mac). I believe that if you enable Kubernetes in Docker for Mac, the Kubernetes client configuration will be present at ~/.kube/config. If not, install the Kubernetes client tools separately. In the Kubernetes configuration file that you will copy over to the Jenkins container via the Dockerfile, just change the server value so that the Jenkins container points at the Docker for Mac host:
server: https://docker.for.mac.localhost:6443
If you're using a Windows machine, I think you can use docker.for.win.localhost. There's a discussion about this here: https://github.com/docker/for-mac/issues/2705 and other approaches described here: https://github.com/docker/for-linux/issues/264.
After recomposing the Jenkins container, I was then able to use kubectl to create a deployment and service for my app that's now running in the Kubernetes Docker for Mac host. In my case, here are the two commands I added to my Jenkins file:
stage ('Deploy To Kube') {
    sh 'kubectl create -f /kube/deploy/app_set/sb-demo-deployment.yaml'
}
stage('Configure Kube Load Balancer') {
    sh 'kubectl create -f /kube/deploy/app_set/sb-demo-service.yaml'
}
There are loads of options for Kubernetes container deployments. In my case, I simply needed to deploy my web app (with replicas) behind a load balancer. All of that is defined in the two YAML files applied by kubectl. This is a bit more involved than docker stack deploy, but it achieves the same end result.
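For reference, here is a minimal sketch of what such a manifest pair can look like; the names, image, ports, and replica count are illustrative, not the actual files from the repository above:
# sb-demo-deployment.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sb-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sb-demo
  template:
    metadata:
      labels:
        app: sb-demo
    spec:
      containers:
        - name: web
          image: dockerdemos/lab-web
          ports:
            - containerPort: 80
# sb-demo-service.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: sb-demo-lb
spec:
  type: LoadBalancer
  selector:
    app: sb-demo
  ports:
    - port: 80
      targetPort: 80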

Ansible deploys Docker container to wrong Vagrant VM

Updated after trying out @konstantin-suvorov's solution. Now it doesn't do anything.
I have created 5 Vagrant VMs, all from bento/xenial64, and have then used Ansible to deploy Docker onto all of the machines.
When I then attempt to use Ansible to deploy a container onto a remote VM, it says that it has done it, but the container is running on the local machine.
My 5 machines are:
control
cluster01
cluster02
cluster03
cluster04
Docker is up and running on all 5.
From VM control, I run
ansible-playbook -i hosts/local jenkins.yml
My inventory file is
[control]
10.100.100.100
[cluster]
10.100.100.101
10.100.100.102
10.100.100.103
10.100.100.104
[master]
10.100.100.101
This is my Jenkins playbook
---
- hosts: master
  remote_user: ubuntu
  serial: 1
  roles:
    - jenkins
and this is my jenkins role
---
- name: Container is running
  docker_container:
    name: jenkins
    image: "jenkins:{{ jenkins_version }}"
    ports:
      - "8080:8080"
    volumes:
      - "{{ jenkins_home_dir }}:/var/jenkins_home"
After running ansible-playbook with the very verbose option and adding the inventory for the Vagrant machines:
vagrant@control:/vagrant$ ansible-playbook -i hosts/local jenkins.yml -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -vvvv
Using /vagrant/ansible.cfg as config file
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: jenkins.yml **********************************************************
1 plays in jenkins.yml
PLAY RECAP *********************************************************************
What am I doing wrong?
Remove ansible_connection=local from the remote nodes.
If ansible_connection is local, Ansible runs all tasks on the local (control) host.
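For illustration, an inventory line with the problem and its fix could look like this (the ansible_user value is an assumption):
10.100.100.101 ansible_user=ubuntu ansible_connection=local
# remove ansible_connection=local so Ansible connects over SSH:
10.100.100.101 ansible_user=ubuntu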

Weave + Ansible Docker Module

I'm using weave to launch some containers which form a database cluster. I have gotten this working manually on two hosts in EC2 by doing the following:
$HOST1> weave launch
$HOST2> weave launch $HOST1
$HOST1> eval $(weave env)
$HOST2> eval $(weave env)
$HOST1> docker run --name neo-1 -d -P ... my/neo4j-cluster
$HOST2> docker run --name neo-2 -d -P ... my/neo4j-cluster
$HOST3> docker run --name neo-1 -d -P -e ARBITER=true ... my/neo4j-cluster
I can check the logs and everything starts up OK.
When using Ansible I can get the above to work using the command module and an environment variable:
- name: Start Neo Arbiter
  command: 'docker run --name neo-2 -d -P ... my/neo4j-cluster'
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
As that's basically all eval $(weave env) does.
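(For reference, eval $(weave env) amounts to roughly the export below, pointing the Docker client at the Weave proxy socket; the exact output can vary between Weave versions.)
export DOCKER_HOST=unix:///var/run/weave/weave.sock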
But when I use the docker module for ansible, even with the docker_url parameter set to the same thing you see above with DOCKER_HOST, DNS does not resolve between hosts. Here's what that looks like:
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
OR
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
Neither of those works. The DNS does not resolve, so the servers never start. I do have other server options set (like SERVER_ID for neo4j, etc.), just not shown here for simplicity.
Anyone run into this? I know the docker module for ansible uses docker-py and stuff. I wonder if there's some type of incompatibility with weave?
EDIT
I should mention that when the containers launch they actually show up in WeaveDNS and appear to have been added to the system. I can ping the local hostname of each container as long as it's on the same host. When I go to the other host, though, it cannot ping the containers on the first host. This is despite them registering in WeaveDNS (weave status dns) and weave status showing the correct number of peers and established connections.
This could be caused by the client sending a HostConfig struct in the Docker start request, which is not really how you're supposed to do it but is supported by Docker "for backwards compatibility".
Weave has been fixed to cope, but the fix is not in a released version yet. You could try the latest snapshot version if you're brave.
You can probably kludge it by explicitly setting the DNS resolver to the docker bridge IP in your containers' config - weave has an undocumented helper weave docker-bridge-ip to find this address, and it generally won't change.
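A hedged sketch of that kludge with plain docker run (the --dns flag is standard Docker; whether your Weave version ships docker-bridge-ip is subject to the caveat above):
docker run --name neo-3 -d -P --dns $(weave docker-bridge-ip) ... my/neo4j-cluster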

How to test Ansible playbook using Docker

I'm new to ansible (and docker). I would like to test my ansible playbook before using it on any staging/production servers.
Since I don't have access to an empty remote server, I thought the easiest way to test would be to use Docker container and then just run my playbook with the Docker container as the host.
I have a basic Dockerfile that creates a standard Ubuntu container. How would I configure the Ansible hosts in order to run the playbook against the Docker container? Also, I suspect I would need to "run" the Docker container to allow Ansible to connect to it.
Running the playbook in a Docker container may not actually be the best approach unless your staging and production servers are also Docker containers. The Docker ubuntu image is stripped down and will have some differences from a full installation. A better option might be to run the playbook in an Ubuntu VM that matches your staging and production installations.
That said, in order to run the ansible playbook within the container you should write a Dockerfile that runs your playbook. Here's a sample Dockerfile:
# Start with the ubuntu image
FROM ubuntu
# Update apt cache
RUN apt-get -y update
# Install ansible dependencies
RUN apt-get install -y python-yaml python-jinja2 git
# Clone ansible repo (could also add the ansible PPA and do an apt-get install instead)
RUN git clone http://github.com/ansible/ansible.git /tmp/ansible
# Set variables for ansible
WORKDIR /tmp/ansible
ENV PATH /tmp/ansible/bin:/sbin:/usr/sbin:/usr/bin
ENV ANSIBLE_LIBRARY /tmp/ansible/library
ENV PYTHONPATH /tmp/ansible/lib:$PYTHONPATH
# add playbooks to the image. This might be a git repo instead
ADD playbooks/ /etc/ansible/
ADD inventory /etc/ansible/hosts
WORKDIR /etc/ansible
# Run ansible using the site.yml playbook
RUN ansible-playbook /etc/ansible/site.yml -c local
The ansible inventory file would look like
[local]
localhost
Then you can just docker build . (where . is the root of the directory where your playbooks and Dockerfile live), then docker run on the resulting image.
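In other words, something like the following, with an illustrative tag; note the playbook actually executes during the build because of the final RUN instruction:
docker build -t ansible-test .
docker run --rm -it ansible-test bash   # poke around the provisioned image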
Michael DeHaan, the CTO of Ansible, has an informative blog post on this topic.
There's a working example regarding this: https://github.com/William-Yeh/docker-ansible
First, choose the base image you'd like to begin with from the following list:
williamyeh/ansible:debian8-onbuild
williamyeh/ansible:debian7-onbuild
williamyeh/ansible:ubuntu14.04-onbuild
williamyeh/ansible:ubuntu12.04-onbuild
williamyeh/ansible:centos7-onbuild
williamyeh/ansible:centos6-onbuild
Second, put the following Dockerfile along with your playbook directory:
FROM williamyeh/ansible:ubuntu14.04-onbuild
# ==> Specify playbook filename; default = "playbook.yml"
#ENV PLAYBOOK playbook.yml
# ==> Specify inventory filename; default = "/etc/ansible/hosts"
#ENV INVENTORY inventory.ini
# ==> Executing Ansible...
RUN ansible-playbook-wrapper
Third, docker build .
For more advanced usage, the role in Ansible Galaxy williamyeh/nginx also demonstrates how to do a simple integration test for a variety of Linux distributions on Travis CI’s Ubuntu 12.04 worker instances.
Disclosure: I am the author of the docker-ansible and williamyeh/nginx projects.
I've created a role for this very scenario: https://github.com/chrismeyersfsu/provision_docker. Easily start Docker containers and use them in your role or playbook, as inventory, to test.
Includes:
Curated Dockerfiles for Ubuntu 12.04 & 14.04 as well as CentOS 6 & 7 that put back the distro-removed init systems
Starting ssh
Also note the examples all have a .travis.yml file to form a CI pipeline using Travis CI.
Examples:
Simple: https://github.com/chrismeyersfsu/provision_docker/tree/master/test
Simple: https://github.com/chrismeyersfsu/role-iptables/tree/master/test
Advanced: https://github.com/chrismeyersfsu/role-install_mongod/tree/master/test
Apart from provisioning localhost (the machine where you have Ansible installed), you can also tell Ansible to:
create a new docker container,
provision that container,
destroy that container.
For this to work, you need a hosts.yaml file like this:
all:
  hosts:
    mycontainer:
      ansible_connection: docker
    localhost:
      ansible_connection: local
a playbook.yaml file like this:
---
- name: Create a container to be provisioned later
  hosts: localhost
  tasks:
    - name: create docker container
      docker_container:
        name: mycontainer
        image: python:2.7.16-slim-stretch
        command: ["sleep", "1d"]

- name: Provision the container created above
  hosts: mycontainer
  roles:
    - simple
and another playbook file, destroy.yaml, used to destroy the container:
---
- name: Destroy a container
  hosts: localhost
  tasks:
    - name: destroy docker container
      docker_container:
        name: mycontainer
        state: absent
Also create a simple role: roles/simple/tasks/main.yaml
---
- name: Create a file
  copy:
    content: "hi!!"
    dest: /tmp/hello
    force: yes
    mode: 0555
And now to create a container and provision it, run:
ansible-playbook -i ./hosts.yaml ./playbook.yaml
Verify that the container was provisioned (the file was created):
docker exec mycontainer cat /tmp/hello
To destroy the container, run:
ansible-playbook -i ./hosts.yaml ./destroy.yaml
There are of course disadvantages:
the container must have Python installed
some Ansible modules might not work, because additional Python packages have to be installed. E.g. if you wanted to deploy Docker containers (inside the Docker container), you would have to install the Docker SDK for Python (pip3 install docker); see the sketch below.
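For that second point, a minimal sketch of a task you could prepend to the provisioning play so that Docker modules work inside the container (this assumes pip is available in the image, as it is in python:2.7.16-slim-stretch):
- name: Install the Docker SDK for Python inside the container
  pip:
    name: docker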
I was inspired by this blog post: https://medium.com/@andreilhicas/provision-docker-containers-with-ansible-30cc5ee6d950
