I'm very new to Ansible. I ran the following Ansible playbook and hit the error below:
---
- hosts: webservers
  remote_user: linx
  become: yes
  become_method: sudo
  tasks:
    - name: install docker-py
      pip: name=docker-py
    - name: Build Docker image from Dockerfile
      docker_image:
        name: web
        path: docker
        state: build
    - name: Running the container
      docker_container:
        image: web:latest
        path: docker
        state: running
    - name: Check if container is running
      shell: docker ps
Error message:
FAILED! => {"changed": false, "msg": "Error connecting: Error while
fetching server API version: ('Connection aborted.', error(2, 'No such
file or directory'))"}
And here is my folder structure:
.
├── ansible.cfg
├── docker
│   └── Dockerfile
├── hosts
├── main.retry
├── main.yml
I'm confused: the docker folder already exists on my machine, so I don't know why I got this error.
I found the solution: the Docker daemon was not running after Docker was installed by Ansible. The following tasks needed to be added to my playbook:
---
- hosts: webservers
  remote_user: ec2-user
  become: yes
  become_method: sudo
  tasks:
    - name: install docker
      yum: name=docker
    - name: Ensure the Docker service is running
      command: service docker restart
    - name: copying file to remote
      copy:
        src: ./docker
        dest: /home/ec2-user/docker
    - name: Build Docker image from Dockerfile
      docker_image:
        name: web
        path: /home/ec2-user/docker
        state: build
    - name: Running the container
      docker_container:
        image: web:latest
        name: web
    - name: Check if container is running
      shell: docker ps
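As an aside, the restart via the raw command module works but is not idempotent. A slightly more idiomatic alternative (a sketch using Ansible's service module, which also enables Docker at boot) would be:

    - name: Ensure Docker is started and enabled at boot
      service:
        name: docker
        state: started
        enabled: yes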
I faced the same problem: I was trying to perform a docker login and got the same weird error. In my case, the Ansible user did not have the necessary Docker credentials. The solution in that case is to switch to a user that has them:
- name: docker login
  hosts: my_server
  become: yes
  become_user: docker_user
  tasks:
    - docker_login:
        registry: myregistry.com
        username: myusername
        password: mysecret
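If switching users is not an option, another common fix (a different technique from the one above, sketched here on the assumption that a docker group exists, grants access to the daemon socket, and that the connection user is available as ansible_user) is to add the connecting user to that group:

- name: Add the connecting user to the docker group
  user:
    name: "{{ ansible_user }}"
    groups: docker
    append: yes
  become: yes

Note that group membership only takes effect on a new login, so a reconnect (e.g. meta: reset_connection) may be needed before subsequent Docker tasks succeed.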
This is a continuation of my journey of creating multiple Docker projects dynamically. I did not mention previously that, to make this process dynamic (I want devs to specify which projects they want to use), I'm using Ansible to bring up the local environment.
The logic is:

- Run ansible-playbook run.yml -e "{projectsList: ['app-admin']}", providing the list of projects I want to start.
- Stop the existing main containers (in case they are running from a previous run).
- Start the main containers.
- Depending on the provided list of projects, run the role tasks (I have a separate role for each supported project):
  - stop the existing child project containers (in case they are running from a previous run),
  - start the child project containers,
  - apply some configuration depending on the role.
And here is the issue (again) with the network: when I stop the main containers, it fails with the message:

error while removing network: network appnetwork has active endpoints

This makes sense, as the child Docker containers use the same network, but so far I don't see a way to change the ordering of the tasks: since I'm using roles, the main Docker tasks always run before the role-specific ones.
The main Ansible file:
---
#- import_playbook: './services/old.yml'
- hosts: localhost
  gather_facts: true
  vars:
    # add list of all supported projects, THIS SHOULD BE UPDATED FOR EACH NEW PROJECT!
    supportedProjects: ['all', 'app-admin', 'app-landing']
  vars_prompt:
    - name: "ansible_become_pass"
      prompt: "Sudo password"
      private: yes
  pre_tasks:
    # List of projects should be provided
    - fail: msg="The list of projects you want to run the playbook for was not provided"
      when: (projectsList is not defined) or (projectsList|length == 0)
    # Remove unsupported projects from list
    - name: Filter out not supported projects
      set_fact:
        filteredProjectsList: "{{ projectsList | intersect(supportedProjects) }}"
    # Check if any of projects exist after filtering
    - fail: msg="None of the projects you provided are supported. Supported projects are {{ supportedProjects }}"
      when: filteredProjectsList|length == 0
    # Always stop existing docker containers
    - name: stop existing common app docker containers
      docker_compose:
        project_src: ../docker/common/
        state: absent
    - name: start common app docker containers like nginx proxy, redis, mailcatcher etc. (this can take a while if running for the first time)
      docker_compose:
        project_src: ../docker/common/
        state: present
        build: no
        nocache: no
    - name: Get www-data id
      command: docker exec app-php id -u www-data
      register: wwwid
    - name: Get current user group id
      command: id -g
      register: userid
    - name: Register user and www-data ids
      set_fact:
        userid: "{{ userid.stdout }}"
        wwwdataid: "{{ wwwid.stdout }}"
  roles:
    - { role: app-landing, when: '"app-landing" in filteredProjectsList or "all" in filteredProjectsList' }
    - { role: app-admin, when: ("app-admin" in filteredProjectsList) or ("all" in filteredProjectsList) }
And an example role, app-admin/tasks/main.yml:
---
- name: Sync {{name}} with git (can take a while to clone the repo the first time)
  git:
    repo: "{{gitPath}}"
    dest: "{{destinationPath}}"
    version: "{{branch}}"
- name: stop existing {{name}} docker containers
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: absent
- name: start {{name}} docker containers (this can take a while if running for the first time)
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: present
    build: no
    nocache: no
- name: Copy {{name}} env file
  copy:
    src: development.env
    dest: "{{destinationPath}}.env"
    force: no
- name: Set file permissions for local {{name}} project files
  command: chmod -R ug+w {{projectPath}}
  become: yes
- name: Set execute permissions for local {{name}} bin folder
  command: chmod -R +x {{projectPath}}/bin
  become: yes
- name: Set user/group for {{name}} to {{wwwdataid}}:{{userid}}
  command: chown -R {{wwwdataid}}:{{userid}} {{projectPath}}
  become: yes
- name: Composer install for {{name}}
  command: docker-compose -f {{mainDockerComposeFileDestination}}docker-compose.yml exec -T app-php sh -c "cd {{containerProjectPath}} && composer install"
Maybe there is a way to somehow disconnect the network when the main containers stop. I thought that declaring the network as external in the child containers' compose file:

networks:
  appnetwork:
    external: true

would solve the issue, but it doesn't.
A quick experiment with an external network:
dc1/dc1.yml
version: "3.0"
services:
nginx:
image: nginx
ports:
- "8080:80"
networks:
- an0
networks:
an0:
external: true
dc2/dc2.yml
version: "3.0"
services:
redis:
image: redis
ports:
- "6379:6379"
networks:
- an0
networks:
an0:
external: true
Starting and stopping:
$ docker network create -d bridge an0
1e07251e32b0d3248b6e70aa70a0e0d0a94e457741ef553ca5f100f5cec4dea3
$ docker-compose -f dc1/dc1.yml up -d
Creating dc1_nginx_1 ... done
$ docker-compose -f dc2/dc2.yml up -d
Creating dc2_redis_1 ... done
$ docker-compose -f dc1/dc1.yml down
Stopping dc1_nginx_1 ... done
Removing dc1_nginx_1 ... done
Network an0 is external, skipping
$ docker-compose -f dc2/dc2.yml down
Stopping dc2_redis_1 ... done
Removing dc2_redis_1 ... done
Network an0 is external, skipping
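The same pattern can be driven from Ansible: create the shared network once, outside of any Compose project, and every project that declares it external will leave it alone on docker-compose down. A minimal sketch using the docker_network module (the network name matches the one from the question):

- name: Ensure the shared app network exists outside of Compose
  docker_network:
    name: appnetwork
    driver: bridge
    state: present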
I have set up Drone with the Docker plugin. It builds just fine, but fails to push to a private Docker Hub repo.
I have confirmed that dockerhub_username and dockerhub_password are set as environment variables.
kind: pipeline
type: exec
name: default

steps:
  - name: docker
    image: plugins/docker
    settings:
      repo: jbc22/myrepo
      username:
        from_secret: dockerhub_username
      password:
        from_secret: dockerhub_password

publish:
  image: jbc22/myrepo
  report: jbc22/myrepo
Drone returns with:
denied: requested access to the resource is denied
time="2019-09-03T19:34:32Z" level=fatal msg="exit status 1"
I would expect to see the image pushed to Docker Hub.
Just fixed the same issue... the code below works for me! Compared to the config above, it drops type: exec (plugin steps such as plugins/docker only run in docker pipelines, as far as I can tell) and the stray publish block:
name: default
kind: pipeline

steps:
  - name: backend
    image: python:3.7
    commands:
      - pip3 install -r req.txt
      - python manage.py test
  - name: publish
    image: plugins/docker
    settings:
      username: dockerhub_username
      password: dockerhub_password
      repo: user/repo_name
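If you would rather not hard-code credentials, the same publish step should also work with from_secret, provided the secrets dockerhub_username and dockerhub_password have actually been added to the repository via the Drone UI or CLI. A sketch:

  - name: publish
    image: plugins/docker
    settings:
      repo: user/repo_name
      username:
        from_secret: dockerhub_username
      password:
        from_secret: dockerhub_password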
I have a Kubernetes v1.8.6 cluster on Google Cloud Platform.
My desktop is a MacBook Pro running High Sierra; kubectl is installed via the google-cloud-sdk, and Docker is installed as a VM using Homebrew.
I deployed the php Docker image using the following Kubernetes deployment YAML file:
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: php-deployment
  labels:
    app: php
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
      - name: php
        image: php:7.1.13-apache-jessie
        volumeMounts:
        - mountPath: /var/www/html
          name: httpd-storage
        - mountPath: /etc/apache2
          name: httpd-conf-storage
        - mountPath: /usr/local/etc/php
          name: php-storage
        ports:
        - containerPort: 443
        - containerPort: 80
      volumes:
      - name: httpd-storage
        gcePersistentDisk:
          fsType: ext4
          pdName: httpd-disk
      - name: httpd-conf-storage
        gcePersistentDisk:
          fsType: ext4
          pdName: httpd-conf-disk
      - name: php-storage
        gcePersistentDisk:
          fsType: ext4
          pdName: php-disk
I installed it with kubectl create -f yaml.file, and it works; so far so good.
Now I want to extend this image to install Certbot on it, so I created the following Dockerfile:
FROM php:7.1.13-apache-jessie
RUN bash -c 'echo deb http://ftp.debian.org/debian jessie-backports main >> /etc/apt/sources.list'
RUN apt-get update
RUN apt-get install -y python-certbot-apache -t jessie-backports
I placed this file in a directory called build and built an image from the Dockerfile using docker build -t tuxin-php ./build.
I have no idea where the Docker image is placed, because Docker runs as a VM on High Sierra, and I'm a bit confused about whether I have local access or need to use scp, though that may not be needed.
Is there a way to directly install the Dockerfile that I created? Do I have to create the image and somehow install it, and if so, how?
I'm a bit confused, so any information regarding the issue would be greatly appreciated. Thank you.
First of all, you need to build your Docker image. Then you need to push the image to a Docker registry, so that your pod can pull it from there; building the image alone is not enough.
As for where to keep the image, you can use https://hub.docker.com/.
You can follow these steps:
Create an account at https://hub.docker.com/.
Configure your machine to use your Docker Hub account with this command:
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: <your docker hub username>
Password: <your docker hub password>
Now you are ready to push your Docker image to your registry.
In this case you want to push the image named tuxin-php, so you need to create a repository on Docker Hub with the same name, tuxin-php (see https://docs.docker.com/docker-hub/repos/).
Try this now
$ docker build -t xxxx/tuxin-php ./build
$ docker push xxxx/tuxin-php
Here, xxxx is your Docker Hub username.
When you push xxxx/tuxin-php, the image is stored in the tuxin-php repository under your username.
And finally, you have to use this image:

containers:
- name: php
  image: xxxx/tuxin-php

Your pod will pull xxxx/tuxin-php from Docker Hub.
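One caveat: if you make the Docker Hub repository private, the pod also needs pull credentials. A minimal sketch using an image pull secret (regcred is a placeholder name, created beforehand with kubectl create secret docker-registry):

    spec:
      containers:
      - name: php
        image: xxxx/tuxin-php
      imagePullSecrets:
      - name: regcred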
Hope this helps!
I am trying to build a Docker container that should include startup scripts in the container's /etc/my_init.d directory, via Ansible. I am having difficulty finding any documentation on how to do this. Here is the relevant portion of my YAML file:
- name: Create container
  docker:
    name: myserver
    image: "{{ docker_repo }}/myserver:{{ server.version }}"
    state: started
    restart_policy: always
    docker_api_version: 1.18
    registry: "{{ docker_repo }}"
    username: "{{ registry_user }}"
    password: "{{ registry_password }}"
    links:
      - "mywebservices"
    ports:
      - "8000:8000"
      - "9899:9899"
    volumes:
      - "{{ myserver_home_dir }}/logs:/var/log/my_server"
    env:
      MY_ENVIRONMENT: "{{ my_environment }}"
  when: myserver_action == "create"
        or (myserver_action == "diff-create" and myserver.changed)
        or myserver_action == "update"
What should I add here to tell Ansible to put my files into the container's /etc/my_init.d during the build?
First of all, you don't build containers (you start them); you build images.
Second, the docker module is deprecated; use docker_image to build images.
You should copy your files into a build directory (with the copy or synchronize modules), for example /tmp/build. Then create a Dockerfile that takes them from the build directory and adds them to your image.
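For example, the copy step might look like this (the paths and file names are illustrative, not from the question):

- name: Copy startup scripts into the build directory
  copy:
    src: files/my_init.d/
    dest: /tmp/build/my_init.d/

with a matching line such as COPY my_init.d/ /etc/my_init.d/ in the Dockerfile.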
After that, call docker_image:

docker_image:
  path: /tmp/build
  name: myimage

Finally, start your container:

docker_container:
  image: myimage
  name: mycontainer
Unsure if it's relevant, as I don't know what your startup Ansible content is doing, but it's probably worth looking at the Ansible Container project.
https://github.com/ansible/ansible-container
You can build your container images using Ansible roles instead of a Dockerfile, orchestrate them locally, and deploy them to production Kubernetes or Red Hat OpenShift.
I have a problem when I run an Ansible role to install Docker in a CentOS 7 VM.
When the docker_login task runs, I get the following error:
"msg": "Docker API Error: client is newer than server (client API version: 1.24, server API version: 1.22)"
And this is the Ansible role:
- name: Install python setup tools
  yum: name=python-setuptools
  tags: docker

- name: Install Pypi
  easy_install: name=pip
  tags: docker

- name: Install docker-py
  pip: name=docker-py
  tags: docker

- name: Install Docker
  yum: name=docker state=latest
  tags: docker

- name: Make sure Docker is running
  service: name=docker state=started
  tags: docker

- include: setup.yml

- name: login to private Docker remote registry and force reauthentication
  docker_login:
    registry: "{{ item.insecure_registry }}"
    username: "{{ item.registry_user }}"
    password: "{{ item.registry_password }}"
    reauth: yes
  with_items:
    - "{{ private_docker_registry }}"
  when: private_docker_registry is defined
This installs Docker 1.10.3, which speaks API version 1.22.
Add the api_version argument to the docker_login module:
- name: login to private Docker remote registry and force reauthentication
  docker_login:
    registry: "{{ item.insecure_registry }}"
    username: "{{ item.registry_user }}"
    password: "{{ item.registry_password }}"
    reauth: yes
    api_version: 1.22
  with_items:
    - "{{ private_docker_registry }}"
  when: private_docker_registry is defined
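Alternatively, the Ansible docker modules (via docker-py) honour the DOCKER_API_VERSION environment variable, so the version can be pinned without changing the module arguments. A sketch using Ansible's task-level environment keyword:

- name: login to private Docker remote registry and force reauthentication
  docker_login:
    registry: "{{ item.insecure_registry }}"
    username: "{{ item.registry_user }}"
    password: "{{ item.registry_password }}"
    reauth: yes
  environment:
    DOCKER_API_VERSION: "1.22"
  with_items:
    - "{{ private_docker_registry }}"
  when: private_docker_registry is defined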