Ansible can't import docker even though it's installed - docker

I'm trying to build a server that runs a docker container using ansible, but I'm getting the error Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on ubuntu-xenial's Python /usr/bin/python3.
The target machine is Ubuntu Xenial, currently in Vagrant, but I get the same error on an Azure VM. My version of Ansible is 2.9.15, and the Docker version Ansible installs is 20.10.1.
My test inventory file consists of:
all:
  vars:
    ansible_user: vagrant
    ansible_become: yes
    ansible_python_interpreter: /usr/bin/python3
  children:
    a_servers:
      hosts:
        192.168.33.100:
    b_servers:
      hosts:
        192.168.33.100:
    prod:
      hosts:
        192.168.33.100:
The relevant part of my playbook is:
- name: Docker role
  include_role:
    name: nickjj.docker
  vars:
    docker__users: ['deploy']
    docker_registries:
      - #registry_url: "https://index.docker.io/v1/"
        username: "{{ docker_user.username }}"
        password: "{{ docker_user.password }}"

- name: Log in to docker
  docker_login:
    username: "{{ docker_user.username }}"
    password: "{{ docker_user.password }}"
  become: yes
  become_user: deploy
When I log in to the vagrant box as user deploy, I can run docker login and docker-compose up -d without any problems. I can also run /usr/bin/python3 and import docker without issue.
There is no other python on the system.
The error comes from the docker_login task, which can't seem to import docker.
Is there a configuration I'm missing, or something else I've overlooked, that would cause it to fail? Any help is much appreciated.
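A common cause worth checking (an assumption on my part, not from the original post): with become, the module may run under a different account, and the SDK has to be importable by the exact interpreter set in ansible_python_interpreter for that account. A minimal sketch of a task that installs the SDK for that interpreter, assuming the target's pip3 belongs to /usr/bin/python3:

- name: Ensure the Docker SDK is importable by Ansible's interpreter
  pip:
    name: docker
    executable: pip3   # assumption: this pip3 installs into /usr/bin/python3
  become: yes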

Related

Restart multiple Docker Containers using Ansible not happening

I am trying to restart my Docker containers one by one for a particular image using Ansible, but it doesn't seem to be happening. Below is my YAML; instead of restarting, it exits the currently running container.
---
- name: restart app servers
  hosts: shashank-VM
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: Get info on the Container
      shell: docker ps | awk '/{{ item }}/{print $1}'
      register: list_of_containers
      with_items:
        - ubuntu
    - name: Restart Docker Service
      docker_container:
        name: "{{ item }}"
        # image: ubuntu
        state: started
        restart: yes
      with_items: "{{ list_of_containers.results | map(attribute='stdout_lines') | list }}"
As you can see in the output below, when I run docker ps there are no running containers.
TASK [Restart Docker Service] ****************************************************************************************************************
/usr/lib/python2.7/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.25.9) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
changed: [shashank-VM] => (item=c2310b76b005)
PLAY RECAP ***********************************************************************************************************************************
shashank-VM : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
shashank@shashank-VM:~/ansible$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
What am I doing wrong? Can someone help?
I don't think the docker_container module is designed to do what you want (i.e., restart an existing container). The module is designed to manage containers by name, not by id, and will check that the running container matches the options provided to docker_container.
You're probably better off simply using the docker command to restart your containers:
---
- name: restart app servers
  hosts: shashank-VM
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: Get info on the Container
      shell: docker ps | awk '/{{ item }}/{print $1}'
      register: list_of_containers
      with_items:
        - ubuntu
    - name: Restart Docker Service
      command: docker restart {{ item }}
      with_items: "{{ list_of_containers.results | map(attribute='stdout_lines') | list }}"

How to force Ansible to recreate a docker container if mounted files have changed

I'm trying to get Ansible to recreate an existing docker container in case one of the mounted files has changed. I tried to use docker_container to remove the container, if it exists and any file has changed, before I deploy it using docker_stack and a compose file. Here is the code:
- name: Template configuration files to destination
  template:
    ...
  loop:
    ...
  register: template_files_result

- name: Get docker container name
  shell: "docker ps -f 'name=some_name' -q"
  register: container_id

- name: Remove container
  docker_container:
    name: container_id.stdout
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any

- name: Deploy
  docker_stack:
    state: present
    name: stack_name
    compose:
      - "compose.yml"
    with_registry_auth: true
However, the Remove container task never does anything and I can't figure out why.
What am I missing?
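Not from the original thread, but one plausible culprit: name: container_id.stdout is a bare string, not a Jinja2 expression, so the module looks for a container literally named container_id.stdout. A hedged sketch of the task with the registered value templated:

- name: Remove container
  docker_container:
    name: "{{ container_id.stdout }}"   # template the registered container id
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any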

ansible jenkins_plugin module: error when replaying a playbook: Jenkins home not found

I have a problem with the jenkins_plugin module.
Within a playbook that pulls a Jenkins docker image (jenkins/jenkins:lts-alpine) and runs it to install and configure the instance, I have a task that installs a list of plugins on the instance:
- name: Install plugins
  jenkins_plugin:
    owner: "{{ jenkins_process_user }}"
    group: "{{ jenkins_process_group }}"
    name: "{{ item }}"
    state: latest
    timeout: 120
    url: "http://{{ jenkins_hostname }}:{{ jenkins_http_port }}{{ jenkins_url_prefix }}"
    url_username: "{{ jenkins_admin_name }}"
    url_password: "{{ jenkins_admin_password }}"
    with_dependencies: yes
  loop: "{{ jenkinsPlugins }}"
  register: pluginResult
  until: not pluginResult.failed
  retries: 5
  notify: restart_image
  become: True
  become_user: "{{ jenkins_process_user }}"
It works correctly when the playbook is run for the first time.
All plugins are installed, possibly with retries in case of problems.
But when I relaunch exactly the same playbook, each and every plugin installation is retried up to the maximum number of retries and fails with (for example):
failed: [devEnv] (item=blueocean) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": "blueocean", "msg": "Jenkins home directory doesn't exist."}
For sure, I have verified that the Jenkins home directory actually exists and has the expected "{{ jenkins_process_user }}" and "{{ jenkins_process_group }}" owner and group, which is jenkins:jenkins.
Note that the docker container is bound to a local directory which belongs to jenkins:jenkins. To be sure the uid and gid are the same on the local machine (a VM created with Vagrant) and in the container, the uid:gid are forced to 1001:1001 when starting the container.
I have also checked that this is actually the case.
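For context, here is roughly how such a container could be started from a playbook (the container name, port mapping, and mount path are my assumptions, not details from the question):

- name: Run Jenkins with a forced uid:gid (illustrative sketch)
  docker_container:
    name: jenkins                       # assumed container name
    image: jenkins/jenkins:lts-alpine
    user: "1001:1001"                   # match the owner of the bound host directory
    volumes:
      - /home/jenkins/jenkins_home:/var/jenkins_home   # assumed host path
    published_ports:
      - "8080:8080"                     # assumed port mapping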
I really cannot explain why I get this error, which clearly makes this playbook non-idempotent!
I know that there is a way to install plugins via a shell script provided by Jenkins, but I'd like to stick with an Ansible playbook as far as possible.
For sure, I can give the whole playbook if you need additional information.
Thanks for your help.
J-L
Ok, I understand the problem.
Reading the jenkins_plugin documentation again, and looking at the module code, I found that installing a plugin and checking the version of an already-installed plugin do not run the same code (two different branches of a test).
The second one needs JENKINS_HOME to be defined, which is an optional module parameter (defaulting to /var/lib/jenkins). I did not set it.
It is actually /var/lib/jenkins in the container, but not on the docker controller machine, which is the Ansible playbook target, where it is /home/jenkins/jenkins_home.
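For reference, a minimal sketch of the corrected task (the variables are the ones from the question; jenkins_home is the module parameter discussed above):

- name: Install plugins
  jenkins_plugin:
    name: "{{ item }}"
    jenkins_home: /home/jenkins/jenkins_home   # the path as seen on the Ansible target
    url: "http://{{ jenkins_hostname }}:{{ jenkins_http_port }}{{ jenkins_url_prefix }}"
    url_username: "{{ jenkins_admin_name }}"
    url_password: "{{ jenkins_admin_password }}"
    with_dependencies: yes
  loop: "{{ jenkinsPlugins }}"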
So... This question is closed, unless someone has additional information to give. You're welcome!
Best regards.

Ansible doesn't add credentials to ~/.docker/config.json despite successful login to registry

I can't log in to a private Docker registry via Ansible. The login itself is successful, which I can see in the registry container logs; besides, Ansible doesn't throw errors during the login task. However, after running the Ansible role, I can't pull images. Apparently Ansible never adds any credentials to Docker's config file. When I log in manually from the host machine, everything works fine.
Does anyone know what the problem is?
versions: Ansible 2.3.2.0, Docker 17.12
Ansible role in main.yml:
---
# should work but does not - ansible doesn't add credentials to ~/.docker/config.json... have to log in manually
- name: login to private registry
  docker_login:
    registry: "{{ registry_container_url }}"
    username: "{{ registry_user }}"
    password: "{{ registry_password }}"
    reauthorize: yes
The playbook only executes this role:
---
- hosts: host1
  become: yes
  roles:
    - testrole
Log:
ok: [node02]
TASK [docker-test : login to private registry] *************************************************************************
changed: [node02]
PLAY RECAP ************************************************************************************************************
node02 : ok=2 changed=1 unreachable=0 failed=0
Ok, I got it.
become has to be set to no in the playbook.
You could also add the docker config to root's home if you need to run privileged, i.e.:
/root/.docker/config.json
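A minimal sketch of the playbook with that change applied (host and role names are the ones from the question):

---
- hosts: host1
  become: no   # run as the connecting user so credentials land in their ~/.docker/config.json
  roles:
    - testrole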

ImportError: No module named docker.client

I've run into a problem when using Ansible to configure a Docker container.
Here is my ansible playbook:
---
- name: localhost
  hosts: localhost
  vars:
    - ansible_python_interpreter: python
  tasks:
    - name: busybox test
      docker:
        image: busybox
        name: test
But when I run the file with:
ansible-playbook ad.yml
I get the following error:
from docker.client import APIError as DockerAPIError
ImportError: No module named docker.client
No docker.client? I have installed docker-py, but I still get this error. How can I fix it? Help!
I think the problem was caused by the way I installed Ansible. After reinstalling Ansible through python-pip, the problem was solved:
zypper install python-pip
pip install ansible
Use this to fix the problem on SUSE.
