Jenkins docker_login ansible playbook : Permission denied - docker

I would like to push a Docker image into a Docker registry with Jenkins.
When I execute the Ansible playbook I get:
"msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
I suppose that Ansible runs under the jenkins user, because of this link and because of the log file:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
Because the Ansible playbook tries to do a docker_login, I understand that the jenkins user needs to be able to connect to Docker.
So I added jenkins to the docker group (roughly as in the sketch below).
I still don't understand why permission is denied.
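For reference, here is a minimal sketch of that step as an Ansible task (this is only an assumption about how the group change was made; note that group membership only takes effect once the Jenkins process is restarted):
- name: Add the jenkins user to the docker group
  user:
    name: jenkins
    groups: docker
    append: yes       # keep the user's existing groups
  become: yes         # modifying group membership needs root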
The whole Jenkins log file:
TASK [Log into Docker registry]
************************************************
task path: /var/jenkins_home/workspace/.../build_docker.yml:8
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/cloud/docker/docker_login.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" && echo ansible-tmp-1543388409.78-179785864196502="` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpFASoHo TO /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/ /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py; rm -rf "/var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"api_version": null,
"cacert_path": null,
"cert_path": null,
"config_path": "~/.docker/config.json",
"debug": false,
"docker_host": null,
"email": null,
"filter_logger": false,
"key_path": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"reauthorize": false,
"registry_url": "https://registry.docker....si",
"ssl_version": null,
"timeout": null,
"tls": null,
"tls_hostname": null,
"tls_verify": null,
"username": "jenkins"
},
"module_name": "docker_login"
},
"msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
}
to retry, use: --limit @/var/jenkins_home/workspace/.../build_docker.retry
The whole Ansible playbook:
---
- hosts: localhost
  vars:
    git_branch: "{{ GIT_BRANCH|default('development') }}"
  tasks:
    - name: Log into Docker registry
      docker_login:
        registry_url: https://registry.docker.....si
        username: ...
        password: ....

In case anyone has the same problem, I found the solution.
My registry doesn't have a valid HTTPS certificate, so you need to add
{
"insecure-registries" : [ "https://registry.docker.....si" ]
}
inside /etc/docker/daemon.json
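If that file is managed with Ansible anyway, a hedged sketch of pushing the same setting and restarting the daemon so it is re-read could look like this (the task names are illustrative; the path and registry URL come from the snippet above):
- name: Allow the insecure registry
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "insecure-registries" : [ "https://registry.docker.....si" ]
      }
  become: yes

- name: Restart Docker so daemon.json is picked up
  service:
    name: docker
    state: restarted
  become: yes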

Related

state is present but all of the following are missing: source

I have an Ansible script to build a Docker image:
---
- hosts: localhost
  tasks:
    - name: build docker image
      docker_image:
        name: bionic
        path: /
        force: yes
      tags: deb
and a Dockerfile:
FROM ubuntu:bionic
RUN export DEBIAN_FRONTEND=noninteractive; \
    apt-get -qq update && \
    apt-get -qq install \
      software-properties-common git curl wget openjdk-8-jre-headless debhelper devscripts
WORKDIR /workspace
When I run the following command: ansible-playbook build.yml -vvv
I receive the following error.
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/devpc/.ansible/tmp/ansible-tmp-1633701673.999151-517949-133730725910177/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"api_version": "auto",
"archive_path": null,
"build": null,
"ca_cert": null,
"client_cert": null,
"client_key": null,
"debug": false,
"docker_host": "unix://var/run/docker.sock",
"force": true,
"force_absent": false,
"force_source": false,
"force_tag": false,
"load_path": null,
"name": "xroad-deb-bionic",
"path": "/",
"pull": null,
"push": false,
"repository": null,
"source": null,
"ssl_version": null,
"state": "present",
"tag": "latest",
"timeout": 60,
"tls": false,
"tls_hostname": null,
"use_ssh_client": false,
"validate_certs": false
}
},
"msg": "state is present but all of the following are missing: source"
}
Could you please give me a hint on how to debug this and understand what this error means?
Thanks for your time and consideration.
The error says a source: key must be present in the docker_image: block.
More specifically, most things in Ansible default to state: present. When you request a docker_image to be present, there are a couple of ways to get it (pulling it from a registry, building it from source, unpacking it from a tar file). Which way to do this is specified by the source: control, but Ansible does not have a default value for this.
If you're building an image, you need to specify source: build. Having done that, there are also a set of controls under build:. In particular, the path to the Docker image context (probably not /) goes there and not directly under docker_image:.
This leaves you with something like:
---
- hosts: localhost
  tasks:
    - name: build docker image
      docker_image:
        name: bionic
        source: build                      # <-- add
        build:                             # <-- add
          path: /home/user/src/something   # <-- move under build:
        force: yes
      tags: deb
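For contrast, if the goal were only to have the image present by pulling it from a registry, the same source: control would be set to pull instead; a minimal sketch (the image name here is just an example):
- name: pull docker image instead of building it
  docker_image:
    name: ubuntu:bionic
    source: pull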

Unable to run npm command in ansible awx_task container

I have been using Ansible core for some time now and am expanding my team, so the need for Ansible AWX has become a little more pressing. I have been working at it for a week now and I think it's time to shout for help.
We have a process of replacing the baseurl of AngularJS apps with a variable using Ansible and setting some settings before we compile them (I'm currently thinking of a different way of doing this with a build server like TeamCity, but not right now while we are trying to get up and running with Ansible AWX).
Ansible core checks out the code from the git branch, replaces the variables, zips it and uploads it to S3, etc.
Knowing that, the Ansible AWX host was configured with nvm, then Node was installed and .nvm was mapped to /home/awx/.nvm.
I have also mapped a bashrc to /home/awx/.bashrc. When I log into the awx_task container (docker exec -it awx_task /bin/bash) I see the below:
[root@awx ~]# npm --version
5.5.1
[root@awx ~]# echo $PATH
/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[root@awx ~]# env
NVM_DIR=/home/awx/.nvm
LANG=en_US.UTF-8
HOSTNAME=awx
NVM_CD_FLAGS=
DO_ANSIBLE_HOME=/opt/do_ansible_awx_home
PWD=/home/awx
HOME=/home/awx
affinity:container==eb57afe832eaa32472812d0cd8b614be6df213d8e866f1d7b04dfe109a887e44
TERM=xterm
NVM_BIN=/home/awx/.nvm/versions/node/v8.9.3/bin
SHLVL=1
LANGUAGE=en_US:en
PATH=/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env
[root@awx ~]# cat /home/awx/.bashrc
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
All the volume mappings, etc. were done with the installer role templates and tasks, so the output above stays the same after multiple Docker restarts and reinstalls with the Ansible AWX installer playbook. But during the execution of the playbook that makes use of npm, it seems to have a different env PATH: /var/lib/awx/venv/ansible/bin:/var/lib/awx/venv/awx/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
At this point, I am not sure whether I failed to configure the path properly or whether other containers like awx_web should also be configured, etc.
I have also noticed the NVM_BIN env variable and modified the npm playbook to include the path to the npm executable:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ lookup('env','NVM_BIN') }}/npm"
and it doesn't even show up during execution, thus pointing at a different path and different env variables being loaded.
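One quick, hedged way to confirm which environment the job really runs with (as opposed to an interactive shell in the container) is to dump it from inside a play; lookups are evaluated by the Ansible/AWX process itself, so this reflects the job's own PATH:
- name: Show the PATH the job actually uses
  debug:
    msg: "{{ lookup('env', 'PATH') }}"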
I will be grateful if you could shed some lights on whatever I am doing wrongly.
Thanks in advance
EDITS: After implementing @sergei's suggestion I have used the extra var npm_bin: /home/awx/.nvm/versions/node/v8.9.3/bin
I have changed the task to look like:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ npm_bin }}/npm"
But it produced this result:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" && echo ansible-tmp-1579790680.4419668-165048670233209="` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/language/npm.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10173xtu81x_o/tmpd40htayd TO /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 114, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/usr/lib64/python3.6/imp.py", line 235, in load_module
return load_source(name, filename, file)
File "/usr/lib64/python3.6/imp.py", line 170, in load_source
module = _exec(spec, sys.modules[name])
File "<frozen importlib._bootstrap>", line 618, in _exec
File "<frozen importlib._bootstrap…
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I have also tried to use shell module directly with the following:
- name: Running npm install
  shell: "{{ npm_bin }}/npm install"
  args:
    chdir: "{{ bps_git_checkout_folder }}"
That has produced this instead:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" && echo ansible-tmp-1579791187.453365-253173616238218="` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10395h1ga8fw3/tmpepeig729 TO /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": true,
"cmd": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
"delta": "0:00:00.005528",
"end": "2020-01-23 14:53:07.928843",
"invocation": {
"module_args": {
"_raw_params": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
"_uses_shell": true,
"argv": null,
"chdir": "/opt/do_ansible_awx_home/gh/deployments/sandbox/bps",
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 127,
…
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Not really seeing what's wrong here. Grateful if anybody can shed some light on this.
Where are your packages sitting? On the host or inside the container? All execution happens in the task container.
If your npm files are sitting on the 'host' and not in the container, then you have to refer to the host that the containers are sitting on to reach that path.
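If the nvm install really is present inside the awx_task container, one hedged option is to extend the task's own environment so both npm and node are found; a sketch reusing the npm_bin extra var from above (untested, and it assumes /home/awx/.nvm is actually mounted into the task container):
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ npm_bin }}/npm"
  environment:
    # prepend the nvm bin dir so the node binary itself is also on PATH
    PATH: "{{ npm_bin }}:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"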

Jenkins 2.0: ansiblePlaybook plugin

I would like to execute a Jenkins pipeline:
stage('Deploy watchers') {
    ansiblePlaybook(
        playbook: "watcher-manage.yml",
        extraVars: [
            target: 'dev-dp-manager-1'
        ]
    )
}
This produces ansible-playbook watcher-manage.yml -e target=dev-dp-manager-1.
This execution leads to:
fatal: [dev-dp-manager-1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
According to the documentation I need to add sudo: true to make the ansible command execute with root privileges. If I do so:
stage('Deploy watchers') {
    ansiblePlaybook(
        sudo: true,
        playbook: "watcher-manage.yml",
        extraVars: [
            target: 'dev-dp-manager-1'
        ]
    )
}
This produces ansible-playbook watcher-manage.yml -s -U root -e target=dev-dp-manager-1. Nevertheless I get the same error.
If I try to say sudo ansible-playbook ... my command succeeds.
My question is whether I can achieve the desired execution by using the plugin, or whether I have to write the ansible command by hand.
Thanks!
What worked for me was:
stage('Deploy watchers') {
    sh 'sudo ansible-playbook watcher-manage.yml --extra-vars="target=dev-dp-manager-1"'
}
Since the Jenkins Linux user doesn't have access to the SSH keys, simply lifting its permissions for this one command does the job.
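As an alternative that stays inside the plugin, the ansiblePlaybook step can also be pointed at an SSH private key stored in Jenkins via its credentialsId parameter, so the playbook connects with that key instead of relying on root's; a sketch (the credential id is illustrative):
stage('Deploy watchers') {
    ansiblePlaybook(
        playbook: "watcher-manage.yml",
        credentialsId: 'deploy-ssh-key',   // Jenkins SSH credential holding the private key (example id)
        extraVars: [
            target: 'dev-dp-manager-1'
        ]
    )
}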

ansible - cisco IOS and "reload" command

I would like to send the "reload in" command to Cisco IOS, but that specific command needs to be confirmed, like below:
#reload in 30
Reload scheduled in 30 minutes by admin on vty0 (192.168.253.15)
Proceed with reload? [confirm]
It seems like the ios_command module doesn't handle such a case.
My configuration:
tasks:
  - name: do reload in case of "catting off"
    ios_command:
      commands: reload in 30
      commands: y
      provider: "{{ cli }}"
And response from playbook:
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test1.yml:14
<192.168.0.33> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.33> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271 `" && echo ansible-tmp-1476454008.17-103724241654271="` echo $HOME/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271 `" ) && sleep 0'
<192.168.0.33> PUT /tmp/tmpAJiZR2 TO /root/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271/ios_command
<192.168.0.33> EXEC /bin/sh -c 'LANG=pl_PL.UTF-8 LC_ALL=pl_PL.UTF-8 LC_MESSAGES=pl_PL.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271/ios_command; rm -rf "/root/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271/" > /dev/null 2>&1 && sleep 0'
fatal: [192.168.0.33]: FAILED! => {"changed": false, "commands": ["y"], "failed": true, "invocation": {"module_args": {"auth_pass": null, "authorize": false, "commands": ["y"], "host": "192.168.0.33", "interval": 1, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "port": 22, "provider": "{'username': 'admin', 'host': '192.168.0.33', 'password': '********'}", "retries": 10, "ssh_keyfile": null, "timeout": 10, "username": "admin", "waitfor": null}, "module_name": "ios_command"}, "msg": "matched error in response: y\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nsw7.test.lab#"}
How can I handle this?
Updated:
If I try to use the expect module in a YAML file like this:
- name: some tests
  hosts: sw-test
  gather_facts: False
  # connection: local
  tasks:
    - name: do reload in case of "catting off"
      expect:
        command: reload in 30
        responses:
          'Reload scheduled in 30 minutes by admin on vty0 (192.168.253.20)\nProceed with reload? \[confirm\]': y
        echo: yes
But there is a problem with the connection:
root@Kali:/etc/ansible# ansible-playbook test3 -u admin -k -vvvv
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback default of type stdout, v2.0
PLAYBOOK: test3 ****************************************************************
1 plays in test3
PLAY [some tests] **************************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test3:9
<192.168.0.33> ESTABLISH SSH CONNECTION FOR USER: admin
<192.168.0.33> SSH: EXEC sshpass -d12 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.33 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476882070.37-92402455055985 `" && echo ansible-tmp-1476882070.37-92402455055985="` echo $HOME/.ansible/tmp/ansible-tmp-1476882070.37-92402455055985 `" ) && sleep 0'"'"''
<192.168.0.33> PUT /tmp/tmp30wGsF TO "` echo $HOME/.ansible/tmp/ansible-tmp-1476882070.37-92402455055985 `" ) && sleep 0'"/expect
<192.168.0.33> SSH: EXEC sshpass -d12 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.0.33]'
fatal: [192.168.0.33]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
to retry, use: --limit @/etc/ansible/test3.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=1 failed=0
root@Kali:/etc/ansible# ansible-playbook test3 -u admin -k -vvvv -c ssh
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback default of type stdout, v2.0
PLAYBOOK: test3 ****************************************************************
1 plays in test3
PLAY [some tests] **************************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test3:9
<192.168.0.33> ESTABLISH SSH CONNECTION FOR USER: admin
<192.168.0.33> SSH: EXEC sshpass -d12 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.33 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476882145.78-139203779538157 `" && echo ansible-tmp-1476882145.78-139203779538157="` echo $HOME/.ansible/tmp/ansible-tmp-1476882145.78-139203779538157 `" ) && sleep 0'"'"''
<192.168.0.33> PUT /tmp/tmpY5qqyW TO "` echo $HOME/.ansible/tmp/ansible-tmp-1476882145.78-139203779538157 `" ) && sleep 0'"/expect
<192.168.0.33> SSH: EXEC sshpass -d12 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.0.33]'
fatal: [192.168.0.33]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
to retry, use: --limit @/etc/ansible/test3.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=1 failed=0
root@Kali:/etc/ansible# ansible-playbook test3 -u admin -k -vvvv -c local
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback default of type stdout, v2.0
PLAYBOOK: test3 ****************************************************************
1 plays in test3
PLAY [some tests] **************************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test3:9
<192.168.0.33> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.33> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809 `" && echo ansible-tmp-1476882426.62-172601217553809="` echo $HOME/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809 `" ) && sleep 0'
<192.168.0.33> PUT /tmp/tmpdq1pYy TO /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/expect
<192.168.0.33> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/ /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/expect && sleep 0'
<192.168.0.33> EXEC /bin/sh -c 'LANG=pl_PL.UTF-8 LC_ALL=pl_PL.UTF-8 LC_MESSAGES=pl_PL.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/expect; rm -rf "/root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/" > /dev/null 2>&1 && sleep 0'
fatal: [192.168.0.33]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"chdir": null, "command": "reload in 30", "creates": null, "echo": true, "removes": null, "responses": {"Reload scheduled in 30 minutes by admin on vty0 (192.168.253.20)\\nProceed with reload? \\[confirm\\]": "y"}, "timeout": 30}, "module_name": "expect"}, "msg": "The command was not found or was not executable: reload."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/etc/ansible/test3.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=0 failed=1
UPDATED
I've installed ansible 2.3 and tried as follows:
tasks:
  - name: do reload in case of "catting off"
    ios_command:
      commands:
        - reload in 30
        - y
      wait_for:
        - result[0] contains "Proceed with reload"
      provider: "{{ cli }}"
But still, I get an error. I think this is because the ios module always waits for a prompt as a response. Additionally, the confirmation of the reload command happens without pressing "Enter" after "y", so this could be another problem.
$ sudo ansible-playbook test1.yml -vvvv
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: test1.yml ************************************************************
1 plays in test1.yml
PLAY [testowe dzialania] *******************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /home/user1/test1.yml:13
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<192.168.0.33> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.33> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324 `" && echo ansible-tmp-1477557527.56-157304653717324="` echo $HOME/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324 `" ) && sleep 0'
<192.168.0.33> PUT /tmp/tmphf8EWO TO /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ios_command.py
<192.168.0.33> EXEC /bin/sh -c 'chmod u+x /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ios_command.py && sleep 0'
<192.168.0.33> EXEC /bin/sh -c '/usr/bin/python /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ios_command.py; rm -rf "/home/user1/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/" > /dev/null 2>&1 && sleep 0'
fatal: [192.168.0.33]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"auth_pass": null,
"authorize": false,
"commands": [
"reload in 30",
"y"
],
"host": "192.168.0.33",
"interval": 1,
"match": "all",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": null,
"provider": {
"host": "192.168.0.33",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"username": "admin"
},
"retries": 10,
"ssh_keyfile": null,
"timeout": 10,
"transport": null,
"use_ssl": true,
"username": "admin",
"validate_certs": true,
"wait_for": [
"result[0] contains \"Proceed with reload\""
]
},
"module_name": "ios_command"
},
"msg": "timeout trying to send command: reload in 30\r"
}
to retry, use: --limit @/home/user1/test1.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=0 failed=1
Does anyone have an idea how to resolve that problem in Ansible, or is the only way to use a pure Python script or write my own Ansible module?
You can use:
- name: reload device
  ios_command:
    commands:
      - "reload in 1\ny"
    provider: "{{ cli }}"
This will reload the device in 1 minute and the reload prompt gets accepted. It works well with Ansible because the default IOS prompt comes back (the reload is triggered in 1 minute).
Regards,
Simon
The commands parameter of the ios_command module expects a YAML-formatted list of commands. However, in the code example provided the commands parameter is set multiple times. Try the ios_command task like this:
- name: do reload in case of "catting off"
  ios_command:
    commands:
      - reload in 30
      - y
    provider: "{{ cli }}"
Ansible 2.2 only
You could use something like this:
- name: send reload command inc confirmation
  ios_command:
    commands:
      - reload in 30
      - y
    wait_for:
      - result[0] contains "Proceed with reload"
    provider: "{{ cli }}"
Not tested but similar to last example for ios_command module.
Take care with Ansible 2.2 though, it's not released yet and new releases of Ansible can have significant regressions.
Ansible 2.0+ includes the expect module but that requires Python on the remote device, so it won't work on IOS or similar devices.
It appears that the simplest method would be to use the 'raw' module to send raw SSH commands to the device.
This avoids having to use expect and having to play around with the ios_command module.
The raw module will run the commands without caring what responses or prompts the device returns (a rough sketch follows).
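A rough, untested sketch of what that raw-module approach might look like, with the confirmation sent as an embedded newline (the play and host setup are omitted):
- name: schedule a reload over raw SSH
  raw: "reload in 30\ny"   # the trailing y is meant to answer the [confirm] prompt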
The below worked for me with ansible-playbook 2.9.0 and Python 3.7. Please note that, on the line with - command, make sure to use double quotes " instead of single ones '. And don't forget to put \n at the end of the command.
- name: Reloading switch using ios_command.
  ios_command:
    commands:
      - command: "reload\n"
        prompt: 'Proceed with reload? [confirm]'
        answer: "\r"
I have a similar problem. I need to reload a Cisco device and then I get these prompts:
save?
[confirm]
How do I answer those correctly?
- name: Reloading in 1 min if not online
  cisco.ios.ios_command:
    commands:
      - command: reload in 1
        prompt: 'System configuration has been modified. Save? [yes/no]:'
        answer: 'n'
        answer: 'y'
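One hedged way to deal with two prompts in a row is the list form of prompt/answer together with check_all, which newer versions of the cisco.ios collection support (untested sketch; the prompt patterns are treated as regular expressions):
- name: Reloading in 1 min if not online
  cisco.ios.ios_command:
    commands:
      - command: "reload in 1\n"
        prompt:
          - 'Save\? \[yes/no\]'
          - '\[confirm\]'
        answer:
          - 'n'
          - "\r"
        check_all: True   # answer each matched prompt in order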

Ansible Docker Connection Error

I am running Ansible version 1.9, docker-py version 1.1.0 and Docker 1.9.1. I have a private insecure Docker registry running at http://registry.myserver.com:5000.
I have an ansible task to start a container using a pulled image from this remote registry:
---
- name: Start User Service Container
  docker:
    name: userService
    image: user-service
    registry: registry.myserver.com:5000
    state: running
    insecure_registry: yes
    expose:
      - 8355
However, this is currently returning the following error:
failed: [bniapp1] => {"changed": false, "failed": true}
msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),)
Verbose output:
<54.229.16.155>
<54.229.16.155> image=discovery-service registry=http://registry.myserver.com:5000 name=discoveryService state=running
<54.229.16.155> IdentityFile=/home/nfrstrctrescd/bni-api.pem ConnectTimeout=10 PasswordAuthentication=no KbdInteractiveAuthentication=no User=centos ControlPath=/home/nfrstrctrescd/.ansible/cp/ansible-ssh-%h-%p-%r PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey ControlMaster=auto ControlPersist=60s
<54.229.16.155>
<54.229.16.155> IdentityFile=/home/nfrstrctrescd/bni-api.pem ConnectTimeout=10 'sudo -k && sudo -H -S -p "[sudo via ansible, key=hxhptjipltjnteknbbxkqgcdwvwshenp] password: " -u root /bin/sh -c '"'"'echo SUDO-SUCCESS-hxhptjipltjnteknbbxkqgcdwvwshenp; LANG=C DOCKER_HOST=tcp://127.0.0.1:2376 DOCKER_TLS_VERIFY=1 LC_CTYPE=C DOCKER_CERT_PATH=/opt/docker/certs /usr/bin/python /home/centos/.ansible/tmp/ansible-tmp-1460499148.45-268540710837667/docker; rm -rf /home/centos/.ansible/tmp/ansible-tmp-1460499148.45-268540710837667/ >/dev/null 2>&1'"'"'' PasswordAuthentication=no KbdInteractiveAuthentication=no User=centos ControlPath=/home/nfrstrctrescd/.ansible/cp/ansible-ssh-%h-%p-%r PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey ControlMaster=auto ControlPersist=60s
failed: [bniapp1] => {"changed": false, "failed": true}
msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),)
Note: When I run the container manually on the remote server, the image gets pulled and the container starts correctly:
docker run registry.myserver.com:5000/user-service
I got this error because my Docker daemon was not running. Adding the following Ansible tasks before the docker task fixed it for me:
# Start Docker Service
- name: Start Docker service
  service: name=docker state=started
  become: yes
  become_method: sudo

- name: Boot Docker on startup
  service: name=docker enabled=yes
  become: yes
  become_method: sudo
