ansible - cisco IOS and "reload" command

I would like to send the command "reload in " to Cisco IOS, but that specific command needs to be confirmed, as below:
#reload in 30
Reload scheduled in 30 minutes by admin on vty0 (192.168.253.15)
Proceed with reload? [confirm]
It seems like the ios_command module doesn't handle such a case.
My configuration:
tasks:
  - name: do reload in case of "catting off"
    ios_command:
      commands: reload in 30
      commands: y
      provider: "{{ cli }}"
And the response from the playbook:
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test1.yml:14
<192.168.0.33> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.33> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271 `" && echo ansible-tmp-1476454008.17-103724241654271="` echo $HOME/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271 `" ) && sleep 0'
<192.168.0.33> PUT /tmp/tmpAJiZR2 TO /root/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271/ios_command
<192.168.0.33> EXEC /bin/sh -c 'LANG=pl_PL.UTF-8 LC_ALL=pl_PL.UTF-8 LC_MESSAGES=pl_PL.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271/ios_command; rm -rf "/root/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271/" > /dev/null 2>&1 && sleep 0'
fatal: [192.168.0.33]: FAILED! => {"changed": false, "commands": ["y"], "failed": true, "invocation": {"module_args": {"auth_pass": null, "authorize": false, "commands": ["y"], "host": "192.168.0.33", "interval": 1, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "port": 22, "provider": "{'username': 'admin', 'host': '192.168.0.33', 'password': '********'}", "retries": 10, "ssh_keyfile": null, "timeout": 10, "username": "admin", "waitfor": null}, "module_name": "ios_command"}, "msg": "matched error in response: y\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nsw7.test.lab#"}
How can I handle this?
Updated:
If I try to use the expect module in a YAML file like this:
- name: some tests
  hosts: sw-test
  gather_facts: False
  # connection: local
  tasks:
    - name: do reload in case of "catting off"
      expect:
        command: reload in 30
        responses:
          'Reload scheduled in 30 minutes by admin on vty0 (192.168.253.20)\nProceed with reload? \[confirm\]': y
        echo: yes
But there is a problem with the connection:
root@Kali:/etc/ansible# ansible-playbook test3 -u admin -k -vvvv
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback default of type stdout, v2.0
PLAYBOOK: test3 ****************************************************************
1 plays in test3
PLAY [some tests] **************************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test3:9
<192.168.0.33> ESTABLISH SSH CONNECTION FOR USER: admin
<192.168.0.33> SSH: EXEC sshpass -d12 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.33 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476882070.37-92402455055985 `" && echo ansible-tmp-1476882070.37-92402455055985="` echo $HOME/.ansible/tmp/ansible-tmp-1476882070.37-92402455055985 `" ) && sleep 0'"'"''
<192.168.0.33> PUT /tmp/tmp30wGsF TO "` echo $HOME/.ansible/tmp/ansible-tmp-1476882070.37-92402455055985 `" ) && sleep 0'"/expect
<192.168.0.33> SSH: EXEC sshpass -d12 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.0.33]'
fatal: [192.168.0.33]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
to retry, use: --limit @/etc/ansible/test3.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=1 failed=0
root@Kali:/etc/ansible# ansible-playbook test3 -u admin -k -vvvv -c ssh
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback default of type stdout, v2.0
PLAYBOOK: test3 ****************************************************************
1 plays in test3
PLAY [some tests] **************************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test3:9
<192.168.0.33> ESTABLISH SSH CONNECTION FOR USER: admin
<192.168.0.33> SSH: EXEC sshpass -d12 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.33 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476882145.78-139203779538157 `" && echo ansible-tmp-1476882145.78-139203779538157="` echo $HOME/.ansible/tmp/ansible-tmp-1476882145.78-139203779538157 `" ) && sleep 0'"'"''
<192.168.0.33> PUT /tmp/tmpY5qqyW TO "` echo $HOME/.ansible/tmp/ansible-tmp-1476882145.78-139203779538157 `" ) && sleep 0'"/expect
<192.168.0.33> SSH: EXEC sshpass -d12 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.0.33]'
fatal: [192.168.0.33]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
to retry, use: --limit @/etc/ansible/test3.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=1 failed=0
root@Kali:/etc/ansible# ansible-playbook test3 -u admin -k -vvvv -c local
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback default of type stdout, v2.0
PLAYBOOK: test3 ****************************************************************
1 plays in test3
PLAY [some tests] **************************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test3:9
<192.168.0.33> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.33> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809 `" && echo ansible-tmp-1476882426.62-172601217553809="` echo $HOME/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809 `" ) && sleep 0'
<192.168.0.33> PUT /tmp/tmpdq1pYy TO /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/expect
<192.168.0.33> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/ /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/expect && sleep 0'
<192.168.0.33> EXEC /bin/sh -c 'LANG=pl_PL.UTF-8 LC_ALL=pl_PL.UTF-8 LC_MESSAGES=pl_PL.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/expect; rm -rf "/root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/" > /dev/null 2>&1 && sleep 0'
fatal: [192.168.0.33]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"chdir": null, "command": "reload in 30", "creates": null, "echo": true, "removes": null, "responses": {"Reload scheduled in 30 minutes by admin on vty0 (192.168.253.20)\\nProceed with reload? \\[confirm\\]": "y"}, "timeout": 30}, "module_name": "expect"}, "msg": "The command was not found or was not executable: reload."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/etc/ansible/test3.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=0 failed=1
UPDATED
I've installed ansible 2.3 and tried as follows:
tasks:
  - name: do reload in case of "catting off"
    ios_command:
      commands:
        - reload in 30
        - y
      wait_for:
        - result[0] contains "Proceed with reload"
      provider: "{{ cli }}"
But I still get an error. I think this is because the ios module always waits for a prompt as a response. Additionally, the confirmation of the reload command is sent without "Enter" after pressing "y", so this could be another problem.
$ sudo ansible-playbook test1.yml -vvvv
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: test1.yml ************************************************************
1 plays in test1.yml
PLAY [testowe dzialania] *******************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /home/user1/test1.yml:13
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<192.168.0.33> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.33> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324 `" && echo ansible-tmp-1477557527.56-157304653717324="` echo $HOME/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324 `" ) && sleep 0'
<192.168.0.33> PUT /tmp/tmphf8EWO TO /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ios_command.py
<192.168.0.33> EXEC /bin/sh -c 'chmod u+x /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ios_command.py && sleep 0'
<192.168.0.33> EXEC /bin/sh -c '/usr/bin/python /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ios_command.py; rm -rf "/home/user1/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/" > /dev/null 2>&1 && sleep 0'
fatal: [192.168.0.33]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"auth_pass": null,
"authorize": false,
"commands": [
"reload in 30",
"y"
],
"host": "192.168.0.33",
"interval": 1,
"match": "all",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": null,
"provider": {
"host": "192.168.0.33",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"username": "admin"
},
"retries": 10,
"ssh_keyfile": null,
"timeout": 10,
"transport": null,
"use_ssl": true,
"username": "admin",
"validate_certs": true,
"wait_for": [
"result[0] contains \"Proceed with reload\""
]
},
"module_name": "ios_command"
},
"msg": "timeout trying to send command: reload in 30\r"
}
to retry, use: --limit @/home/user1/test1.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=0 failed=1
Does anyone have an idea how to resolve this problem in Ansible, or is the only way to use a pure Python script or write my own Ansible module?

You can use:
- name: reload device
  ios_command:
    commands:
      - "reload in 1\ny"
    provider: "{{ cli }}"
This will reload the device in 1 minute, and the reload prompt gets accepted. It works well with Ansible because the default IOS prompt comes back right away (the reload is triggered 1 minute later).
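As a hedged follow-up, not part of the original answer: if the device turns out to be healthy before the timer fires, IOS's reload cancel command can abort the pending reload with the same module (reload cancel does not prompt for confirmation):
- name: cancel the pending reload
  ios_command:
    commands:
      - reload cancel
    provider: "{{ cli }}"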
Regards,
Simon

The commands parameter of the ios_command module expects a YAML-formatted list of commands. However, in the code example provided, the commands parameter is set multiple times. Try the ios_command task like this:
- name: do reload in case of "catting off"
  ios_command:
    commands:
      - reload in 30
      - y
    provider: "{{ cli }}"

Ansible 2.2 only
You could use something like this:
- name: send reload command inc confirmation
  ios_command:
    commands:
      - reload in 30
      - y
    wait_for:
      - result[0] contains "Proceed with reload"
    provider: "{{ cli }}"
Not tested, but similar to the last example for the ios_command module.
Take care with Ansible 2.2 though: it's not released yet, and new releases of Ansible can have significant regressions.
Ansible 2.0+ includes the expect module but that requires Python on the remote device, so it won't work on IOS or similar devices.

It appears that the simplest method would be to use the raw module to send raw SSH commands to the device.
This avoids having to use expect and having to play around with the ios_command module.
The raw module will run the commands without caring what the device responds or prompts.
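For illustration, here is a minimal, untested sketch of that approach. It assumes the play targets the IOS device directly over SSH (the sw-test group name and the timing are taken from the question and are illustrative); raw needs no Python on the target:
- name: schedule a reload over raw SSH
  hosts: sw-test
  gather_facts: false
  tasks:
    - name: send the reload command plus its confirmation
      raw: "reload in 30\ny"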

The below worked for me with ansible-playbook 2.9.0 and Python 3.7. Please note that on the line with - command, make sure to use double quotes " instead of single quotes '. And don't forget to put \n at the end of the command.
- name: Reloading switch using ios_command.
  ios_command:
    commands:
      - command: "reload\n"
        prompt: 'Proceed with reload? [confirm]'
        answer: "\r"

I have a similar problem. I need to reload a Cisco device and then I get these prompts:
save?
[confirm]
How do I answer them correctly?
- name: Reloading in 1 min if not online
  cisco.ios.ios_command:
    commands:
      - command: reload in 1
        prompt: 'System configuration has been modified. Save? [yes/no]:'
        answer: 'n'
        answer: 'y'
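A hedged sketch of one possible approach (not verified on a device): recent cisco.ios releases accept lists for prompt and answer together with check_all, so both prompts can be answered in order. The regex patterns below are guesses based on the prompts shown above:
- name: Reloading in 1 min if not online
  cisco.ios.ios_command:
    commands:
      - command: reload in 1
        prompt:
          - 'Save\? \[yes/no\]:'
          - '\[confirm\]'
        answer:
          - 'n'
          - 'y'
        check_all: True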

Related

Unable to run npm command in ansible awx_task container

I have been using Ansible Core for some time now and am expanding my team, so the need for Ansible AWX has become a little more pressing. I have been working at it for a week now and I think it's time to shout for help.
We had a process of replacing the baseurl of AngularJS apps with a variable using Ansible and setting some settings before we compile them (currently thinking of a different way of doing this using a build server like TeamCity, but not right now while we are trying to get up and running with Ansible AWX).
Ansible Core checks out the code from the given git branch, replaces the variables, zips it to S3, etc.
Knowing that, the Ansible AWX host was configured with nvm, then node was installed and .nvm was mapped to /home/awx/.nvm.
I have also mapped a bashrc to /home/awx/.bashrc. When I log into the awx_task container with docker exec -it awx_task /bin/bash I see the below:
[root@awx ~]# npm --version
5.5.1
[root@awx ~]# echo $PATH
/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[root@awx ~]# env
NVM_DIR=/home/awx/.nvm
LANG=en_US.UTF-8
HOSTNAME=awx
NVM_CD_FLAGS=
DO_ANSIBLE_HOME=/opt/do_ansible_awx_home
PWD=/home/awx
HOME=/home/awx
affinity:container==eb57afe832eaa32472812d0cd8b614be6df213d8e866f1d7b04dfe109a887e44
TERM=xterm
NVM_BIN=/home/awx/.nvm/versions/node/v8.9.3/bin
SHLVL=1
LANGUAGE=en_US:en
PATH=/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env
[root@awx ~]# cat /home/awx/.bashrc
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
All the volume mappings, etc. were done with the installer role templates and tasks, so the output above is the same after multiple docker restarts and reinstalls running the Ansible AWX installer playbook. But during the execution of the playbook that makes use of npm, it seems to have a different env PATH: /var/lib/awx/venv/ansible/bin:/var/lib/awx/venv/awx/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
At this point, I am not sure whether I failed to configure the path properly or whether other containers like awx_web should also be configured, etc.
I have also noticed the NVM_BIN env variable and modified the npm playbook to include the path to the npm executable:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ lookup('env','NVM_BIN') }}/npm"
and it doesn't even show up during execution, thus pointing at a different PATH and env variables being loaded.
I would be grateful if you could shed some light on whatever I am doing wrong.
Thanks in advance.
EDITS: After implementing @sergei's suggestion I have used the extra vars npm_bin: /home/awx/.nvm/versions/node/v8.9.3/bin
I have changed the task to look like:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ npm_bin }}/npm"
But it produced this result:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" && echo ansible-tmp-1579790680.4419668-165048670233209="` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/language/npm.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10173xtu81x_o/tmpd40htayd TO /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 114, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/usr/lib64/python3.6/imp.py", line 235, in load_module
return load_source(name, filename, file)
File "/usr/lib64/python3.6/imp.py", line 170, in load_source
module = _exec(spec, sys.modules[name])
File "<frozen importlib._bootstrap>", line 618, in _exec
File "<frozen importlib._bootstrap…
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I have also tried to use the shell module directly with the following:
- name: Running npm install
  shell: "{{ npm_bin }}/npm install"
  args:
    chdir: "{{ bps_git_checkout_folder }}"
That has produced this instead:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" && echo ansible-tmp-1579791187.453365-253173616238218="` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10395h1ga8fw3/tmpepeig729 TO /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": true,
"cmd": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
"delta": "0:00:00.005528",
"end": "2020-01-23 14:53:07.928843",
"invocation": {
"module_args": {
"_raw_params": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
"_uses_shell": true,
"argv": null,
"chdir": "/opt/do_ansible_awx_home/gh/deployments/sandbox/bps",
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 127,
…
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I'm not really seeing what's wrong here. I'd be grateful if anybody could shed some light on this.
Where are your packages sitting? On the host or inside the container? All execution happens in the task container.
If your npm files are sitting on the host and not in the container, then you have to refer to the host that the containers are sitting on in order to reference the path.
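As a hedged illustration (the nvm path comes from the question's env output; whether that directory is actually visible inside the awx_task execution environment is an assumption), you could prepend it to PATH for the task, since the npm executable is a script that itself needs node on its PATH:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: /home/awx/.nvm/versions/node/v8.9.3/bin/npm   # NVM_BIN from the question
  environment:
    # npm execs node, so node's bin dir must be on PATH as well;
    # ansible_env.PATH assumes facts were gathered for this host
    PATH: "/home/awx/.nvm/versions/node/v8.9.3/bin:{{ ansible_env.PATH }}"
The rc=127 ("command not found") from the shell attempt is consistent with node itself not being found on the task's PATH.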

Not able to run ansible-playbook using becomeUser

I'm trying to run ansible-playbook from a Jenkinsfile with the become and becomeUser parameters, but it seems Jenkins is using its own user ID "jenkins" to connect to the remote host.
Jenkinsfile
stage("Deployment"){
    steps{
        ansiColor('xterm') {
            ansiblePlaybook(
                playbook: 'myPlaybook.yaml',
                inventory: 'myHosts.ini',
                colorized: true,
                become: true,
                becomeUser: 'userID',
                extras: '-vvv'
            )
        }
    }
}
I also added become and become_user in the playbook as well:
---
- name: Deploy stack from a compose file
  hosts: myNodes
  become: yes
  become_user: userID
  tasks:
    - name: deploying my application
      docker_stack:
        state: present
Jenkins build log
TASK [Gathering Facts] *********************************************************
task path: /path/to/myPlaybook.yaml:2
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: None
<x.x.x.x> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/var/lib/jenkins/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/var/lib/jenkins/.ansible/cp/5493f46899 x.x.x.x '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<x.x.x.x> (255, '', 'jenkins@x.x.x.x: Permission denied (publickey,password).\r\n')
fatal: [x.x.x.x]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: jenkins#x.x.x.x: Permission denied (publickey,password).",
"unreachable": true
}
Jenkins even ran the command with become and become-user:
[xx-yy] $ ansible-playbook myplaybook.yaml -i myHosts.ini -b --become-user userID -vvv
Please advise how to resolve this. Thanks.
I found an alternative solution by observing the logs line by line:
ESTABLISH SSH CONNECTION FOR USER: None
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: None
<x.x.x.x> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/var/lib/jenkins/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/var/lib/jenkins/.ansible/cp/5493f46899 x.x.x.x '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<x.x.x.x> (255, '', 'jenkins@x.x.x.x: Permission denied (publickey,password).\r\n')
Hence I added ansible_user to set the remote SSH user in the inventory file; become_user only controls which user Ansible escalates to after login, while the SSH login user comes from ansible_user:
[myNode]
x.x.x.x ansible_user=myuserId
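Equivalently, as a hedged sketch based on the playbook from the question, the login user can be set in the play itself with remote_user, keeping become_user for the escalation target:
---
- name: Deploy stack from a compose file
  hosts: myNodes
  remote_user: myuserId   # SSH login user, same effect as ansible_user in the inventory
  become: yes
  become_user: userID     # user to switch to after login
  tasks:
    - name: deploying my application
      docker_stack:
        state: present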
Happy Learning
The below link could be helpful for you in understanding become and become_user:
Medium Blog Link here.
And this is the snippet worth sharing,
# as bruce
$ ansible all -m ping -u bruce
# as bruce, sudoing to root
$ ansible all -m ping -u bruce --sudo
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce --sudo --sudo-user batman
# With latest version of ansible `sudo` is deprecated so use become
# as bruce, sudoing to root
$ ansible all -m ping -u bruce -b
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce -b --become-user batman

Jenkins docker_login ansible playbook : Permission denied

I would like to copy a Docker container into a Docker registry with Jenkins.
When I execute the Ansible playbook I get:
"msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
I suppose that Ansible runs under the user jenkins because of this link, and because of the log file:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
Because the Ansible playbook tries to do a docker_login, I understand that the user jenkins needs to be able to connect to Docker.
So I added jenkins to the docker users:
I don't understand why the permission is denied.
The whole Jenkins log file:
TASK [Log into Docker registry]
************************************************
task path: /var/jenkins_home/workspace/.../build_docker.yml:8
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/cloud/docker/docker_login.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" && echo ansible-tmp-1543388409.78-179785864196502="` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpFASoHo TO /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/ /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py; rm -rf "/var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"api_version": null,
"cacert_path": null,
"cert_path": null,
"config_path": "~/.docker/config.json",
"debug": false,
"docker_host": null,
"email": null,
"filter_logger": false,
"key_path": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"reauthorize": false,
"registry_url": "https://registry.docker....si",
"ssl_version": null,
"timeout": null,
"tls": null,
"tls_hostname": null,
"tls_verify": null,
"username": "jenkins"
},
"module_name": "docker_login"
},
"msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
}
to retry, use: --limit @/var/jenkins_home/workspace/.../build_docker.retry
The whole Ansible playbook:
---
- hosts: localhost
  vars:
    git_branch: "{{ GIT_BRANCH|default('development') }}"
  tasks:
    - name: Log into Docker registry
      docker_login:
        registry_url: https://registry.docker.....si
        username: ...
        password: ....
If anyone has the same problem, I found the solution...
My registry doesn't have a valid HTTPS certificate, so you need to add
{
  "insecure-registries" : [ "https://registry.docker.....si" ]
}
inside /etc/docker/daemon.json
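One hedged addition that is not in the original answer: the Docker daemon has to be restarted before a daemon.json change takes effect. That could itself be an Ansible task, for example:
- name: restart docker so daemon.json changes take effect
  become: yes
  service:
    name: docker
    state: restarted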

IBM Cloud Private Docker logged in as root rather than ubuntu

When I run the docker command from the ICP tutorial:
docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
I receive an error that I am logged in as root instead of as the ubuntu user. What may be causing this and how can it be fixed?
PLAY [Checking Python interpreter] *********************************************
TASK [Checking Python interpreter] *********************************************
changed: [10.2.7.26]
PLAY [Checking prerequisites] **************************************************
TASK [Gathering Facts] *********************************************************
[WARNING]: sftp transfer mechanism failed on [10.2.7.26]. Use ANSIBLE_DEBUG=1
to see detailed information
[WARNING]: scp transfer mechanism failed on [10.2.7.26]. Use ANSIBLE_DEBUG=1
to see detailed information
fatal: [10.2.7.26]: FAILED! => {"changed": false, "module_stderr": "Connection to 10.2.7.26 closed.\r\n", "module_stdout": "Please login as the user \"ubuntu\" rather than the user \"root\".\r\n\r\n", "msg": "MODULE FAILURE", "rc": 0}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
10.2.7.26 : ok=1 changed=1 unreachable=0 failed=1
Edit:
The error from the verbose message:
<10.2.7.26> ESTABLISH SSH CONNECTION FOR USER: root
<10.2.7.26> SSH: EXEC ssh -C -o CheckHostIP=no -o LogLevel=ERROR -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 'IdentityFile="/installer/cluster/ssh_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=60 10.2.7.26 'dd of=Please login as the user "ubuntu" rather than the user "root"./setup.py bs=65536'
<10.2.7.26> (0, 'Please login as the user "ubuntu" rather than the user "root".\n\n', '')
However, this error occurs when I use my private key generated by my cloud provider. When I follow the SSH key generation instructions here: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/installing/ssh_keys.html
I get this error:
<10.2.7.26> ESTABLISH SSH CONNECTION FOR USER: root
<10.2.7.26> SSH: EXEC ssh -C -o CheckHostIP=no -o LogLevel=ERROR -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 'IdentityFile="/installer/cluster/ssh_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=60 -tt 10.2.7.26 'ls /usr/bin/python &>/dev/null || (echo "Can'"'"'t find Python interpreter(/usr/bin/python) on your node" && exit 1)'
<10.2.7.26> (255, '', 'Permission denied (publickey).\r\n')
fatal: [10.2.7.26]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied >(publickey).\r\n",
"unreachable": true
}
The hosts:
[master]
10.2.7.26
[worker]
10.2.7.26
[proxy]
10.2.7.26
The Config.yaml:
network_type: calico
kubelet_extra_args: ["--fail-swap-on=false"]
cluster_domain: cluster.local
etcd_extra_args: ["--grpc-keepalive-timeout=0", "--grpc-keepalive-interval=0",
  "--snapshot-count=10000"]
default_admin_user: admin
default_admin_password: admin
disabled_management_services: ["istio", "vulnerability-advisor", "custom-metrics-adapter"]
image-security-enforcement:
  clusterImagePolicy:
    - name: "docker.io/ibmcom/*"
      policy:
ICP installation requires root user permissions. Could you try to install ICP with the below command?
sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
For more information, you can access the below link for details:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/installing/install_containers_CE.html

Using Ansible to Install Jenkins Plugins tells me the crumb value is invalid

TL;DR
Jenkins is telling me I am passing an invalid crumb value when installing plugins from an Ansible script
Details
I have Jenkins 2.32.2 running in a Docker container, based on the official image.
I have installed it to a Vagrant VM, and am attempting to configure the plugins using Ansible.
I am iterating through a list of plugins using the following task
- name: Install plugins
  include: install_plugin.yml
  with_items: "{{ plugins }}"
  loop_control:
    loop_var: plugin_name
  tags: [jenkins]
with the following list defined in the defaults/main.yml file
plugins:
  - git
  - template-project
  - pipeline
  - docker-workflow
  - template-project
  - config-file-provider
  - bitbucket
  - disk-usage
  - greenballs
  - jacoco
  - slack
  - sonar
Below is the definition of install_plugin.yml which is called from the main.yml file
---
- name: Get Jenkins crumb
  uri:
    user: admin
    password: "{{ jenkins_admin_password }}"
    force_basic_auth: yes
    url: "http://{{ ansible_hostname }}:8080/crumbIssuer/api/json"
    return_content: yes
  register: crumb_token
  until: crumb_token.content.find('Please wait while Jenkins is getting ready') == -1
  retries: 10
  delay: 5
  tags: [jenkins]

- name: Plugins are installed
  uri:
    url: "http://{{ ansible_host }}:8080/pluginManager/installNecessaryPlugins"
    method: POST
    user: admin
    password: "{{ jenkins_admin_password }}"
    body: '<jenkins><install plugin="{{ plugin_name }}#latest" /></jenkins>'
    headers:
      Content-Type: "text/xml"
      Jenkins-Crumb: "{{ crumb_token.json.crumb }}"
    creates: "{{ jenkins_home }}/plugins/{{ plugin_name }}"
  register: plugins_result
  tags: [jenkins]

- wait_for:
    path: "{{ jenkins_home }}/plugins/{{ plugin_name }}"
  tags: [jenkins]
When I attempt to emulate this using curl from the command line with the following two commands, I get the expected results:
~/Projects/ci> curl --user admin:admin cluster01:8080/crumbIssuer/api/json
{"_class":"hudson.security.csrf.DefaultCrumbIssuer","crumb":"646966a811fe84bdc5dc00a0de942b80","crumbRequestField":"Jenkins-Crumb"}%
~/Projects/ci> curl -X POST --user admin:admin -d '<jenkins><install plugin="git#latest" /></jenkins>' --header 'Jenkins-Crumb: 646966a811fe84bdc5dc00a0de942b80' --header 'Content-Type: text/xml' http://cluster01:8080/pluginManager/installNecessaryPlugins
But when I run the Ansible playbook, I get the following error
status code was not [200]: HTTP Error 403: No valid crumb was included in the request
Here is the log output from -vvvv for this step
TASK [jenkins : Install plugins] ***********************************************
task path: /Users/chris/Projects/ci/roles/jenkins/tasks/main.yml:56
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
included: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml for cluster01
TASK [jenkins : Get Jenkins crumb] *********************************************
task path: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml:2
Using module file /usr/local/Cellar/ansible/2.2.0.0_2/libexec/lib/python2.7/site-packages/ansible/modules/core/network/basics/uri.py
<cluster01> ESTABLISH SSH CONNECTION FOR USER: vagrant
<cluster01> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile=".vagrant/machines/cluster01/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/Users/chris/.ansible/cp/ansible-ssh-%h-%p-%r cluster01 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1486148016.3-32614286575555 `" && echo ansible-tmp-1486148016.3-32614286575555="` echo $HOME/.ansible/tmp/ansible-tmp-1486148016.3-32614286575555 `" ) && sleep 0'"'"''
<cluster01> PUT /var/folders/g5/h48p994d3qn7d9_nz7xv2lvh0000gn/T/tmprL_Pye TO /home/vagrant/.ansible/tmp/ansible-tmp-1486148016.3-32614286575555/uri.py
<cluster01> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile=".vagrant/machines/cluster01/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/Users/chris/.ansible/cp/ansible-ssh-%h-%p-%r '[cluster01]'
<cluster01> ESTABLISH SSH CONNECTION FOR USER: vagrant
<cluster01> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile=".vagrant/machines/cluster01/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/Users/chris/.ansible/cp/ansible-ssh-%h-%p-%r cluster01 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1486148016.3-32614286575555/ /home/vagrant/.ansible/tmp/ansible-tmp-1486148016.3-32614286575555/uri.py && sleep 0'"'"''
<cluster01> ESTABLISH SSH CONNECTION FOR USER: vagrant
<cluster01> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile=".vagrant/machines/cluster01/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/Users/chris/.ansible/cp/ansible-ssh-%h-%p-%r -tt cluster01 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-vzqwuvcglpsfrrzkvwdcupjtukijcwvl; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1486148016.3-32614286575555/uri.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1486148016.3-32614286575555/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [cluster01] => {
"attempts": 1,
"changed": false,
"connection": "close",
"content": "{\"_class\":\"hudson.security.csrf.DefaultCrumbIssuer\",\"crumb\":\"ad67abc734af7eae279df5c68098a29e\",\"crumbRequestField\":\"Jenkins-Crumb\"}",
"content_type": "application/json;charset=UTF-8",
"date": "Fri, 03 Feb 2017 18:53:36 GMT",
"invocation": {
"module_args": {
"backup": null,
"body": null,
"body_format": "raw",
"content": null,
"creates": null,
"delimiter": null,
"dest": null,
"directory_mode": null,
"follow": false,
"follow_redirects": "safe",
"force": false,
"force_basic_auth": true,
"group": null,
"headers": {
"Authorization": "Basic YWRtaW46YWRtaW4="
},
"http_agent": "ansible-httpget",
"method": "GET",
"mode": null,
"owner": null,
"password": "admin",
"regexp": null,
"remote_src": null,
"removes": null,
"return_content": true,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"status_code": [
200
],
"timeout": 30,
"unsafe_writes": null,
"url": "http://cluster01:8080/crumbIssuer/api/json",
"url_password": "admin",
"url_username": "admin",
"use_proxy": true,
"user": "admin",
"validate_certs": true
},
"module_name": "uri"
},
"json": {
"_class": "hudson.security.csrf.DefaultCrumbIssuer",
"crumb": "ad67abc734af7eae279df5c68098a29e",
"crumbRequestField": "Jenkins-Crumb"
},
"msg": "OK (unknown bytes)",
"redirected": false,
"server": "Jetty(9.2.z-SNAPSHOT)",
"status": 200,
"url": "http://cluster01:8080/crumbIssuer/api/json",
"x_content_type_options": "nosniff",
"x_jenkins": "2.32.2",
"x_jenkins_session": "3abb7e45"
}
TASK [jenkins : Plugins are installed] *****************************************
task path: /Users/chris/Projects/ci/roles/jenkins/tasks/install_plugin.yml:15
Using module file /usr/local/Cellar/ansible/2.2.0.0_2/libexec/lib/python2.7/site-packages/ansible/modules/core/network/basics/uri.py
<cluster01> ESTABLISH SSH CONNECTION FOR USER: vagrant
<cluster01> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile=".vagrant/machines/cluster01/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/Users/chris/.ansible/cp/ansible-ssh-%h-%p-%r cluster01 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1486148016.66-148559593691735 `" && echo ansible-tmp-1486148016.66-148559593691735="` echo $HOME/.ansible/tmp/ansible-tmp-1486148016.66-148559593691735 `" ) && sleep 0'"'"''
<cluster01> PUT /var/folders/g5/h48p994d3qn7d9_nz7xv2lvh0000gn/T/tmp1RWIY4 TO /home/vagrant/.ansible/tmp/ansible-tmp-1486148016.66-148559593691735/uri.py
<cluster01> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile=".vagrant/machines/cluster01/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/Users/chris/.ansible/cp/ansible-ssh-%h-%p-%r '[cluster01]'
<cluster01> ESTABLISH SSH CONNECTION FOR USER: vagrant
<cluster01> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile=".vagrant/machines/cluster01/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/Users/chris/.ansible/cp/ansible-ssh-%h-%p-%r cluster01 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1486148016.66-148559593691735/ /home/vagrant/.ansible/tmp/ansible-tmp-1486148016.66-148559593691735/uri.py && sleep 0'"'"''
<cluster01> ESTABLISH SSH CONNECTION FOR USER: vagrant
<cluster01> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile=".vagrant/machines/cluster01/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/Users/chris/.ansible/cp/ansible-ssh-%h-%p-%r -tt cluster01 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-qixxmffrdktqhuyukutskswbxfsaxrdd; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1486148016.66-148559593691735/uri.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1486148016.66-148559593691735/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
fatal: [cluster01]: FAILED! => {
"cache_control": "must-revalidate,no-cache,no-store",
"changed": false,
"connection": "close",
"content": "<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\"/>\n<title>Error 403 No valid crumb was included in the request</title>\n</head>\n<body><h2>HTTP ERROR 403</h2>\n<p>Problem accessing /pluginManager/installNecessaryPlugins. Reason:\n<pre> No valid crumb was included in the request</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>\n\n</body>\n</html>\n",
"content_length": "387",
"content_type": "text/html; charset=ISO-8859-1",
"date": "Fri, 03 Feb 2017 18:53:37 GMT",
"failed": true,
"invocation": {
"module_args": {
"backup": null,
"body": "<jenkins><install plugin=\"git#latest\" /></jenkins>",
"body_format": "raw",
"content": null,
"creates": "/var/jenkins_home/plugins/git",
"delimiter": null,
"dest": null,
"directory_mode": null,
"follow": false,
"follow_redirects": "safe",
"force": false,
"force_basic_auth": false,
"group": null,
"headers": {
"Content-Type": "text/xml",
"Jenkins-Crumb": "ad67abc734af7eae279df5c68098a29e"
},
"http_agent": "ansible-httpget",
"method": "POST",
"mode": null,
"owner": null,
"password": "admin",
"regexp": null,
"remote_src": null,
"removes": null,
"return_content": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"status_code": [
200
],
"timeout": 30,
"unsafe_writes": null,
"url": "http://cluster01:8080/pluginManager/installNecessaryPlugins",
"url_password": "admin",
"url_username": "admin",
"use_proxy": true,
"user": "admin",
"validate_certs": true
},
"module_name": "uri"
},
"msg": "Status code was not [200]: HTTP Error 403: No valid crumb was included in the request",
"redirected": false,
"server": "Jetty(9.2.z-SNAPSHOT)",
"status": 403,
"url": "http://cluster01:8080/pluginManager/installNecessaryPlugins",
"x_content_type_options": "nosniff"
}
I've pushed the whole build (Vagrant and Ansible) to GitHub.
I ran into this as well and found that you need to specify force_basic_auth: True in your uri task that installs the plugins. I see that you have it in the task that registers the crumb_token variable (name: Get Jenkins crumb), so you just need to add it to the name: Plugins are installed task.
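For reference, a sketch of the corrected task from the question, with only force_basic_auth added (everything else unchanged):
- name: Plugins are installed
  uri:
    url: "http://{{ ansible_host }}:8080/pluginManager/installNecessaryPlugins"
    method: POST
    user: admin
    password: "{{ jenkins_admin_password }}"
    force_basic_auth: yes
    body: '<jenkins><install plugin="{{ plugin_name }}#latest" /></jenkins>'
    headers:
      Content-Type: "text/xml"
      Jenkins-Crumb: "{{ crumb_token.json.crumb }}"
    creates: "{{ jenkins_home }}/plugins/{{ plugin_name }}"
  register: plugins_result
  tags: [jenkins]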
