How to execute a sudo command via the Publish Over SSH Plugin in Jenkins

For example I have this code in my pipeline:
sshPublisher(
    failOnError: true,
    continueOnError: false,
    publishers: [
        sshPublisherDesc(
            configName: 'some_config',
            verbose: true,
            transfers: [
                sshTransfer(
                    sourceFiles: 'some_path/some_script.sh',
                    remoteDirectory: '/tmp',
                    removePrefix: 'some_path',
                    execCommand: 'sudo cp /tmp/some_script /usr/local/bin/some_script && sudo chmod a+x /usr/local/bin/some_script'
                )
            ]
        )
    ]
)
But when this code executes, I get this error:
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
The SSH config some_config contains a username and an SSH private key.
How can I execute sudo commands?
If I use usePty, the process waits indefinitely for the password.
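A common workaround (a sketch, not from the original thread; it assumes you control the remote host and that the SSH user configured in some_config is named jenkins) is to grant that user passwordless sudo for exactly the commands the transfer runs:

```shell
# /etc/sudoers.d/jenkins-deploy -- edit via `visudo -f /etc/sudoers.d/jenkins-deploy`,
# never directly. Assumption: the SSH user from some_config is "jenkins".
jenkins ALL=(root) NOPASSWD: /bin/cp, /bin/chmod
```

With such a rule in place, sudo stops asking for a password, so neither usePty nor sudo -S is needed.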

Related

state is present but all of the following are missing: source

I have an Ansible playbook to build a Docker image:
---
- hosts: localhost
  tasks:
    - name: build docker image
      docker_image:
        name: bionic
        path: /
        force: yes
      tags: deb
and Dockerfile:
FROM ubuntu:bionic
RUN export DEBIAN_FRONTEND=noninteractive; \
    apt-get -qq update && \
    apt-get -qq install \
        software-properties-common git curl wget openjdk-8-jre-headless debhelper devscripts
WORKDIR /workspace
When I run the following command: ansible-playbook build.yml -vvv
I receive the following exception:
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/devpc/.ansible/tmp/ansible-tmp-1633701673.999151-517949-133730725910177/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_version": "auto",
            "archive_path": null,
            "build": null,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "debug": false,
            "docker_host": "unix://var/run/docker.sock",
            "force": true,
            "force_absent": false,
            "force_source": false,
            "force_tag": false,
            "load_path": null,
            "name": "xroad-deb-bionic",
            "path": "/",
            "pull": null,
            "push": false,
            "repository": null,
            "source": null,
            "ssl_version": null,
            "state": "present",
            "tag": "latest",
            "timeout": 60,
            "tls": false,
            "tls_hostname": null,
            "use_ssh_client": false,
            "validate_certs": false
        }
    },
    "msg": "state is present but all of the following are missing: source"
}
Could you please give me a hint on how to debug this and understand what this error means?
Thanks for your time and consideration.
The error says a source: key must be present in the docker_image: block.
More specifically, most things in Ansible default to state: present. When you request a docker_image to be present, there are a couple of ways to get it (pulling it from a registry, building it from source, unpacking it from a tar file). Which way to do this is specified by the source: control, but Ansible does not have a default value for this.
If you're building an image, you need to specify source: build. Having done that, there are also a set of controls under build:. In particular, the path to the Docker image context (probably not /) goes there and not directly under docker_image:.
This leaves you with something like:
---
- hosts: localhost
  tasks:
    - name: build docker image
      docker_image:
        name: bionic
        source: build                     # <-- add
        build:                            # <-- add
          path: /home/user/src/something  # <-- move under build:
        force: yes
      tags: deb

Jenkins docker_login ansible playbook : Permission denied

I would like to push a Docker image into a Docker registry with Jenkins.
When I execute the Ansible playbook I get:
"msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
I suppose that Ansible runs under the user jenkins, because of this link and because of the log file:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
Because the Ansible playbook tries to do a docker_login, I understand that the user jenkins needs to be able to connect to Docker.
So I added jenkins to the docker group:
I don't understand why the permission is denied.
The whole Jenkins log file:
TASK [Log into Docker registry]
************************************************
task path: /var/jenkins_home/workspace/.../build_docker.yml:8
Using module file /usr/lib/python2.7/dist-
packages/ansible/modules/core/cloud/docker/docker_login.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" && echo ansible-tmp-1543388409.78-179785864196502="` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpFASoHo TO /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/ /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py; rm -rf "/var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "api_version": null,
            "cacert_path": null,
            "cert_path": null,
            "config_path": "~/.docker/config.json",
            "debug": false,
            "docker_host": null,
            "email": null,
            "filter_logger": false,
            "key_path": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "reauthorize": false,
            "registry_url": "https://registry.docker....si",
            "ssl_version": null,
            "timeout": null,
            "tls": null,
            "tls_hostname": null,
            "tls_verify": null,
            "username": "jenkins"
        },
        "module_name": "docker_login"
    },
    "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
}
to retry, use: --limit @/var/jenkins_home/workspace/.../build_docker.retry
The whole Ansible playbook:
---
- hosts: localhost
  vars:
    git_branch: "{{ GIT_BRANCH|default('development') }}"
  tasks:
    - name: Log into Docker registry
      docker_login:
        registry_url: https://registry.docker.....si
        username: ...
        password: ....
If anyone has the same problem, I found the solution.
My registry doesn't have a valid HTTPS certificate, so you need to add
{
  "insecure-registries" : [ "https://registry.docker.....si" ]
}
inside /etc/docker/daemon.json
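For the original Permission denied on the Docker socket, adding jenkins to the docker group is the usual fix, but the new group membership only takes effect in a new login session, so the Jenkins process itself has to be restarted. A hedged sketch (assuming a systemd host where Jenkins runs as the system user jenkins):

```shell
sudo usermod -aG docker jenkins   # add jenkins to the docker group
sudo systemctl restart docker     # also picks up changes to /etc/docker/daemon.json
sudo systemctl restart jenkins    # restart Jenkins so its process gains the new group
```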

Jenkins 2.0: ansiblePlaybook plugin

I would like to execute a Jenkins pipeline:
stage('Deploy watchers') {
    ansiblePlaybook(
        playbook: "watcher-manage.yml",
        extraVars: [
            target: 'dev-dp-manager-1'
        ]
    )
}
This produces ansible-playbook watcher-manage.yml -e target=dev-dp-manager-1.
This execution leads to:
fatal: [dev-dp-manager-1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
According to the documentation I need to add sudo: true to make the ansible command execute with root privileges. If I do so:
stage('Deploy watchers') {
    ansiblePlaybook(
        sudo: true,
        playbook: "watcher-manage.yml",
        extraVars: [
            target: 'dev-dp-manager-1'
        ]
    )
}
This produces ansible-playbook watcher-manage.yml -s -U root -e target=dev-dp-manager-1. Nevertheless I get the same error.
If I try to say sudo ansible-playbook ... my command succeeds.
My question is whether I can achieve the desired execution by using the plugin, or whether I have to write the ansible command by hand?
Thanks!
What worked for me was:
stage('Deploy watchers') {
    sh 'sudo ansible-playbook watcher-manage.yml --extra-vars="target=dev-dp-manager-1"'
}
Since Jenkins' Linux user doesn't need to have access to the SSH keys, simply lifting its permissions for this one command does the job.
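Note that the underlying error is SSH public-key authentication failing, not a sudo problem, so another option (a sketch; 'deploy-key' is a hypothetical credential ID, assuming the SSH private key has been added to Jenkins as a credential) is to hand the key to the plugin directly:

```groovy
stage('Deploy watchers') {
    ansiblePlaybook(
        playbook: 'watcher-manage.yml',
        credentialsId: 'deploy-key',             // hypothetical SSH credential stored in Jenkins
        extraVars: [target: 'dev-dp-manager-1']
    )
}
```

This keeps the keys out of the Jenkins user's home directory and avoids running the whole playbook as root.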

Travis-CI does not add deploy section

I followed the Travis CI documentation for creating multiple deployments and for notifications.
So this is my config: (the end has deploy and notifications)
sudo: required # is required to use docker service in travis
language: node_js
node_js:
  - 'node'
services:
  - docker
before_install:
  - npm install -g yarn --cache-min 999999999
  - "/sbin/start-stop-daemon --start --quiet --pidfile /tmp/custom_xvfb_99.pid --make-pidfile --background --exec /usr/bin/Xvfb -- :99 -ac -screen 0 1280x1024x16"
# Use yarn for faster installs
install:
  - yarn
# Init GUI
before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - sleep 3 # give xvfb some time to start
script:
  - npm run test:single-run
cache:
  yarn: true
  directories:
    - ./node_modules
before_deploy:
  - npm run build:backwards
  - docker --version
  - pip install --user awscli # install aws cli w/o sudo
  - export PATH=$PATH:$HOME/.local/bin # put aws in the path
deploy:
  - provider: script
    script: scripts/deploy.sh ansyn/client-chrome.v.44 $TRAVIS_COMMIT
    on:
      branch: travis
  - provider: script
    script: scripts/deploy.sh ansyn/client $TRAVIS_TAG
    on:
      tags: true
notifications:
  email: false
But this translates to the following (in Travis, "view config") with no deploy and no notifications:
{
    "sudo": "required",
    "language": "node_js",
    "node_js": "node",
    "services": [
        "docker"
    ],
    "before_install": [
        "npm install -g yarn --cache-min 999999999",
        "/sbin/start-stop-daemon --start --quiet --pidfile /tmp/custom_xvfb_99.pid --make-pidfile --background --exec /usr/bin/Xvfb -- :99 -ac -screen 0 1280x1024x16"
    ],
    "install": [
        "yarn"
    ],
    "before_script": [
        "export DISPLAY=:99.0",
        "sh -e /etc/init.d/xvfb start",
        "sleep 3"
    ],
    "script": [
        "npm run test:single-run"
    ],
    "cache": {
        "yarn": true,
        "directories": [
            "./node_modules"
        ]
    },
    "before_deploy": [
        "npm run build:backwards",
        "docker --version",
        "pip install --user awscli",
        "export PATH=$PATH:$HOME/.local/bin"
    ],
    "group": "stable",
    "dist": "trusty",
    "os": "linux"
}
Try changing
script: scripts/deploy.sh ansyn/client $TRAVIS_TAG
to
script: sh -x scripts/deploy.sh ansyn/client $TRAVIS_TAG
This will give a detailed result of whether the script is being executed or not. Also, I looked into the build after those changes. It fails on the below:
Step 4/9 : COPY ./dist /opt/ansyn/app
You need to change your deploy section to
deploy:
  - provider: script
    script: sh -x scripts/deploy.sh ansyn/client-chrome.v.44 $TRAVIS_COMMIT
    skip_cleanup: true
    on:
      branch: travis
  - provider: script
    script: sh -x scripts/deploy.sh ansyn/client $TRAVIS_TAG
    skip_cleanup: true
    on:
      tags: true
So that the dist folder is there during deploy and is not cleaned up.

ansible - cisco IOS and "reload" command

I would like to send the command "reload in " to Cisco IOS, but that specific command needs to be confirmed, like below:
#reload in 30
Reload scheduled in 30 minutes by admin on vty0 (192.168.253.15)
Proceed with reload? [confirm]
It seems like the ios_command module doesn't handle such a case.
My configuration:
tasks:
  - name: do reload in case of "catting off"
    ios_command:
      commands: reload in 30
      commands: y
      provider: "{{ cli }}"
And response from playbook:
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test1.yml:14
<192.168.0.33> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.33> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271 `" && echo ansible-tmp-1476454008.17-103724241654271="` echo $HOME/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271 `" ) && sleep 0'
<192.168.0.33> PUT /tmp/tmpAJiZR2 TO /root/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271/ios_command
<192.168.0.33> EXEC /bin/sh -c 'LANG=pl_PL.UTF-8 LC_ALL=pl_PL.UTF-8 LC_MESSAGES=pl_PL.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271/ios_command; rm -rf "/root/.ansible/tmp/ansible-tmp-1476454008.17-103724241654271/" > /dev/null 2>&1 && sleep 0'
fatal: [192.168.0.33]: FAILED! => {"changed": false, "commands": ["y"], "failed": true, "invocation": {"module_args": {"auth_pass": null, "authorize": false, "commands": ["y"], "host": "192.168.0.33", "interval": 1, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "port": 22, "provider": "{'username': 'admin', 'host': '192.168.0.33', 'password': '********'}", "retries": 10, "ssh_keyfile": null, "timeout": 10, "username": "admin", "waitfor": null}, "module_name": "ios_command"}, "msg": "matched error in response: y\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nsw7.test.lab#"}
How can I handle this?
Updated:
If I try to use the expect module in a YAML file like this:
- name: some tests
  hosts: sw-test
  gather_facts: False
  # connection: local
  tasks:
    - name: do reload in case of "catting off"
      expect:
        command: reload in 30
        responses:
          'Reload scheduled in 30 minutes by admin on vty0 (192.168.253.20)\nProceed with reload? \[confirm\]': y
        echo: yes
But there is a problem with the connection:
root@Kali:/etc/ansible# ansible-playbook test3 -u admin -k -vvvv
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback default of type stdout, v2.0
PLAYBOOK: test3 ****************************************************************
1 plays in test3
PLAY [some tests] **************************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test3:9
<192.168.0.33> ESTABLISH SSH CONNECTION FOR USER: admin
<192.168.0.33> SSH: EXEC sshpass -d12 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.33 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476882070.37-92402455055985 `" && echo ansible-tmp-1476882070.37-92402455055985="` echo $HOME/.ansible/tmp/ansible-tmp-1476882070.37-92402455055985 `" ) && sleep 0'"'"''
<192.168.0.33> PUT /tmp/tmp30wGsF TO "` echo $HOME/.ansible/tmp/ansible-tmp-1476882070.37-92402455055985 `" ) && sleep 0'"/expect
<192.168.0.33> SSH: EXEC sshpass -d12 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.0.33]'
fatal: [192.168.0.33]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
to retry, use: --limit @/etc/ansible/test3.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=1 failed=0
root@Kali:/etc/ansible# ansible-playbook test3 -u admin -k -vvvv -c ssh
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback default of type stdout, v2.0
PLAYBOOK: test3 ****************************************************************
1 plays in test3
PLAY [some tests] **************************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test3:9
<192.168.0.33> ESTABLISH SSH CONNECTION FOR USER: admin
<192.168.0.33> SSH: EXEC sshpass -d12 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.33 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476882145.78-139203779538157 `" && echo ansible-tmp-1476882145.78-139203779538157="` echo $HOME/.ansible/tmp/ansible-tmp-1476882145.78-139203779538157 `" ) && sleep 0'"'"''
<192.168.0.33> PUT /tmp/tmpY5qqyW TO "` echo $HOME/.ansible/tmp/ansible-tmp-1476882145.78-139203779538157 `" ) && sleep 0'"/expect
<192.168.0.33> SSH: EXEC sshpass -d12 sftp -o BatchMode=no -b - -C -vvv -o
ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.0.33]'
fatal: [192.168.0.33]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
to retry, use: --limit @/etc/ansible/test3.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=1 failed=0
root@Kali:/etc/ansible# ansible-playbook test3 -u admin -k -vvvv -c local
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback default of type stdout, v2.0
PLAYBOOK: test3 ****************************************************************
1 plays in test3
PLAY [some tests] **************************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /etc/ansible/test3:9
<192.168.0.33> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.33> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809 `" && echo ansible-tmp-1476882426.62-172601217553809="` echo $HOME/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809 `" ) && sleep 0'
<192.168.0.33> PUT /tmp/tmpdq1pYy TO /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/expect
<192.168.0.33> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/ /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/expect && sleep 0'
<192.168.0.33> EXEC /bin/sh -c 'LANG=pl_PL.UTF-8 LC_ALL=pl_PL.UTF-8 LC_MESSAGES=pl_PL.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/expect; rm -rf "/root/.ansible/tmp/ansible-tmp-1476882426.62-172601217553809/" > /dev/null 2>&1 && sleep 0'
fatal: [192.168.0.33]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"chdir": null, "command": "reload in 30", "creates": null, "echo": true, "removes": null, "responses": {"Reload scheduled in 30 minutes by admin on vty0 (192.168.253.20)\\nProceed with reload? \\[confirm\\]": "y"}, "timeout": 30}, "module_name": "expect"}, "msg": "The command was not found or was not executable: reload."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/etc/ansible/test3.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=0 failed=1
UPDATED
I've installed Ansible 2.3 and tried the following:
tasks:
  - name: do reload in case of "catting off"
    ios_command:
      commands:
        - reload in 30
        - y
      wait_for:
        - result[0] contains "Proceed with reload"
      provider: "{{ cli }}"
But still, I get an error. I think this is because the ios module always waits for a prompt as a response. Additionally, the reload confirmation is accepted as soon as "y" is pressed, without "Enter", so this could be another problem.
$ sudo ansible-playbook test1.yml -vvvv
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: test1.yml ************************************************************
1 plays in test1.yml
PLAY [testowe dzialania] *******************************************************
TASK [do reload in case of "catting off"] **************************************
task path: /home/user1/test1.yml:13
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<192.168.0.33> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.33> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324 `" && echo ansible-tmp-1477557527.56-157304653717324="` echo $HOME/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324 `" ) && sleep 0'
<192.168.0.33> PUT /tmp/tmphf8EWO TO /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ios_command.py
<192.168.0.33> EXEC /bin/sh -c 'chmod u+x /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ios_command.py && sleep 0'
<192.168.0.33> EXEC /bin/sh -c '/usr/bin/python /home/mszczesniak/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/ios_command.py; rm -rf "/home/user1/.ansible/tmp/ansible-tmp-1477557527.56-157304653717324/" > /dev/null 2>&1 && sleep 0'
fatal: [192.168.0.33]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "auth_pass": null,
            "authorize": false,
            "commands": [
                "reload in 30",
                "y"
            ],
            "host": "192.168.0.33",
            "interval": 1,
            "match": "all",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "port": null,
            "provider": {
                "host": "192.168.0.33",
                "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "username": "admin"
            },
            "retries": 10,
            "ssh_keyfile": null,
            "timeout": 10,
            "transport": null,
            "use_ssl": true,
            "username": "admin",
            "validate_certs": true,
            "wait_for": [
                "result[0] contains \"Proceed with reload\""
            ]
        },
        "module_name": "ios_command"
    },
    "msg": "timeout trying to send command: reload in 30\r"
}
to retry, use: --limit @/home/user1/test1.retry
PLAY RECAP *********************************************************************
192.168.0.33 : ok=0 changed=0 unreachable=0 failed=1
Does anyone have any idea how to resolve this problem in Ansible, or is the only way to use a pure Python script or write my own Ansible module?
You can use:
- name: reload device
  ios_command:
    commands:
      - "reload in 1\ny"
    provider: "{{ cli }}"
This will reload the device in 1 minute, and the reload prompt gets accepted. It works well with Ansible because the default IOS prompt comes back (the reload is only triggered 1 minute later).
Regards,
Simon
The commands parameter of the ios_command module expects a YAML-formatted list of commands. However, in the code example provided, the commands parameter is set multiple times, so the later value overrides the earlier one. Try the ios_command task like this:
- name: do reload in case of "catting off"
  ios_command:
    commands:
      - reload in 30
      - y
    provider: "{{ cli }}"
Ansible 2.2 only
You could use something like this:
- name: send reload command inc confirmation
  ios_command:
    commands:
      - reload in 30
      - y
    wait_for:
      - result[0] contains "Proceed with reload"
    provider: "{{ cli }}"
Not tested, but similar to the last example for the ios_command module.
Take care with Ansible 2.2 though: it's not released yet, and new releases of Ansible can have significant regressions.
Ansible 2.0+ includes the expect module, but that requires Python on the remote device, so it won't work on IOS or similar devices.
It appears that the simplest method would be to use the raw module to send raw SSH commands to the device.
This avoids having to use expect and having to play around with the ios_command module.
The raw module will run the commands without caring what responses or prompts the device sends.
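A minimal sketch of that raw-module idea (untested assumption: the inventory host is the switch itself, and logging in over SSH drops straight into the IOS CLI, so the confirmation can be sent inline):

```yaml
- name: reload in 30 minutes, confirming the prompt inline
  raw: "reload in 30\ny"
```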
The below worked for me with ansible-playbook 2.9.0 and Python 3.7. Please note that, on the line with - command, make sure to use double quotes (") instead of single ones ('). And don't forget to put \n at the end of the command.
- name: Reloading switch using ios_command.
  ios_command:
    commands:
      - command: "reload\n"
        prompt: 'Proceed with reload? [confirm]'
        answer: "\r"
I have a similar problem. I need to reload a Cisco device, and then I get the prompts:
save?
[confirm]
How do I answer those correctly?
prompt and answer accept lists, so both prompts can be answered in order (with check_all set so every prompt is matched):
- name: Reloading in 1 min if not online
  cisco.ios.ios_command:
    commands:
      - command: reload in 1
        prompt:
          - 'System configuration has been modified. Save? [yes/no]:'
          - '[confirm]'
        answer:
          - 'n'
          - 'y'
        check_all: True
