Ansible Experts,
I am trying to create a playbook that retrieves the current configuration of Cisco routers and switches and saves it to a file. I also need the local date/time appended to the file name. I did not see anything like the ansible_date_time fact in the routers'/switches' gather_facts data, so I tried to get this information from the local host where I run Ansible. As localhost is not included in the host inventory file, I had to add it.
The problem I face is that I am successfully getting date_time from ansible_facts, but the registered value does not seem to work in the filename. I am guessing it is due to the "when" statement in my tasks, but I am not sure.
TASK [Display Ansible date_time fact and register] *******************************************************************************************
skipping: [LAB_TKYAT07EXTRT03]
skipping: [LAB_TKYAT07EXTRT04]
ok: [localhost] => {
"msg": "2021-06-18"
}
skipping: [LAB_TKYAT07SPN01]
skipping: [LAB_TKYAT07SPN02]
skipping: [LAB_TKYAT07LF01]
skipping: [LAB_TKYAT07LF02]
skipping: [LAB_TKYAT07LF03]
skipping: [LAB_TKYAT07LF04]
My output filename looks like "LAB_TKYAT07LF01{'skip_reason': u'Conditional result was False', 'skipped': True, 'changed': False}_showrunn.txt"
---
-
  connection: network_cli
  gather_facts: false
  hosts: LAB_ALL
  tasks:
    -
      name: gather facts from apps
      setup:
        gather_subset:
          - hardware
      when: ansible_connection == 'local'
      tags: linux
    -
      name: "Display Ansible date_time fact and register"
      debug:
        msg: "{{ ansible_date_time.date }}"
      when: ansible_connection == 'local'
      register: currenttime
      tags: linux
    -
      name: "retrieve show runn all from Nexus"
      nxos_command:
        commands:
          - command: show runn all
      register: nexus_output
      when: ansible_network_os == 'nxos'
    -
      name: "Save Nexus output to outputs folder"
      copy:
        content: "{{ nexus_output.stdout[0] }}"
        dest: "./outputs/{{ inventory_hostname }}{{ currenttime }}_showrunn.txt"
      when: ansible_network_os == 'nxos'
    -
      name: "retrieve show runn all from Routers"
      ios_command:
        commands:
          - command: show runn all
      register: ios_output
      when: ansible_network_os == 'ios'
    -
      name: "Save IOS output to outputs folder"
      copy:
        content: "{{ ios_output.stdout[0] }}"
        dest: "./outputs/{{ inventory_hostname }}{{ currenttime }}_showrunn.txt"
      when: ansible_network_os == 'ios'
A registered variable is set even when the task is skipped (it then contains the skip result, which is exactly what ends up in your filenames), and even on localhost the registered debug result is a dictionary rather than the bare date string. Set a fact once instead and let run_once apply it to all hosts. Change your second task to:
-
  name: "Display Ansible date_time fact and register"
  delegate_to: localhost
  run_once: yes
  set_fact:
    currenttime: "{{ ansible_date_time.date }}"
  tags: linux
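With run_once, the fact set by that task is applied to every host in the play, so the later copy tasks can reference it directly. A minimal sketch of the Nexus save task (the underscore separators are only a readability suggestion, not required):

- name: "Save Nexus output to outputs folder"
  copy:
    content: "{{ nexus_output.stdout[0] }}"
    # currenttime now holds the plain date string set via set_fact above
    dest: "./outputs/{{ inventory_hostname }}_{{ currenttime }}_showrunn.txt"
  when: ansible_network_os == 'nxos'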
Although you are right: this would not explain why the following tasks are being skipped.
You can try to figure out which value is returned for a given host using the setup module:
ansible -m setup my-nxos-target
Note that the docs now mention cisco.nxos.nxos or cisco.ios.ios as the ansible_network_os values instead.
See:
https://docs.ansible.com/ansible/latest/network/user_guide/platform_nxos.html#enabling-nx-api ,
https://docs.ansible.com/ansible/latest/network/user_guide/platform_ios.html#example-cli-task
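If the fact comes back as the fully qualified collection name, the conditions can accept both spellings. A small sketch of just the Nexus command task (the same change would apply to the IOS tasks):

- name: "retrieve show runn all from Nexus"
  nxos_command:
    commands:
      - command: show runn all
  register: nexus_output
  # accept either the short or the fully qualified platform name
  when: ansible_network_os in ['nxos', 'cisco.nxos.nxos']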
Related
I have an Ansible playbook, run by Jenkins on server A, and all it does is copy a folder of files from server A to a remote server B.
The code looks like this:
- hosts: "{{ lookup('env', 'REMOTE_HOST') }}"
  any_errors_fatal: true # fail all hosts if any host fails
  become: true
  become_method: sudo
  become_user: root
  gather_facts: yes
  vars:
    date_time: "{{ lookup('env', 'DATE_TIME') }}"
    version_main: 10.7
    sql_dir: "~/code/data/sql/sql_data/{{ version_main }}"
    remote_files_directory: "~/deploy/{{ version_main }}/{{ date_time }}/files"
  tasks:
    - name: "Copy SQL files from {{ sql_dir }} to {{ remote_files_directory }} on the remote host"
      become: yes
      become_user: user_a
      ansible.builtin.copy:
        src: "{{ sql_dir }}"
        dest: "{{ remote_files_directory }}"
        remote_src: true
All the SQL files are on server A, under user_a's home directory: ~/code/data/sql/sql_data/{{ version_main }}, and I want to copy them to server B (REMOTE_HOST) under the same user_a home: ~/deploy/{{ version_main }}/{{ date_time }}/files.
The variables REMOTE_HOST and DATE_TIME come from Jenkins.
The error I am getting with remote_src: true is:
fatal: [server_B]: FAILED! => {"changed": false, "msg": "Source /home/user_a/code/data/sql/sql_data/10.7/ not found"}
If I set remote_src: false, I get this error:
fatal: [server_B]: FAILED! => {"changed": false, "msg": "Could not find or access '~/code/data/sql/sql_data/10.7' on the Ansible Controller. If you are using a module and expect the file to exist on the remote, see the remote_src option"}
I even added delegate_to: "{{ lookup('env', 'DEPLOY_DATAMONTH_HOST') }}" and it does not make any difference.
Somehow it cannot find the source folder on server A, which is where Ansible and Jenkins run.
This is the Ansible version I have on server A.
ansible --version
ansible [core 2.13.7]
User jenkins cannot access /home/user_a/code/data/sql/sql_data/10.7/ directly, but jenkins can sudo su - user_a, so I thought that
become: yes
become_user: user_a
should have helped.
What am I still missing?
I believe that defining this at the play level with the following code block:
become: true
become_method: sudo
become_user: root
takes priority over your attempt to become user_a in the task with the following code block:
become: yes
become_user: user_a
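One way to test that theory is to drop the play-level escalation to root and keep become only on the copy task. A minimal sketch of just that change, reusing the question's own variables (the copy parameters are left untouched, so the remote_src question is not addressed here):

- hosts: "{{ lookup('env', 'REMOTE_HOST') }}"
  any_errors_fatal: true
  gather_facts: yes
  vars:
    date_time: "{{ lookup('env', 'DATE_TIME') }}"
    version_main: 10.7
    sql_dir: "~/code/data/sql/sql_data/{{ version_main }}"
    remote_files_directory: "~/deploy/{{ version_main }}/{{ date_time }}/files"
  tasks:
    - name: "Copy SQL files as user_a"
      # escalation only where it is needed, so nothing at play level can override it
      become: true
      become_method: sudo
      become_user: user_a
      ansible.builtin.copy:
        src: "{{ sql_dir }}"
        dest: "{{ remote_files_directory }}"
        remote_src: true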
I am having trouble checking with Ansible whether a docker swarm worker node has already joined a swarm.
- name: Check if Worker has already joined
  shell: docker node ls
  register: swarm_status
  ignore_errors: true

- name: Join Swarm
  shell: docker swarm join --token {{ hostvars[groups['leader'][0]]['worker_token']['stdout'] }} {{ hostvars[groups['leader'][0]]['ec2_public_ip']['stdout'] }}:2377
  when: swarm_status.rc != 0
  run_once: true
This doesn't work, as swarm_status always ends up in error: docker node ls only works on manager nodes, so a worker cannot inspect itself.
Thanks.
Edit: You can check from a manager node with docker_node_info. Debug the returned JSON to find the information you need:
- name: Docker Node Info
  docker_node_info:
    name: worker
  register: worker_status

- name: Debug
  debug:
    msg: "{{ worker_status }}"
Next, filter the result with the json_query filter, which uses JMESPath:
- name:
  debug:
    msg: "{{ worker_status | json_query('nodes[*].Spec.Role') }}"
Output:
worker
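Tying it back to the original question, the join task could then key off the node list seen by the manager instead of the exit code of docker node ls. A hedged sketch, assuming a 'leader' group as in the question, that facts are gathered so ansible_hostname matches the swarm node hostname, and that jmespath is installed on the controller:

- name: Get swarm node list from the manager
  docker_node_info:
  # run the inspection on the manager, since workers cannot view cluster state
  delegate_to: "{{ groups['leader'][0] }}"
  register: swarm_nodes

- name: Join Swarm
  shell: docker swarm join --token {{ hostvars[groups['leader'][0]]['worker_token']['stdout'] }} {{ hostvars[groups['leader'][0]]['ec2_public_ip']['stdout'] }}:2377
  # only join if this host's hostname is not already known to the swarm
  when: ansible_hostname not in (swarm_nodes | json_query('nodes[*].Description.Hostname'))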
I'm trying to get Ansible to recreate an existing docker container if one of the mounted files has changed. I tried to use docker_container to remove the container, if it exists and any file has changed, before I deploy it using docker_stack and a compose file. Here is the code:
- name: Template configuration files to destination
  template:
    ...
  loop:
    ...
  register: template_files_result

- name: Get docker container name
  shell: "docker ps -f 'name=some_name' -q"
  register: container_id

- name: Remove container
  docker_container:
    name: container_id.stdout
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any

- name: Deploy
  docker_stack:
    state: present
    name: stack_name
    compose:
      - "compose.yml"
    with_registry_auth: true
However, the Remove container task never does anything and I can't figure out why.
What am I missing?
I have a problem with the jenkins_plugin module.
Within a playbook that pulls a Jenkins docker image (jenkins/jenkins:lts-alpine) and runs it to install and configure the instance, I have a task that installs a list of plugins on that instance:
- name: Install plugins
  jenkins_plugin:
    owner: "{{ jenkins_process_user }}"
    group: "{{ jenkins_process_group }}"
    name: "{{ item }}"
    state: latest
    timeout: 120
    url: "http://{{ jenkins_hostname }}:{{ jenkins_http_port }}{{ jenkins_url_prefix }}"
    url_username: "{{ jenkins_admin_name }}"
    url_password: "{{ jenkins_admin_password }}"
    with_dependencies: yes
  loop: "{{ jenkinsPlugins }}"
  register: pluginResult
  until: not pluginResult.failed
  retries: 5
  notify: restart_image
  become: True
  become_user: "{{ jenkins_process_user }}"
It works correctly when the playbook is run for the first time: all plugins are installed, with retries in case of problems.
But when I relaunch exactly the same playbook, each and every plugin installation is retried up to the maximum number of retries and fails with (for example):
failed: [devEnv] (item=blueocean) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": "blueocean", "msg": "Jenkins home directory doesn't exist."}
I have of course verified that the Jenkins home directory actually exists and has the expected "{{ jenkins_process_user }}" and "{{ jenkins_process_group }}" owner and group, which is jenkins:jenkins.
Note that the docker container is bound to a local directory that belongs to jenkins:jenkins. To be sure the uid and gid are the same on the local machine (a VM created with Vagrant) and in the container, the uid:gid are forced to 1001:1001 when starting the container.
I have also checked that this is actually the case.
I really cannot explain why I get this error, which clearly makes this playbook non-idempotent!
I know that there is a way to install plugins via a shell script provided by Jenkins, but I'd like to stick with the Ansible playbook as far as possible.
Of course, I can provide the whole playbook if you need additional information.
Thanks for your help.
J-L
Ok, I understand the problem now.
Reading the jenkins_plugin documentation again, and looking at the module code, I found that installing a plugin and checking the version of an already installed plugin do not run the same code (two different branches of a test).
The second branch needs **JENKINS_HOME** to be defined, which is an optional module parameter (defaulting to /var/lib/jenkins). I did not set it.
It is indeed /var/lib/jenkins inside the container, but not on the docker controller machine, which is the ansible playbook target, where it is /home/jenkins/jenkins_home.
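In other words, pointing the module at the right home directory should make reruns idempotent. A minimal sketch of the change, assuming the module's jenkins_home parameter and the /home/jenkins/jenkins_home path on the target (only the relevant lines shown):

- name: Install plugins
  jenkins_plugin:
    owner: "{{ jenkins_process_user }}"
    group: "{{ jenkins_process_group }}"
    name: "{{ item }}"
    state: latest
    # path as seen from the playbook target, not the path inside the container (assumed bind mount)
    jenkins_home: /home/jenkins/jenkins_home
    url: "http://{{ jenkins_hostname }}:{{ jenkins_http_port }}{{ jenkins_url_prefix }}"
    url_username: "{{ jenkins_admin_name }}"
    url_password: "{{ jenkins_admin_password }}"
    with_dependencies: yes
  loop: "{{ jenkinsPlugins }}"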
So... this question is closed, unless someone has additional information to add. You're welcome!
Best Regards.
My k8s install with kubespray always bails out with the following error:
"Too many nameservers. You can relax this check by set docker_dns_servers_strict=no and we will only use the first 3"
In my cluster.yml I have this under - hosts:
- docker_dns_servers_strict: no
but I still get the error.
What am I missing?
For me, it worked after adding -e 'docker_dns_servers_strict=no':
ansible-playbook -i ../inventories/kubernetes.yaml --become --become-user=root cluster.yml -e 'docker_dns_servers_strict=no'
As explained here, check the format of your yaml file.
Here is one example:
- hosts: k8s-cluster:etcd:calico-rr
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  vars:
    - docker_dns_servers_strict: no
  roles:
    - { role: kubespray-defaults }
    - { role: kernel-upgrade, tags: kernel-upgrade, when: kernel_upgrade is defined and kernel_upgrade }
    - { role: kubernetes/preinstall, tags: preinstall }
    - { role: docker, tags: docker }
    - role: rkt
      tags: rkt
      when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
As mentioned in this issue:
This usually happens if you configure one set of dns servers on the servers before you run the kubespray role.
In my case I added docker_dns_servers_strict: false in the all.yaml file. That solved my problem.
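For reference, that is just a one-line group variable. A minimal sketch, assuming the usual kubespray inventory layout (adjust the path to your own inventory):

# inventory/mycluster/group_vars/all/all.yml  (path assumed)
docker_dns_servers_strict: false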
The following worked for my installation, trimming the search domains to a maximum of 6. I added it in roles/container-engine/docker/tasks/set_facts_dns.yml, just below the task that trims the nameservers:

- name: rtrim number of search domains to 6
  set_fact:
    docker_dns_search_domains: "{{ docker_dns_search_domains[0:6] }}"
  when: docker_dns_search_domains | length > 6
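If the failing check is really about the nameserver count rather than the search domains, the same pattern could be applied to the nameserver list. A hedged sketch, assuming the fact is named docker_dns_servers as the error message suggests:

- name: rtrim number of nameservers to 3
  set_fact:
    # keep only the first 3 entries, matching docker's limit mentioned in the error
    docker_dns_servers: "{{ docker_dns_servers[0:3] }}"
  when: docker_dns_servers | length > 3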