I have an Ansible playbook, run by Jenkins on server A, and all it does is copy a folder from server A to a remote server B.
The code looks like this:
- hosts: "{{ lookup('env', 'REMOTE_HOST') }}"
any_errors_fatal: true # fail all hosts if any host fails
become: true
become_method: sudo
become_user: root
gather_facts: yes
vars:
date_time: "{{ lookup('env', 'DATE_TIME') }}"
version_main: 10.7
sql_dir: "~/code/data/sql/sql_data/{{ version_main }}"
remote_files_directory: "~/deploy/{{ version_main }}/{{ date_time }}/files"
tasks:
- name: "Copy SQL files from {{ sql_dir }} to {{ remote_files_directory }} on the remote host"
become: yes
become_user: user_a
ansible.builtin.copy:
src: "{{ sql_dir }}"
dest: "{{ remote_files_directory }}"
remote_src: true
All the SQL files are on server A, under user_a's home directory: ~/code/data/sql/sql_data/{{ version_main }}, and I want to copy them to server B (REMOTE_HOST) under the same user_a's home: ~/deploy/{{ version_main }}/{{ date_time }}/files
The variables REMOTE_HOST and DATE_TIME come from Jenkins.
The error I am getting with remote_src: true is:
fatal: [server_B]: FAILED! => {"changed": false, "msg": "Source /home/user_a/code/data/sql/sql_data/10.7/ not found"}
If I set remote_src: false, I get this error:
fatal: [server_B]: FAILED! => {"changed": false, "msg": "Could not find or access '~/code/data/sql/sql_data/10.7' on the Ansible Controller. If you are using a module and expect the file to exist on the remote, see the remote_src option"}
I even added delegate_to: "{{ lookup('env', 'DEPLOY_DATAMONTH_HOST') }}", and it does not make any difference.
Somehow it cannot figure out the source folder on server A, which is where Ansible and Jenkins run.
This is the Ansible version I have on server A.
ansible --version
ansible [core 2.13.7]
User jenkins cannot access /home/user_a/code/data/sql/sql_data/10.7/ directly, but jenkins can sudo su - user_a, so I think
become: yes
become_user: user_a
should have helped.
What am I still missing?
I believe that defining this at the play level with the following code block:
become: true
become_method: sudo
become_user: root
takes priority over your attempt to become user_a in the task with the following code block:
become: yes
become_user: user_a
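If so, one way to rule the precedence question out entirely is to drop become from the play level and declare it only on the task that needs user_a. A minimal sketch of that structure (it only addresses the privilege side, not the remote_src semantics discussed above):
- hosts: "{{ lookup('env', 'REMOTE_HOST') }}"
  gather_facts: yes
  vars:
    date_time: "{{ lookup('env', 'DATE_TIME') }}"
    version_main: 10.7
    sql_dir: "~/code/data/sql/sql_data/{{ version_main }}"
    remote_files_directory: "~/deploy/{{ version_main }}/{{ date_time }}/files"
  tasks:
    - name: Copy SQL files as user_a, with no play-level become in the way
      become: true
      become_user: user_a
      ansible.builtin.copy:
        src: "{{ sql_dir }}"
        dest: "{{ remote_files_directory }}"
        remote_src: true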
I want to create a file directly in a container directory.
I created the directory first:
- name: create private in container
  ansible.builtin.file:
    path: playcontainer:/etc/ssl/private/
    state: directory
    mode: 0755
But it doesn't let me create a file in that directory:
- name: openssl key
  openssl_privatekey:
    path: playcontainer:/etc/ssl/private/playkey.key
    size: "{{ key_size }}"
    type: "{{ key_type }}"
What am I missing?
Here is a full, from-scratch example of interacting with a container from Ansible.
Please note that this is not always what you want to do. In this specific case, unless you are testing an Ansible role for example, the key should be written into the image at build time by your Dockerfile, or bind-mounted from the host when the container starts. You should not mess with a container's filesystem once it is started in production.
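For illustration, the bind-mount approach applied to the test container used below would look something like this (the host-side path is hypothetical):
docker run -d --rm --name so_example -v /srv/ssl/private:/etc/ssl/private python:latest sleep infinity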
First we create a container for our test:
docker run -d --rm --name so_example python:latest sleep infinity
Now we need an inventory to target that container (inventories/default/main.yml):
---
all:
  vars:
    ansible_connection: docker
  hosts:
    so_example:
Finally, a test playbook.yml to achieve your goal:
---
- hosts: all
  gather_facts: false
  vars:
    key_path: /etc/ssl/private
    key_size: 4096
    key_type: RSA
  tasks:
    - name: Make sure package requirements are met
      apt:
        name: python3-pip
        state: present

    - name: Make sure python requirements are met
      pip:
        name: cryptography
        state: present

    - name: Create private directory
      file:
        path: "{{ key_path }}"
        state: directory
        owner: root
        group: root
        mode: 0750

    - name: Create a key
      openssl_privatekey:
        path: "{{ key_path }}/playkey.key"
        size: "{{ key_size }}"
        type: "{{ key_type }}"
Running the playbook gives:
$ ansible-playbook -i inventories/default/ playbook.yml
PLAY [all] *****************************************************************************************************************************************************************************************
TASK [Make sure package requirements are met] ******************************************************************************************************************************************************
changed: [so_example]
TASK [Make sure python requirements are met] *******************************************************************************************************************************************************
changed: [so_example]
TASK [Create private directory] ********************************************************************************************************************************************************************
changed: [so_example]
TASK [Create a key] ********************************************************************************************************************************************************************************
changed: [so_example]
PLAY RECAP *****************************************************************************************************************************************************************************************
so_example : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
We can now check that the file is there:
$ docker exec so_example ls -l /etc/ssl/private
total 5
-rw------- 1 root root 3243 Sep 15 13:28 playkey.key
$ docker exec so_example head -2 /etc/ssl/private/playkey.key
-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEA6xrz5kQuXbd59Bq0fqnwJ+dhkcHWCMh4sZO6UNCfodve7JP0
Clean-up:
docker stop so_example
I have an Ansible playbook that creates new users and a docker container for each user. The user information is gathered from a YAML file with more than 200 records, each consisting of a username and a password. To create a docker container I have to give the user's PUID and PGID to the docker run command. See below for an example command:
docker run --restart=always -it --init -td -p {port from ansible}:8443 -e PUID={PUID from ansible} -e PGID={PGID from Ansible} linuxserver/code-server:latest
I want to get the PUID and PGID of a user and register them in a variable to use when creating the docker container. I tried the following tasks, but since the output is appended to a results dictionary, I am not able to match the username with the PUID/PGID.
- name: GET PUID
  shell: id -u "{{ item.username }}"
  loop: "{{ user }}"
  register: puid

- name: pgid Variable
  debug: msg="{{ pgid.stdout }}"
YAML for user
user:
  - username: john.doe
    password: password
The docker image that I want to use: https://hub.docker.com/r/linuxserver/code-server
For example, get the UID and GID of user admin at test_11
shell> ssh admin@test_11 id admin
uid=1001(admin) gid=1001(admin) groups=1001(admin)
Use the getent module. The playbook below
- hosts: test_11
  tasks:
    - getent:
        database: passwd

    - set_fact:
        admin_uid: "{{ ansible_facts.getent_passwd.admin.1 }}"
        admin_gid: "{{ ansible_facts.getent_passwd.admin.2 }}"

    - debug:
        msg: |
          admin_uid: {{ admin_uid }}
          admin_gid: {{ admin_gid }}
gives (abridged)
TASK [debug] ********************************************************
ok: [test_11] =>
  msg: |-
    admin_uid: 1001
    admin_gid: 1001
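To map this onto the 200-user list from the question, the getent facts can be indexed by username inside a loop, so there is no registered-results dictionary to untangle. A minimal sketch, assuming the community.docker collection is installed and the user list shown above (port publishing is left out):
- hosts: test_11
  vars_files:
    - users.yml  # hypothetical file holding the `user` list from the question
  tasks:
    - getent:
        database: passwd

    - name: Run a code-server container per user
      community.docker.docker_container:
        name: "code-server-{{ item.username }}"
        image: linuxserver/code-server:latest
        restart_policy: always
        env:
          PUID: "{{ ansible_facts.getent_passwd[item.username].1 }}"
          PGID: "{{ ansible_facts.getent_passwd[item.username].2 }}"
      loop: "{{ user }}"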
Ansible Experts,
I am trying to create a playbook that retrieves the current configuration of Cisco routers and switches and saves it to a file. I also need the file name to end with the local datetime. I did not see anything like the ansible_date_time fact in the routers'/switches' gather_facts data, so I tried to get this information from the local host where I run Ansible. As it is not included in the host inventory file, I had to add a localhost entry.
The problem I face is that I successfully get date_time from ansible_facts, yet the registered value does not seem to work in the filename. I am guessing it is due to the "when" statement in my tasks, but I am not sure.
TASK [Display Ansible date_time fact and register] *******************************************************************************************
skipping: [LAB_TKYAT07EXTRT03]
skipping: [LAB_TKYAT07EXTRT04]
ok: [localhost] => {
"msg": "2021-06-18"
}
skipping: [LAB_TKYAT07SPN01]
skipping: [LAB_TKYAT07SPN02]
skipping: [LAB_TKYAT07LF01]
skipping: [LAB_TKYAT07LF02]
skipping: [LAB_TKYAT07LF03]
skipping: [LAB_TKYAT07LF04]
My output filename looks like "LAB_TKYAT07LF01{'skip_reason': u'Conditional result was False', 'skipped': True, 'changed': False}_showrunn.txt"
---
- connection: network_cli
  gather_facts: false
  hosts: LAB_ALL
  tasks:
    - name: gather facts from apps
      setup:
        gather_subset:
          - hardware
      when: ansible_connection == 'local'
      tags: linux

    - name: "Display Ansible date_time fact and register"
      debug:
        msg: "{{ ansible_date_time.date }}"
      when: ansible_connection == 'local'
      register: currenttime
      tags: linux

    - name: "retrieve show runn all from Nexus"
      nxos_command:
        commands:
          - command: show runn all
      register: nexus_output
      when: ansible_network_os == 'nxos'

    - name: "Save Nexus output to outputs folder"
      copy:
        content: "{{ nexus_output.stdout[0] }}"
        dest: "./outputs/{{ inventory_hostname }}{{ currenttime }}_showrunn.txt"
      when: ansible_network_os == 'nxos'

    - name: "retrieve show runn all from Routers"
      ios_command:
        commands:
          - command: show runn all
      register: ios_output
      when: ansible_network_os == 'ios'

    - name: "Save IOS output to outputs folder"
      copy:
        content: "{{ ios_output.stdout[0] }}"
        dest: "./outputs/{{ inventory_hostname }}{{ currenttime }}_showrunn.txt"
      when: ansible_network_os == 'ios'
Change your second task to:
- name: "Display Ansible date_time fact and register"
  delegate_to: localhost
  run_once: yes
  set_fact:
    currenttime: "{{ ansible_date_time.date }}"
  tags: linux
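This works because register on a debug task stores the whole task result dictionary (including the skip_reason on skipped hosts, which is exactly what ended up in your filename), while set_fact with run_once stores a plain string that is visible to every host in the play. If relying on localhost facts proves brittle, a minimal alternative sketch is to read the date on the controller with a pipe lookup, which needs no gathered facts at all:
- name: "Record the controller's date for filenames"
  run_once: yes
  set_fact:
    currenttime: "{{ lookup('pipe', 'date +%Y-%m-%d') }}"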
Although you are right: this would not explain why the following tasks are being skipped.
You can try to figure out which value is returned for a given host using the setup module:
ansible -m setup my-nxos-target
Note that the docs mention cisco.nxos.nxos or cisco.ios.ios instead.
See:
https://docs.ansible.com/ansible/latest/network/user_guide/platform_nxos.html#enabling-nx-api ,
https://docs.ansible.com/ansible/latest/network/user_guide/platform_ios.html#example-cli-task
Hi, I am trying to install the CloudKitty service using OpenStack kolla-ansible. During the installation, when it tries to pull the container image from Docker Hub, it throws the error that the container image cloudkitty-api:ussuri was not found. When I check on Docker Hub, the required image is not present there, but the same image is present under the name objectiflibre/cloudkitty-api. What/how should I change in kolla-ansible so that it pulls that specific image for CloudKitty?
I tried to change the cloudkitty defaults/main.yml, but I don't understand what or how I should change in it.
/root/kolla-openstack/share/kolla-ansible/ansible/roles/cloudkitty/defaults/main.yml
####################
# Docker
####################
cloudkitty_install_type: "{{ kolla_install_type }}"
cloudkitty_tag: "{{ openstack_tag }}"
cloudkitty_api_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ cloudkitty_install_type }}-cloudkitty-api"
cloudkitty_api_tag: "{{ cloudkitty_tag }}"
cloudkitty_api_image_full: "{{ cloudkitty_api_image }}:{{ cloudkitty_api_tag }}"
My globals.yml:
###############
# Kolla options
###############
# Valid options are [ COPY_ONCE, COPY_ALWAYS ]
config_strategy: "COPY_ALWAYS"
# Valid options are ['centos', 'debian', 'rhel', 'ubuntu']
kolla_base_distro: "ubuntu"
# Valid options are [ binary, source ]
kolla_install_type: "binary"
# Do not override this unless you know what you are doing.
openstack_release: "ussuri"
# Docker image tag used by default.
#openstack_tag: "{{ openstack_release ~ openstack_tag_suffix }}"
enable_cloudkitty: "yes"
enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
cloudkitty_collector_backend: "gnocchi"
cloudkitty_storage_backend: "influxdb"
I would first try to pull the image to my local repo:
#docker pull objectiflibre/cloudkitty-api
then try to set the correct variables in:
/root/kolla-openstack/share/kolla-ansible/ansible/group_vars/all.yml
docker_registry
docker_namespace
kolla_base_distro
cloudkitty_install_type
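Alternatively, since cloudkitty_api_image_full is assembled from cloudkitty_api_image and cloudkitty_api_tag in the role defaults shown above, a hedged sketch is to override just those two in globals.yml (the tag below is an assumption; check which tags objectiflibre actually publishes):
cloudkitty_api_image: "objectiflibre/cloudkitty-api"
cloudkitty_api_tag: "latest"  # assumption: pick a tag that exists upstream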
I have a problem with the jenkins_plugin module.
Within a playbook that pulls a jenkins docker image (jenkins/jenkins:lts-alpine) and runs it to install and configure the instance, I have a task that installs a list of plugins on the instance:
- name: Install plugins
  jenkins_plugin:
    owner: "{{ jenkins_process_user }}"
    group: "{{ jenkins_process_group }}"
    name: "{{ item }}"
    state: latest
    timeout: 120
    url: "http://{{ jenkins_hostname }}:{{ jenkins_http_port }}{{ jenkins_url_prefix }}"
    url_username: "{{ jenkins_admin_name }}"
    url_password: "{{ jenkins_admin_password }}"
    with_dependencies: yes
  loop: "{{ jenkinsPlugins }}"
  register: pluginResult
  until: not pluginResult.failed
  retries: 5
  notify: restart_image
  become: True
  become_user: "{{ jenkins_process_user }}"
It works correctly when the playbook is run for the first time.
All plugins are installed, and possibly retried in case of problems.
But when I relaunch exactly the same playbook, each and every plugin installation is retried up to the maximum number of retries and fails with (for example):
failed: [devEnv] (item=blueocean) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": "blueocean", "msg": "Jenkins home directory doesn't exist."}
For sure, I have verified that the jenkins home directory actually exists and has the expected "{{ jenkins_process_user }}" and "{{ jenkins_process_group }}" owner and group, which is jenkins:jenkins.
Note that the docker container is bound to a local directory which belongs to jenkins:jenkins. To be sure the uid and gid are the same on the local machine (a VM created with Vagrant) and in the container, the uid:gid are forced to 1001:1001 when starting the container.
I have also checked that this is actually the case.
I really cannot explain why I get this error, which clearly makes this playbook non-idempotent!
I know that there is a way to install plugins via a shell script provided by Jenkins, but I'd like to stick with ansible playbook as far as possible.
For sure, I can give the whole playbook if you need additional information.
Thanks for your help.
J-L
Ok, I understand the problem.
Reading the jenkins_plugin documentation again, and looking at the module code, I found that installing a plugin and checking an already-installed plugin's version do not run the same code (two different branches of a test).
The second path needs **JENKINS_HOME** to be defined, which is an optional module parameter (it defaults to /var/lib/jenkins). I did not set it.
It is indeed /var/lib/jenkins inside the container, but not on the docker controller machine, which is the Ansible playbook's target, where it is /home/jenkins/jenkins_home.
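A minimal sketch of the fix under that assumption: point the module at the actual home on the target with its jenkins_home parameter (abridged to the relevant lines):
- name: Install plugins
  jenkins_plugin:
    jenkins_home: /home/jenkins/jenkins_home  # path on the Ansible target, not inside the container
    owner: "{{ jenkins_process_user }}"
    group: "{{ jenkins_process_group }}"
    name: "{{ item }}"
    state: latest
  loop: "{{ jenkinsPlugins }}"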
So... this question is closed, unless someone has additional information to give. You're welcome!
Best Regards.